This is gnuastro.info, produced by makeinfo version 7.1 from
gnuastro.texi.

This book documents version 0.22 of the GNU Astronomy Utilities
(Gnuastro).  Gnuastro provides various programs and libraries for
astronomical data manipulation and analysis.

   Copyright © 2015-2024 Free Software Foundation, Inc.

     Permission is granted to copy, distribute and/or modify this
     document under the terms of the GNU Free Documentation License,
     Version 1.3 or any later version published by the Free Software
     Foundation; with no Invariant Sections, no Front-Cover Texts, and
     no Back-Cover Texts.  A copy of the license is included in the
     section entitled "GNU Free Documentation License".
INFO-DIR-SECTION Astronomy
START-INFO-DIR-ENTRY
* Gnuastro: (gnuastro).       GNU Astronomy Utilities.
* libgnuastro: (gnuastro)Gnuastro library. Full Gnuastro library doc.

* help-gnuastro: (gnuastro)help-gnuastro mailing list. Getting help.

* bug-gnuastro: (gnuastro)Report a bug. How to report bugs.

* Arithmetic: (gnuastro)Arithmetic. Arithmetic operations on pixels.
* astarithmetic: (gnuastro)Invoking astarithmetic. Options to Arithmetic.

* BuildProgram: (gnuastro)BuildProgram. Compile and run programs using Gnuastro's library.
* astbuildprog: (gnuastro)Invoking astbuildprog. Options to BuildProgram.

* ConvertType: (gnuastro)ConvertType. Convert different file types.
* astconvertt: (gnuastro)Invoking astconvertt. Options to ConvertType.

* Convolve: (gnuastro)Convolve. Convolve an input file with kernel.
* astconvolve: (gnuastro)Invoking astconvolve. Options to Convolve.

* CosmicCalculator: (gnuastro)CosmicCalculator. For cosmological params.
* astcosmiccal: (gnuastro)Invoking astcosmiccal. Options to CosmicCalculator.

* Crop: (gnuastro)Crop. Crop region(s) from image(s).
* astcrop: (gnuastro)Invoking astcrop. Options to Crop.

* Fits: (gnuastro)Fits. View and manipulate FITS extensions and keywords.
* astfits: (gnuastro)Invoking astfits. Options to Fits.

* MakeCatalog: (gnuastro)MakeCatalog. Make a catalog from labeled image.
* astmkcatalog: (gnuastro)Invoking astmkcatalog. Options to MakeCatalog.

* MakeProfiles: (gnuastro)MakeProfiles. Make mock profiles.
* astmkprof: (gnuastro)Invoking astmkprof. Options to MakeProfiles.

* Match: (gnuastro)Match. Match two separate catalogs.
* astmatch: (gnuastro)Invoking astmatch. Options to Match.

* NoiseChisel: (gnuastro)NoiseChisel. Detect signal in noise.
* astnoisechisel: (gnuastro)Invoking astnoisechisel. Options to NoiseChisel.

* Segment: (gnuastro)Segment. Segment detections based on signal structure.
* astsegment: (gnuastro)Invoking astsegment. Options to Segment.

* Query: (gnuastro)Query. Access remote databases for downloading data.
* astquery: (gnuastro)Invoking astquery. Options to Query.

* Statistics: (gnuastro)Statistics. Get image Statistics.
* aststatistics: (gnuastro)Invoking aststatistics. Options to Statistics.

* Table: (gnuastro)Table. Read and write FITS binary or ASCII tables.
* asttable: (gnuastro)Invoking asttable. Options to Table.

* Warp: (gnuastro)Warp. Warp a dataset to a new grid.
* astwarp: (gnuastro)Invoking astwarp. Options to Warp.

* astscript: (gnuastro)Installed scripts. Gnuastro's installed scripts.
* astscript-ds9-region: (gnuastro)Invoking astscript-ds9-region. Options to this script
* astscript-fits-view: (gnuastro)Invoking astscript-fits-view. Options to this script
* astscript-pointing-simulate: (gnuastro)Invoking astscript-pointing-simulate. Options to this script
* astscript-psf-scale-factor: (gnuastro)Invoking astscript-psf-scale-factor. Options to this script
* astscript-psf-select-stars: (gnuastro)Invoking astscript-psf-select-stars. Options to this script
* astscript-psf-stamp: (gnuastro)Invoking astscript-psf-stamp. Options to this script
* astscript-psf-subtract: (gnuastro)Invoking astscript-psf-subtract. Options to this script
* astscript-psf-unite: (gnuastro)Invoking astscript-psf-unite. Options to this script
* astscript-radial-profile: (gnuastro)Invoking astscript-radial-profile. Options to this script
* astscript-sort-by-night: (gnuastro)Invoking astscript-sort-by-night. Options to this script
* astscript-zeropoint: (gnuastro)Invoking astscript-zeropoint. Options to this script
END-INFO-DIR-ENTRY


File: gnuastro.info,  Node: MakeProfiles log file,  Prev: MakeProfiles output dataset,  Up: Invoking astmkprof

8.1.4.4 MakeProfiles log file
.............................

Besides the final merged dataset of all the profiles, or the individual
datasets (see *note MakeProfiles output dataset::), if the ‘--log’
option is called, MakeProfiles will also create a log file in the
current directory (where you run MakeProfiles).  See *note Common
options:: for a full description of ‘--log’ and other options that are
shared between all Gnuastro programs.  The values for each column are
explained in the first few commented lines of the log file (starting
with the ‘#’ character).  Here is a more complete description.

   • An ID (row number of profile in input catalog).

   • The total magnitude of the profile in the output dataset.  When the
     profile does not completely overlap with the output dataset, this
     will be different from your input magnitude.

   • The number of pixels (in the oversampled image) which used Monte
     Carlo integration and not the central pixel value, see *note
     Sampling from a function::.

   • The fraction of flux in the Monte Carlo integrated pixels.

   • If an individual image was created, this column will have a value
     of ‘1’, otherwise it will have a value of ‘0’.
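
   For example, a minimal call that also produces this log file might
look like the following (‘cat.txt’ is a hypothetical catalog of profile
parameters; any other MakeProfiles options can be added as usual):

     $ astmkprof cat.txt --log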


File: gnuastro.info,  Node: High-level calculations,  Next: Installed scripts,  Prev: Data modeling,  Up: Top

9 High-level calculations
*************************

After the reduction of raw data (for example, with the programs in *note
Data manipulation::) you will have reduced images/data ready for
processing/analyzing (for example, with the programs in *note Data
analysis::).  But the processed/analyzed data (or catalogs) are still
not enough to derive any scientific result.  Even higher-level analysis
is still needed to convert the observed magnitudes, sizes or volumes
into the physical quantities that we associate with each catalog entry
or detected object; this is the purpose of the tools in this section.

* Menu:

* CosmicCalculator::            Calculate cosmological variables


File: gnuastro.info,  Node: CosmicCalculator,  Prev: High-level calculations,  Up: High-level calculations

9.1 CosmicCalculator
====================

To derive higher-level information regarding our sources in
extra-galactic astronomy, cosmological calculations are necessary.  In
Gnuastro, CosmicCalculator is in charge of such calculations.  Before
discussing how CosmicCalculator is called and operates (in *note
Invoking astcosmiccal::), it is important to provide a rough but mostly
self-sufficient review of the basics and the equations used in the
analysis.  In *note Distance on a 2D curved space:: the basic idea of
understanding distances in a curved and expanding 2D universe (which we
can visualize) is reviewed.  Having solidified the concepts there, in
*note Extending distance concepts to 3D::, the formalism is extended to
the 3D universe we are trying to study in our research.

   The focus here is obtaining a physical insight into these equations
(mainly for use in real observational studies).  There are many books
that thoroughly derive and prove all the equations, with all possible
initial conditions and assumptions, for any abstract universe;
interested readers can study those books.

* Menu:

* Distance on a 2D curved space::  Distances in 2D for simplicity.
* Extending distance concepts to 3D::  Going to 3D (our real universe).
* Invoking astcosmiccal::       How to run CosmicCalculator.


File: gnuastro.info,  Node: Distance on a 2D curved space,  Next: Extending distance concepts to 3D,  Prev: CosmicCalculator,  Up: CosmicCalculator

9.1.1 Distance on a 2D curved space
-----------------------------------

The observations to date (for example, the Planck 2015 results) have
not measured(1) the presence of significant curvature in the universe.
However, to be generic (and to allow its measurement if it does in fact
exist), it is very important to create a framework that allows non-zero
uniform curvature.  That said, this section is not intended to be a
fully thorough and mathematically complete derivation of these concepts.
There are many references available for such reviews that go deep into
the abstract mathematical proofs.  The emphasis here is on visualization
of the concepts for a beginner.

   As 3D beings, it is difficult for us to mentally create (visualize) a
picture of the curvature of a 3D volume.  Hence, here we will assume a
2D surface/space and discuss distances on that 2D surface when it is
flat and when it is curved.  Once the concepts have been
created/visualized here, we will extend them, in *note Extending
distance concepts to 3D::, to a real 3D spatial _slice_ of the Universe
we live in and hope to study.

   To make the discussion more tangible (and actively discuss from an
observer's point of view), let's assume there is an imaginary 2D
creature living on the 2D space (which _might_ be curved in 3D).  Here,
we will be working with this creature in its efforts to analyze
distances in its 2D universe.
The start of the analysis might seem too mundane, but since it is
difficult to imagine a 3D curved space, it is important to review all
the very basic concepts thoroughly for an easy transition to a universe
that is more difficult to visualize (a curved 3D space embedded in 4D).

   To start, let's assume a static (not expanding or shrinking), flat 2D
surface similar to *note Figure 9.1: flatplane. and that the 2D creature
is observing its universe from point $A$.  One of the most basic ways to
parameterize this space is through the Cartesian coordinates ($x$, $y$).
In *note Figure 9.1: flatplane, the basic axes of these two coordinates
are plotted.  An infinitesimal change in the direction of each axis is
written as $dx$ and $dy$.  For each point, the infinitesimal changes are
parallel with the respective axes and are not shown for clarity.
Another very useful way of parameterizing this space is through polar
coordinates.  For each point, we define a radius ($r$) and angle
($\phi$) from a fixed (but arbitrary) reference axis.  In *note Figure
9.1: flatplane. the infinitesimal changes for each polar coordinate are
plotted for a random point and a dashed circle is shown for all points
with the same radius.

[image src="gnuastro-figures/flatplane.png" text="../gnuastro-figures//flatplane.eps"]

Figure 9.1: Two dimensional Cartesian and polar coordinates on a flat
plane.

   Assuming an object is placed at a certain position, which can be
parameterized as $(x,y)$, or $(r,\phi)$, a general infinitesimal change
in its position will place it in the coordinates $(x+dx,y+dy)$, or
$(r+dr,\phi+d\phi)$.  The distance (on the flat 2D surface) that is
covered by this infinitesimal change in the static universe ($ds_s$, the
subscript signifies the static nature of this universe) can be written
as:

                 $$ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2$$

   The main question is this: how can the 2D creature incorporate the
(possible) curvature in its universe when it's calculating distances?
The universe that it lives in might equally be a curved surface like
*note Figure 9.2: sphereandplane.  The answer to this same question, but
for a 3D being (us), is the whole purpose of this discussion.  Here, we
want to give the 2D creature (and later, ourselves) the tools to measure
distances if the space (that hosts the objects) is curved.

   *note Figure 9.2: sphereandplane. assumes a spherical shell with
radius $R$ as the curved 2D plane for simplicity.  The 2D plane is
tangent to the spherical shell and only touches it at $A$.  This idea
will be generalized later.  The first step in measuring the distance in
a curved space is to imagine a third dimension along the $z$ axis as
shown in *note Figure 9.2: sphereandplane.  For simplicity, the $z$ axis
is assumed to pass through the center of the spherical shell.  Our
imaginary 2D creature cannot visualize the third dimension or a curved
2D surface within it, so the remainder of this discussion is purely
abstract for it (similar to us having difficulty in visualizing a 3D
curved space in 4D). But since we are 3D creatures, we have the
advantage of visualizing the following steps.  Fortunately the 2D
creature is already familiar with our mathematical constructs, so it can
follow our reasoning.

   With the third axis added, a generic infinitesimal change over _the
full_ 3D space corresponds to the distance:

            $$ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.$$

[image src="gnuastro-figures/sphereandplane.png" text="../gnuastro-figures//sphereandplane.eps"]

Figure 9.2: 2D spherical shell (centered on $O$) and flat plane (light
gray) tangent to it at point $A$.

   It is very important to recognize that this change of distance is for
_any_ point in the 3D space, not just those changes that occur on the 2D
spherical shell of *note Figure 9.2: sphereandplane.  Recall that our 2D
friend can only do measurements on the 2D surfaces, not the full 3D
space.  So we have to constrain this general change to any change on the
2D spherical shell.  To do that, let's look at the arbitrary point $P$
on the 2D spherical shell.  Its image ($P'$) on the flat plane is also
displayed.  From the dark gray triangle, we see that

        $$\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.$$

These relations allow the 2D creature to find the value of $z$ (an
abstract dimension for it) as a function of $r$ (distance on the flat 2D
plane, which it can visualize) and thus eliminate $z$.
   From $\sin^2\theta+\cos^2\theta=1$, we get $z^2-2Rz+r^2=0$ and
solving for $z$, we find:

           $$z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).$$
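
   In case the intermediate step is not immediately clear: substituting
the two relations above into $\sin^2\theta+\cos^2\theta=1$ expands to

      $${r^2\over R^2}+{(R-z)^2\over R^2}=1
           \quad\rightarrow\quad r^2+(R^2-2Rz+z^2)=R^2,$$

which simplifies to the quadratic $z^2-2Rz+r^2=0$ that was solved for
$z$ above.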

   The $\pm$ can be understood from *note Figure 9.2: sphereandplane.:
For each $r$, there are two points on the sphere, one in the upper
hemisphere and one in the lower hemisphere.  An infinitesimal change in
$r$, will create the following infinitesimal change in $z$:

       $$dz={\mp r\over R}\left(1\over\sqrt{1-{r^2/R^2}}\right)dr.$$

Substituting the positive-signed version of this relation for $dz$ in
the $ds_s^2$ equation above, we get:

             $$ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.$$
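
   In more detail: squaring the $dz$ relation gives
$dz^2={r^2dr^2/(R^2-r^2)}$, so adding it to $dr^2$ produces the first
term above:

    $$dr^2+dz^2=dr^2\left(1+{r^2\over R^2-r^2}\right)
           =dr^2\left({R^2\over R^2-r^2}\right)={dr^2\over 1-r^2/R^2}.$$

The $r^2d\phi^2$ term is not affected by the substitution.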

   The derivation above was done for a spherical shell of radius $R$ as
a curved 2D surface.  To generalize it to any surface, we can define
$K=1/R^2$ as the curvature parameter.  Then the general infinitesimal
change in a static universe can be written as:

               $$ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.$$

   Therefore, when $K>0$ (and curvature is the same everywhere), we have
a finite universe, where $r$ cannot become larger than $R$ as in *note
Figure 9.2: sphereandplane.  When $K=0$, we have a flat plane (*note
Figure 9.1: flatplane.) and a negative $K$ will correspond to an
imaginary $R$.  The latter two cases may be infinite in area (which is
not a simple concept, but mathematically can be modeled with $r$
extending infinitely), or finite in area (for example, a cylinder is
flat everywhere, with $ds_s^2={dx^2 + dy^2}$, but is finite in one
direction).

   A very important issue that can be discussed now (while we are still
in 2D and can actually visualize things) is that $\overrightarrow{r}$ is
tangent to the curved space at the observer's position.  In other words,
it is on the gray flat surface of *note Figure 9.2: sphereandplane, even
when the universe is curved: $\overrightarrow{r}=P'-A$.  Therefore, for
the point $P$ on a curved space, the raw coordinate $r$ is the distance
to $P'$, not $P$.  The distance to the point $P$ (at a specific
coordinate $r$ on the flat plane) over the curved surface (thick line in
*note Figure 9.2: sphereandplane.) is called the _proper distance_ and
is displayed with $l$.  For the specific example of *note Figure 9.2:
sphereandplane, the proper distance can be calculated with: $l=R\theta$
($\theta$ is in radians).  Using the $\sin\theta$ relation found above,
we can find $l$ as a function of $r$:

    $$\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
               l(r)=R\sin^{-1}\left({r\over R}\right)$$

   $R$ is just an arbitrary constant and can be directly found from $K$,
so for cleaner equations, it is common practice to set $R=1$, which
gives: $l(r)=\sin^{-1}r$.  Also note that when $R=1$, then $l=\theta$.
Generally, depending on the curvature, in a _static_ universe the proper
distance can be written as a function of the coordinate $r$ as (from now
on we are assuming $R=1$):

               $$l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
     l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).$$

With $l$, the infinitesimal change of distance can be written in a
simpler and more abstract form of

                      $$ds_s^2=dl^2+r^2d\phi^2.$$

   Until now, we had assumed a static universe (not changing with time).
But our observations so far appear to indicate that the universe is
expanding (it is not static).  Since there is no reason to expect the
observed expansion is unique to our particular position in the universe,
we expect the universe to be expanding at all points with the same rate
at the same time.  Therefore, to add a time dependence to our distance
measurements, we can include a multiplicative scaling factor, which is a
function of time: $a(t)$.  The functional form of $a(t)$ comes from the
assumed cosmology: the physics we adopt for it (general relativity) and
the choice of whether the universe is uniform ('homogeneous') in density
and curvature or inhomogeneous.  In this section, the functional form of
$a(t)$ is irrelevant, so we can avoid these issues.

   With this scaling factor, the proper distance will also depend on
time.  As the universe expands, the distance between two given points
will shift to larger values.  We thus define a distance measure, or
coordinate, that is independent of time and thus does not 'move'.  We
call it the _comoving distance_ and display it with $\chi$ such that:
$l(r,t)=\chi(r)a(t)$.  We have therefore shifted the $r$ dependence of
the proper distance we derived above for a static universe to the
comoving distance:

              $$\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
   \chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).$$

   Therefore, $\chi(r)$ is the proper distance to an object at a
specific reference time: $t=t_r$ (the $r$ subscript signifies
"reference") when $a(t_r)=1$.  At any arbitrary moment ($t\neq{t_r}$)
before or after $t_r$, the proper distance to the object can be scaled
with $a(t)$.

   Measuring the change of distance in a time-dependent (expanding)
universe only makes sense if we can add up space and time(2).  But we
can only add bits of space and time together if we measure them in the
same units: with a conversion constant (similar to how 1000 is used to
convert a kilometer into meters).  Experimentally, we find strong
support for the hypothesis that this conversion constant is the speed of
light (or gravitational waves(3)) in a vacuum.  This speed is postulated
to be constant(4) and is almost always written as $c$.  We can thus
parameterize the change in distance on an expanding 2D surface as

  $$ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).$$

   ---------- Footnotes ----------

   (1) The observations are interpreted under the assumption of uniform
curvature.  For a relativistic alternative to dark energy (and maybe
also some part of dark matter), non-uniform curvature may even be
more critical, but that is beyond the scope of this brief explanation.

   (2) In other words, making our space-time consistent with Minkowski
space-time geometry.  In this geometry, different observers at a given
point (event) in space-time split up space-time into 'space' and 'time'
in different ways, just like people at the same spatial position can
make different choices of splitting up a map into 'left-right' and
'up-down'.  This model is well supported by twentieth and twenty-first
century observations.

   (3) The speed of gravitational waves was recently found to be very
similar to that of light in vacuum, see LIGO Collaboration 2017
(https://arxiv.org/abs/1710.05834).

   (4) In _natural units_, speed is measured in units of the speed of
light in vacuum.


File: gnuastro.info,  Node: Extending distance concepts to 3D,  Next: Invoking astcosmiccal,  Prev: Distance on a 2D curved space,  Up: CosmicCalculator

9.1.2 Extending distance concepts to 3D
---------------------------------------

The concepts of *note Distance on a 2D curved space:: are here extended
to a 3D space that _might_ be curved.  We can start with the generic
infinitesimal distance in a static 3D universe, but this time in
spherical coordinates instead of polar coordinates.  $\theta$ is shown
in *note Figure 9.2: sphereandplane, but here we are 3D beings,
positioned on $O$ (the center of the sphere) and the point $O$ is
tangent to a 4D-sphere.  In our 3D space, a generic infinitesimal
displacement will correspond to the following distance in spherical
coordinates:

 $$ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).$$

   Like the 2D creature before, we now have to assume an abstract
dimension which we cannot visualize easily.  Let's call the fourth
dimension $w$; then the general change in coordinates in the _full_ four
dimensional space will be:

      $$ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.$$

But we can only work on a 3D curved space, so following exactly the same
steps and conventions as our 2D friend, we arrive at:

  $$ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).$$

In a non-static universe (with a scale factor $a(t)$), the distance can be
written as:

$$ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].$$


File: gnuastro.info,  Node: Invoking astcosmiccal,  Prev: Extending distance concepts to 3D,  Up: CosmicCalculator

9.1.3 Invoking CosmicCalculator
-------------------------------

CosmicCalculator will calculate cosmological variables based on the
input parameters.  The executable name is ‘astcosmiccal’ with the
following general template

     $ astcosmiccal [OPTION...] ...

One line examples:

     ## Print basic cosmological properties at redshift 2.5:
     $ astcosmiccal -z2.5

     ## Only print comoving volume over 4pi steradian to z (Mpc^3):
     $ astcosmiccal --redshift=0.8 --volume

     ## Print redshift and age of universe when Lyman-alpha line is
     ## at 6000 angstrom (another way to specify redshift).
     $ astcosmiccal --obsline=Ly-alpha,6000 --age

     ## Print luminosity distance, angular diameter distance and age
     ## of universe in one row at redshift 0.4
     $ astcosmiccal -z0.4 -LAg

     ## Assume Lambda and matter density of 0.7 and 0.3 and print
     ## basic cosmological parameters for redshift 2.1:
     $ astcosmiccal -l0.7 -m0.3 -z2.1

     ## Print wavelength of all pre-defined spectral lines when
     ## Lyman-alpha is observed at 4000 Angstroms.
     $ astcosmiccal --obsline=Ly-alpha,4000 --listlinesatz

   The input parameters (current matter density, etc.)  can be given as
command-line options or in the configuration files, see *note
Configuration files::.  For a definition of the different parameters,
please see the sections prior to this.  If no redshift is given,
CosmicCalculator will just print its input parameters and abort.  For a
full list of the input options, please see *note CosmicCalculator input
options::.

   Without any particular output requested (and only a given redshift),
CosmicCalculator will print all basic cosmological calculations (one per
line) with some explanations before each.  This can be good when you
want a general feeling of the conditions at a specific redshift.
Alternatively, if any specific calculation(s) are requested (it is
possible to call more than one), only the requested value(s) will be
calculated and printed with a single space character between them.  In
this case, no description or units will be printed.  See *note
CosmicCalculator basic cosmology calculations:: for the full list of
these options, along with some explanation of when/how they can be
useful.

   Another common operation in observational cosmology is dealing with
spectral lines at different redshifts.  CosmicCalculator also has
features to help in such situations, please see *note CosmicCalculator
spectral line calculations::.

* Menu:

* CosmicCalculator input options::  Options to specify input conditions.
* CosmicCalculator basic cosmology calculations::  Such as distance modulus and distances.
* CosmicCalculator spectral line calculations::  How they get affected by redshift.


File: gnuastro.info,  Node: CosmicCalculator input options,  Next: CosmicCalculator basic cosmology calculations,  Prev: Invoking astcosmiccal,  Up: Invoking astcosmiccal

9.1.3.1 CosmicCalculator input options
......................................

The inputs to CosmicCalculator can be specified with the following
options:

‘-z FLT’
‘--redshift=FLT’
     The redshift of interest.  There are two other ways that you can
     specify the target redshift: 1) Spectral lines and their observed
     wavelengths, see ‘--obsline’.  2) Velocity, see ‘--velocity’.
     Hence this option cannot be called with ‘--obsline’ or
     ‘--velocity’.

‘-y FLT’
‘--velocity=FLT’
     Input velocity in km/s.  The given value will be converted to
     redshift internally, and used in any subsequent calculation.  This
     option is thus an alternative to ‘--redshift’ or ‘--obsline’; it
     cannot be used with them.  The conversion will be done with the
     more general and accurate relativistic equation of
     $1+z=\sqrt{(c+v)/(c-v)}$, not the simplified $z\approx v/c$.
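
     For example, a call like the one below (with a hypothetical
     velocity of 30000 km/s) should print the redshift that is used
     internally; see ‘--usedredshift’ in *note CosmicCalculator basic
     cosmology calculations::.

          $ astcosmiccal --velocity=30000 --usedredshift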

‘-H FLT’
‘--H0=FLT’
     Current expansion rate (in km sec$^{-1}$ Mpc$^{-1}$).

‘-l FLT’
‘--olambda=FLT’
     Cosmological constant density divided by the critical density in
     the current Universe ($\Omega_{\Lambda,0}$).

‘-m FLT’
‘--omatter=FLT’
     Matter (including massive neutrinos) density divided by the
     critical density in the current Universe ($\Omega_{m,0}$).

‘-r FLT’
‘--oradiation=FLT’
     Radiation density divided by the critical density in the current
     Universe ($\Omega_{r,0}$).

‘-O STR/FLT,FLT’
‘--obsline=STR/FLT,FLT’
     Find the redshift to use in next steps based on the rest-frame and
     observed wavelengths of a line.  This option is thus an alternative
     to ‘--redshift’ or ‘--velocity’; it cannot be used with them.

     The first argument identifies the line.  It can be one of the
     standard names, or any rest-frame wavelength in Angstroms.  The
     second argument is the observed wavelength of that line.  For
     example, ‘--obsline=Ly-alpha,6000’ is the same as
     ‘--obsline=1215.64,6000’.  Wavelengths are assumed to be in
     Angstroms by default (other units can be selected with
     ‘--lineunit’, see *note CosmicCalculator spectral line
     calculations::).

     The list of pre-defined names for the lines in Gnuastro's database
     is available by running

          $ astcosmiccal --listlines


File: gnuastro.info,  Node: CosmicCalculator basic cosmology calculations,  Next: CosmicCalculator spectral line calculations,  Prev: CosmicCalculator input options,  Up: Invoking astcosmiccal

9.1.3.2 CosmicCalculator basic cosmology calculations
.....................................................

By default, when no specific calculations are requested,
CosmicCalculator will print a complete set of all its calculations (one
line for each calculation, see *note Invoking astcosmiccal::).  The full
list of calculations can be useful when you do not want any specific
value, but just a general view.  In other contexts (for example, in a
batch script or during a discussion), you know exactly what you want and
do not want to be distracted by all the extra information.

   You can use any number of the options described below in any order.
When any of these options are requested, CosmicCalculator's output will
just be a single line with a single space between the (possibly)
multiple values.  In the example below, only the tangential distance
along one arc-second (in kpc), absolute magnitude conversion, and age of
the universe at redshift 2 are printed (recall that you can merge short
options together, see *note Options::).

     $ astcosmiccal -z2 -sag
     8.585046 44.819248 3.289979

   Here is one example of using this feature in a script: add the
following two lines to keep/use the comoving volume with a varying
redshift:

     z=3.12
     vol=$(astcosmiccal --redshift=$z --volume)

In a script, this operation might be necessary for a large number of
objects (for example, the many galaxies in a catalog).  So the fact
that all the other default calculations are ignored will also help you
get to your result faster.
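
   For example, a loop like the one below (a minimal sketch, with
hypothetical redshift values) would print one comoving volume per line:

     for z in 0.5 1.0 1.5 2.0; do
         astcosmiccal --redshift=$z --volume
     done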

   If you are indeed dealing with many (for example, thousands of)
redshifts, using CosmicCalculator is not the best/fastest solution,
because it has to go through all the configuration files and
preparations for each invocation.  To get the best efficiency (least
overhead), we recommend using Gnuastro's cosmology library (see *note
Cosmology library::).  CosmicCalculator also calls the library functions
defined there for its calculations, so you get the same result with no
overhead.  Gnuastro also has libraries for easily reading tables into a
C program, see *note Table input output::.  Afterwards, you can easily
build and run your C program for the particular processing with *note
BuildProgram::.

   If you just want to inspect the value of a variable visually, the
description (which comes with units) might be more useful.  In such
cases, the following command might be better.  The other calculations
will also be done, but they are so fast that you will not notice on
modern computers (the time it takes your eye to focus on the result is
usually longer than the processing: a fraction of a second).

     $ astcosmiccal --redshift=0.832 | grep volume

   The full list of CosmicCalculator's specific calculations is present
below in two groups: basic cosmology calculations and those related to
spectral lines.  In case you have forgotten the units, you can use the
‘--help’ option, which lists the units along with a short description.

‘-e’
‘--usedredshift’
     The redshift that was used in this run.  In many cases this is the
     main input parameter to CosmicCalculator, but it is useful in
     others.  For example, in combination with ‘--obsline’ (where you
     give an observed and rest-frame wavelength and would like to know
     the redshift) or with ‘--velocity’ (where you specify the velocity
     instead of redshift).  Another example is when you run
     CosmicCalculator in a loop, while changing the redshift and you
     want to keep the redshift value with the resulting calculation.

‘-Y’
‘--usedvelocity’
     The velocity (in km/s) that was used in this run.  The conversion
     from redshift will be done with the more general and accurate
     relativistic equation of $1+z=\sqrt{(c+v)/(c-v)}$, not the
     simplified $z\approx v/c$.

‘-G’
‘--agenow’
     The current age of the universe (given the input parameters) in Ga
     (Giga annum, or billion years).

‘-C’
‘--criticaldensitynow’
     The current critical density (given the input parameters) in grams
     per centimeter-cube ($g/cm^3$).

‘-d’
‘--properdistance’
     The proper distance (at current time) to object at the given
     redshift in Megaparsecs (Mpc).  See *note Distance on a 2D curved
     space:: for a description of the proper distance.

‘-A’
‘--angulardiamdist’
     The angular diameter distance to object at given redshift in
     Megaparsecs (Mpc).

‘-s’
‘--arcsectandist’
     The tangential distance covered by 1 arc-second at the given
     redshift in kiloparsecs (kpc).  This can be useful when trying to
     estimate the resolution or pixel scale of an instrument (usually in
     units of arc-seconds) at a given redshift.

‘-L’
‘--luminositydist’
     The luminosity distance to object at given redshift in Megaparsecs
     (Mpc).

‘-u’
‘--distancemodulus’
     The distance modulus at given redshift.

‘-a’
‘--absmagconv’
     The conversion factor (addition) to absolute magnitude.  Note that
     this is practically the distance modulus added to
     $-2.5\log{(1+z)}$ for the desired redshift based on the input
     parameters.  Once the apparent magnitude and redshift of an object
     are known, this value may be added to the apparent magnitude to
     give the object's absolute magnitude.

‘-g’
‘--age’
     Age of the universe at given redshift in Ga (Giga annum, or billion
     years).

‘-b’
‘--lookbacktime’
     The look-back time to given redshift in Ga (Giga annum, or billion
     years).  The look-back time at a given redshift is defined as the
     current age of the universe (‘--agenow’) subtracted by the age of
     the universe at the given redshift.

‘-c’
‘--criticaldensity’
     The critical density at given redshift in grams per centimeter-cube
     ($g/cm^3$).

‘-v’
‘--onlyvolume’
     The comoving volume in Megaparsecs cube (Mpc$^3$) until the desired
     redshift based on the input parameters.


File: gnuastro.info,  Node: CosmicCalculator spectral line calculations,  Prev: CosmicCalculator basic cosmology calculations,  Up: Invoking astcosmiccal

9.1.3.3 CosmicCalculator spectral line calculations
...................................................

At different redshifts, observed spectral lines are shifted compared to
their rest frame wavelengths with this simple relation:
$\lambda_{obs}=\lambda_{rest}(1+z)$.  Although this relation is very
simple and can be done for one line in your head (or with a simple
calculator!), it slowly becomes tiring when dealing with a lot of lines
or redshifts, or when some precision is necessary.  The options in this
section are thus provided to greatly simplify usage of this simple
equation, and also to help by storing a list of pre-defined spectral
line wavelengths.
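
   As a simple check of this relation, for a hypothetical source at
$z=2$, the Lyman-alpha line (with a rest-frame wavelength of 1215.64
Angstroms, see ‘--obsline’ in *note CosmicCalculator input options::)
will be observed at $\lambda_{obs}=1215.64\times(1+2)=3646.92$
Angstroms.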

   For example, if you want to know the wavelength of the $H\alpha$ line
(at 6562.8 Angstroms in rest frame), when $Ly\alpha$ is at 8000
Angstroms, you can call CosmicCalculator like the first example below.
And if you want the wavelength of all pre-defined spectral lines at this
redshift, you can use the second command.

     $ astcosmiccal --obsline=lyalpha,8000 --lineatz=halpha
     $ astcosmiccal --obsline=lyalpha,8000 --listlinesatz

   Below you can see the printed/output calculations of
CosmicCalculator that are related to spectral lines.  Note that
‘--obsline’ is an input parameter, so it is discussed (with the full
list of known lines) in *note CosmicCalculator input options::.

‘--listlines’
     List the pre-defined rest frame spectral line wavelengths and their
     names on standard output, then abort CosmicCalculator.  The units
     of the displayed wavelengths for each line can be determined with
     ‘--lineunit’ (see below).

     When this option is given, other operations on the command-line
     will be ignored.  This is convenient when you forget the specific
     name of the spectral line used within Gnuastro, or when you forget
     the exact wavelength of a certain line.

     These names can be used with the options that deal with spectral
     lines, for example, ‘--obsline’ and ‘--lineatz’ (*note
     CosmicCalculator basic cosmology calculations::).

     The format of the output list is a two-column table, with
     Gnuastro's text table format (see *note Gnuastro text table
     format::).  Therefore, if you are only looking for lines in a
     specific range, you can pipe the output into Gnuastro's table
     program and use its ‘--range’ option on the ‘wavelength’ (first)
     column.  For example, if you only want to see the lines between
     4000 and 6000 Angstroms, you can run this command:

          $ astcosmiccal --listlines \
                         | asttable --range=wavelength,4000,6000

     And if you want to use the list later and have it as a table in a
     file, you can easily add the ‘--output’ (or ‘-o’) option to the
     ‘asttable’ command, and specify the filename, for example,
     ‘--output=lines.fits’ or ‘--output=lines.txt’.
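
     For example, putting the two commands above together (a minimal
     sketch; the wavelength range is arbitrary), the filtered list can
     be written into a FITS table with:

          $ astcosmiccal --listlines \
                         | asttable --range=wavelength,4000,6000 \
                                    --output=lines.fits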

‘--listlinesatz’
     Similar to ‘--listlines’ (above), but the printed wavelength is not
     in the rest frame, but redshifted to the given redshift.  Recall
     that the redshift can be specified by ‘--redshift’ directly or by
     ‘--obsline’, see *note CosmicCalculator input options::.  For an
     example usage of this option, see *note Viewing spectra and
     redshifted lines::.

‘-i STR/FLT’
‘--lineatz=STR/FLT’
     The wavelength of the specified line at the redshift given to
     CosmicCalculator.  The line can be specified either by its name or
     directly as a number (its wavelength).  The units of the displayed
     wavelengths for each line can be determined with ‘--lineunit’ (see
     below).

     To get the list of pre-defined names for the lines and their
     wavelength, you can use the ‘--listlines’ option, see *note
     CosmicCalculator input options::.  In the former case (when a name
     is given), the returned number is in units of Angstroms.  In the
     latter (when a number is given), the returned value is in the same
     units as the input number (assuming it is a wavelength).

‘--lineunit=STR’
     The units to display line wavelengths above.  It can take the
     following four values.  If you need any other unit, please contact
     us at ‘bug-gnuastro@gnu.org’.

     ‘m’
          Meter.
     ‘micro-m’
          Micrometer or $10^{-6}m$.
     ‘nano-m’
          Nanometer, or $10^{-9}m$.
     ‘angstrom’
          Angstrom or $10^{-10}m$; the default unit when this option is
          not called.
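
     For example, to print the pre-defined line list in nanometers
     instead of the default Angstroms, this option can be combined with
     ‘--listlines’ (a minimal illustration of the two options described
     in this section):

          $ astcosmiccal --listlines --lineunit=nano-m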


File: gnuastro.info,  Node: Installed scripts,  Next: Makefile extensions,  Prev: High-level calculations,  Up: Top

10 Installed scripts
********************

Gnuastro's programs (introduced in previous chapters) are designed to be
highly modular and thus contain lower-level operations on the data.
However, certain higher-level operations are also common to many
contexts: for example, a sequence of calls to multiple Gnuastro
programs, or a special way of running a program and treating the output.
To facilitate such higher-level data analysis, Gnuastro also installs
some scripts on your system with the ‘astscript-’ prefix (in contrast
to the other programs that only have the ‘ast’ prefix).

   Like all of Gnuastro's source code, these scripts are also heavily
commented.  They are written in portable shell script (the language of
command-line environments), which does not need compilation.  Therefore,
if you open the installed scripts in a text editor, you can actually
read them(1).
For example, with this command (just replace ‘nano’ with your favorite
text editor, like ‘emacs’ or ‘vim’):

     $ nano $(which astscript-NAME)

   Shell scripting is the same language that you use when typing on the
command-line.  Therefore, shell scripting is much more widely known and
used compared to C (the language of the other Gnuastro programs).
Because Gnuastro's installed scripts do higher-level operations,
customizing these scripts for a special project will be more common
than customizing the programs.

   These scripts also accept options and are in many ways similar to the
programs (see *note Common options::) with some minor differences:

   • Currently they do not accept configuration files themselves.
     However, the configuration files of the Gnuastro programs they call
     are indeed parsed and used by those programs.

     As a result, they do not have the following options:
     ‘--checkconfig’, ‘--config’, ‘--lastconfig’, ‘--onlyversion’,
     ‘--printparams’, ‘--setdirconf’ and ‘--setusrconf’.

   • They do not directly allocate any memory, so there is no
     ‘--minmapsize’.

   • They do not have an independent ‘--usage’ option: when called with
     ‘--usage’, they just recommend running ‘--help’.

   • The output of ‘--help’ is not configurable like the programs (see
     *note --help::).

   • The scripts will commonly use your installed shell and other basic
     command-line tools (for example, AWK or SED). Different systems
     have different versions and implementations of these basic tools
     (for example, GNU/Linux systems use GNU Bash, GNU AWK and GNU SED
     which are far more advanced and up to date than the minimalist AWK
     and SED of most other systems).  Therefore, unexpected errors in
     these tools might come up when you run these scripts on
     non-GNU/Linux operating systems.  If you do confront such strange
     errors, please submit a bug report so we fix it as soon as possible
     (see *note Report a bug::).

* Menu:

* Sort FITS files by night::    Sort many files by date.
* Generate radial profile::     Radial profile of an object in an image.
* SAO DS9 region files from table::  Create ds9 region file from a table.
* Viewing FITS file contents with DS9 or TOPCAT::  Open DS9 (images/cubes) or TOPCAT (tables).
* Zero point estimation::       Zero point of an image from reference catalog or image(s).
* Pointing pattern simulation::  Simulate a stack with a given series of pointings.
* Color images with gray faint regions::  Color for bright pixels and grayscale for faint.
* PSF construction and subtraction::  Set of scripts to create extended PSF of an image.

   ---------- Footnotes ----------

   (1) Gnuastro's installed programs (those only starting with ‘ast’)
are not human-readable.  They are written in C and need to be compiled
before execution.  Compilation optimizes the steps into the low-level
hardware CPU instructions/language to improve efficiency.  Because
compiled programs do not need an interpreter like Bash on every run,
they are much faster and more independent than scripts.  To read the
source code of the programs, look into the ‘bin/progname’ directory of
Gnuastro's source (*note Downloading the source::).  If you would like
to read more about why C was chosen for the programs, please see *note
Why C::.


File: gnuastro.info,  Node: Sort FITS files by night,  Next: Generate radial profile,  Prev: Installed scripts,  Up: Installed scripts

10.1 Sort FITS files by night
=============================

FITS images usually contain (several) keywords for preserving important
dates.  In particular, for lower-level data, this is usually the
observation date and time (for example, stored in the ‘DATE-OBS’ keyword
value).  When analyzing observed datasets, many calibration steps (like
the dark, bias or flat-field), are commonly calculated on a
per-observing-night basis.

   However, the FITS standard's date format (‘YYYY-MM-DDThh:mm:ss.ddd’)
is based on the western (Gregorian) calendar.  Dates that are stored in
this format are complicated for automatic processing: a night starts in
the final hours of one calendar day, and extends to the early hours of
the next calendar day.  As a result, to identify datasets from one
night, we commonly need to search for two dates.  However, calendar
peculiarities can make this identification very difficult.  For example,
when an observation is done on the night separating two months (like the
night starting on March 31st and going into April 1st), or two years
(like the night starting on December 31st 2018 and going into January
1st, 2019).  To account for such situations, it is necessary to keep
track of how many days are in a month, and leap years, etc.

   Gnuastro's ‘astscript-sort-by-night’ script is created to help in
such important scenarios.  It uses *note Fits:: to convert the FITS date
format into the Unix epoch time (number of seconds since 00:00:00 of
January 1st, 1970), using the ‘--datetosec’ option.  The Unix epoch time
is a single number (integer, if not given in sub-second precision),
enabling easy comparison and sorting of dates after January 1st, 1970.

   You can use this script as a basis for making a much more highly
customized sorting script.  Here are some examples:

   • If you need to copy the files, but only need a single extension
     (not the whole file), you can add a step just before the making of
     the symbolic links, or copies, and change it to only copy a certain
     extension of the FITS file using the Fits program's ‘--copy’
     option, see *note HDU information and manipulation::.

   • If you need to classify the files with finer detail (for example,
     the purpose of the dataset), you can add a step just before the
     making of the symbolic links, or copies, to specify a file-name
     prefix based on other certain keyword values in the files.  For
     example, when the FITS files have a keyword to specify if the
     dataset is a science, bias, or flat-field image.  You can read it
     and add a ‘sci-’, ‘bias-’, or ‘flat-’ to the created file name
     (after the ‘--prefix’) automatically.

     For example, let's assume the observing mode is stored in the
     hypothetical ‘MODE’ keyword, which can have three values of
     ‘BIAS-IMAGE’, ‘SCIENCE-IMAGE’ and ‘FLAT-EXP’.  With the step below,
     you can generate a mode-prefix, and add it to the generated
     link/copy names (just correct the filename and extension of the
     first line to the script's variables):

          modepref=$(astfits infile.fits -h1 \
                             | sed -e"s/'/ /g" \
                             | awk '$1=="MODE"{ \
                                 if($3=="BIAS-IMAGE") print "bias-"; \
                                 else if($3=="SCIENCE-IMAGE") print "sci-"; \
                                 else if($3=="FLAT-EXP") print "flat-"; \
                                 else {print $3, "NOT recognized"; exit 1}}')

     Here is a description of it.  We first use ‘astfits’ to print all
     the keywords in extension ‘1’ of ‘infile.fits’.  In the FITS
     standard, string values (that we are assuming here) are placed in
     single quotes (<'>) which are annoying in this context/use-case.
     Therefore, we pipe the output of ‘astfits’ into ‘sed’ to remove all
     such quotes (substituting them with a blank space).  The result is
     then piped to AWK for giving us the final mode-prefix: with
     ‘$1=="MODE"’, we ask AWK to only consider the line where the first
     column is ‘MODE’.  There is an equal sign between the key name and
     value, so the value is the third column (‘$3’ in AWK). We thus use
     a simple ‘if-else’ structure to look into this value and print our
     custom prefix based on it.  The output of AWK is then stored in the
     ‘modepref’ shell variable which you can add to the link/copy name.

     With the solution above, the increment of the file counter for each
     night will be independent of the mode.  If you want the counter to
     be mode-dependent, you can add a different counter for each mode
     and use that counter instead of the generic counter for each night
     (based on the value of ‘modepref’).  But we will leave the
     implementation of this step to you as an exercise.

* Menu:

* Invoking astscript-sort-by-night::  Inputs and outputs to this script.


File: gnuastro.info,  Node: Invoking astscript-sort-by-night,  Prev: Sort FITS files by night,  Up: Sort FITS files by night

10.1.1 Invoking astscript-sort-by-night
---------------------------------------

This installed script will read a FITS date formatted value from the
given keyword, and classify the input FITS files into individual nights.
For more on installed scripts, see *note Installed
scripts::.  This script can be used with the following general
template:

     $ astscript-sort-by-night [OPTION...] FITS-files

One line examples:

     ## Use the DATE-OBS keyword
     $ astscript-sort-by-night --key=DATE-OBS /path/to/data/*.fits

     ## Make links to the input files with the `img-' prefix
     $ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits

   This script will look into an HDU/extension (‘--hdu’) for a keyword
(‘--key’) in the given FITS files and interpret the value as a date.
The inputs will be separated by "night"s (11:00 a.m. to the next day's
10:59:59 a.m., spanning two calendar days; the exact hour can be set
with ‘--hour’).

   The default output is a list of all the input files along with the
following two columns: night number and file number in that night
(sorted by time).  With ‘--link’ a symbolic link will be made (one for
each input) that contains the night number, and the file number in that
night (sorted by time); see the description of ‘--link’ for more.  When
‘--copy’ is used, a copy of the inputs will be made instead of a
symbolic link.

   Below you can see one example where all the ‘target-*.fits’ files in
the ‘data’ directory should be separated by observing night according to
the ‘DATE-OBS’ keyword value in their second extension (number ‘1’,
recall that HDU counting starts from 0).  You can see the output after
the ‘ls’ command.

     $ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
     $ ls
     img-n1-1.fits img-n1-2.fits img-n2-1.fits ...

   The outputs can be placed in a different (already existing) directory
by including that directory's name in the ‘--prefix’ value, for example,
‘--prefix=sorted/img-’ will put them all under the ‘sorted’ directory.

   This script can be configured like all Gnuastro's programs (through
command-line options, see *note Common options::), with some minor
differences that are described in *note Installed scripts::.  The
particular options to this script are listed below:

‘-h STR’
‘--hdu=STR’
     The HDU/extension to use in all the given FITS files.  All of the
     given FITS files must have this extension.

‘-k STR’
‘--key=STR’
     The keyword name that contains the FITS date format to
     classify/sort by.

‘-H FLT’
‘--hour=FLT’
     The hour that defines the next "night".  By default, all times
     before 11:00 a.m. are considered to belong to the previous calendar
     night.  If a sub-hour value is necessary, it should be given in
     units of hours, for example, ‘--hour=9.5’ corresponds to 9:30 a.m.

     *Dealing with time zones:* The time that is recorded in ‘--key’ may
     be in UTC (Coordinated Universal Time).  However, the organization
     of the images taken during the night depends on the local time.  It
     is possible to take this into account by setting the ‘--hour’
     option to the local time in UTC.

     For example, consider a set of images taken in Auckland (New
     Zealand, UTC+12) during different nights.  If you want to classify
     these images by night, you have to know at which time (in UTC time)
     the Sun rises (or any other separator/definition of a different
     night).  For example, if your observing night finishes before
     9:00 a.m. in Auckland, you can use ‘--hour=21’, because in Auckland
     the local time of 9:00 corresponds to 21:00 UTC.

‘-l’
‘--link’
     Create a symbolic link for each input FITS file.  This option
     cannot be used with ‘--copy’.  The link will have a standard name
     in the following format (variable parts are written in ‘CAPITAL’
     letters and described after it):

          PnN-I.fits

     ‘P’
          This is the value given to ‘--prefix’.  By default, its value
          is ‘./’ (to store the links in the directory this script was
          run in).  See the description of ‘--prefix’ for more.
     ‘N’
          This is the night-counter: starting from 1.  ‘N’ is just
          incremented by 1 for the next night, no matter how many nights
          (without any dataset) there are between two subsequent
          observing nights (it is just an identifier for each night which
          you can easily map to different calendar nights).
     ‘I’
          File counter in that night, sorted by time.

‘-c’
‘--copy’
     Make a copy of each input FITS file with the standard naming
     convention described in ‘--link’.  With this option, instead of
     making a link, a copy is made.  This option cannot be used with
     ‘--link’.

‘-p STR’
‘--prefix=STR’
     Prefix to prepend to the night-identifier of each newly created
     link or copy.  This option is thus only relevant with the ‘--copy’
     or ‘--link’ options.  See the description of ‘--link’ for how it is
     used.  For example, with ‘--prefix=img-’, all the created file
     names in the current directory will start with ‘img-’, making
     outputs like ‘img-n1-1.fits’ or ‘img-n3-42.fits’.

     ‘--prefix’ can also be used to store the links/copies in another
     directory relative to the directory this script is being run (it
     must already exist).  For example,
     ‘--prefix=/path/to/processing/img-’ will put all the links/copies
     in the ‘/path/to/processing’ directory, and the files (in that
     directory) will all start with ‘img-’.

‘--stdintimeout=INT’
     Number of micro-seconds to wait for standard input within this
     script.  This does not correspond to general inputs into the
     script; inputs to the script should always be given as a file.
     However, within the script, pipes are often used to pass the output
     of one program to another.  The value given to this option will be
     passed to those internal pipes.  When running this script, if you
     confront an error, saying "No input!", you should be able to fix it
     by giving a larger number to this option (the default value is
     10000000 micro-seconds or 10 seconds).


File: gnuastro.info,  Node: Generate radial profile,  Next: SAO DS9 region files from table,  Prev: Sort FITS files by night,  Up: Installed scripts

10.2 Generate radial profile
============================

The one-dimensional radial profile of an object is an important
measurement in many aspects of astronomical image processing.  For
example, you may want to study how the light of a galaxy is distributed
as a function of the radial distance from the center.  In other cases,
the radial profile of a star can show the PSF (see *note PSF::).
Gnuastro's
‘astscript-radial-profile’ script is created to obtain such radial
profiles for one object within an image.  This script uses *note
MakeProfiles:: to generate elliptical apertures with the values equal to
the distance from the center of the object and *note MakeCatalog:: for
measuring the values over the apertures.

* Menu:

* Invoking astscript-radial-profile::  How to call astscript-radial-profile


File: gnuastro.info,  Node: Invoking astscript-radial-profile,  Prev: Generate radial profile,  Up: Generate radial profile

10.2.1 Invoking astscript-radial-profile
----------------------------------------

This installed script will measure the radial profile of an object
within an image.  A general overview of this script has been published
in Infante-Sainz et al. 2024 (https://arxiv.org/abs/2401.05303);
please cite it if this script proves useful in your research.  For more
on installed scripts, see *note Installed scripts::.  This
script can be used with the following general template:

     $ astscript-radial-profile [OPTION...] FITS-file

Examples:

     ## Generate the radial profile with default options (assuming the
     ## object is in the center of the image, and using the mean).
     $ astscript-radial-profile image.fits

     ## Generate the radial profile centered at x=44 and y=37 (in pixels),
     ## up to  a radial distance of 19 pixels, use the mean value.
     $ astscript-radial-profile image.fits --center=44,37 --rmax=19

     ## Generate the radial profile centered at x=44 and y=37 (in pixels),
     ## up to a radial distance of 100 pixels, compute sigma clipped
     ## mean and standard deviation (sigclip-mean and sigclip-std) using
     ## 5 sigma and 0.1 tolerance (default is 3 sigma and 0.2 tolerance).
     $ astscript-radial-profile image.fits --center=44,37 --rmax=100 \
                                --sigmaclip=5,0.1 \
                                --measure=sigclip-mean,sigclip-std

     ## Generate the radial profile centered at RA=20.53751695,
     ## DEC=0.9454292263, up to a radial distance of 88 pixels,
     ## axis ratio equal to 0.32, and position angle of 148 deg.
     ## Name the output table as `radial-profile.fits'
     $ astscript-radial-profile image.fits --mode=wcs \
                                --center=20.53751695,0.9454292263 \
                                --rmax=88 --axis-ratio=0.32 \
                                --position-angle=148 -oradial-profile.fits

     ## Generate the radial profile centered at RA=40.062675270971,
     ## DEC=-8.1511992735126, up to a radial distance of 20 pixels,
     ## and calculate the SNR using the INPUT-NO-SKY and SKY-STD
     ## extensions of the NoiseChisel output file.
     $ astscript-radial-profile image_detected.fits -hINPUT-NO-SKY \
                                --mode=wcs --measure=sn \
                                --center=40.062675270971,-8.1511992735126 \
                                --rmax=20 --stdhdu=SKY_STD

     ## Generate the radial profile centered at RA=40.062675270971,
     ## DEC=-8.1511992735126, up to a radial distance of 20 pixels,
     ## and compute the SNR with a fixed value for std, std=10.
     $ astscript-radial-profile image.fits -h1 --mode=wcs --rmax=20 \
                                --center=40.062675270971,-8.1511992735126 \
                                --measure=sn --instd=10

     ## Generate the radial profile centered at X=1201, Y=1201 pixels, up
     ## to a radial distance of 20 pixels and compute the median and the
     ## SNR using the first extension of sky-std.fits as the dataset for std
     ## values.
     $ astscript-radial-profile image.fits -h1 --mode=img --rmax=20 \
                                --center=1201,1201 --measure=median,sn \
                                --instd=sky-std.fits

   This installed script will read a FITS image and will use it as the
basis for constructing the radial profile.  The output radial profile is
a table (FITS or plain-text) containing the radial distance from the
center in the first column and the specified measurements in the other
columns (mean, median, sigclip-mean, sigclip-median, etc.).

   To measure the radial profile, this script needs to generate
temporary files.  All these temporary files will be created within the
directory given to the ‘--tmpdir’ option.  When ‘--tmpdir’ is not
called, a temporary directory (with a name based on the inputs) will be
created in the running directory.  If the directory does not exist at
run-time, this script will create it.  After the output is created, this
script will delete the directory by default, unless you call the
‘--keeptmp’ option.

   With the default options, the script will generate a circular radial
profile using the mean value and centered at the center of the image.
In order to have more flexibility, several options are available to
configure the desired radial profile.  In this sense, you can change
the center position, the maximum radius, the axis ratio and the position
angle (elliptical apertures are considered), the operator for obtaining
the profiles, and others (described below).

*Debug your profile:* to debug your results, especially close to the
center of your object, you can see the radial distance associated to
every pixel in your input.  To do this, use ‘--keeptmp’ to keep the
temporary files, and compare ‘crop.fits’ (crop of your input image
centered on your desired coordinate) with ‘apertures.fits’ (radial
distance of each pixel).

*Finding properties of your elliptical target:* you may want to measure
the radial profile of a galaxy, but not know its exact location, position
angle or axis ratio.  To obtain these values, you can use *note
NoiseChisel:: to detect signal in the image, feed it to *note Segment::
to do basic segmentation, then use *note MakeCatalog:: to measure the
center (‘--x’ and ‘--y’ in MakeCatalog), axis ratio (‘--axis-ratio’) and
position angle (‘--position-angle’).
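For example, a rough sketch of these steps is shown below (the file
names are hypothetical and, depending on your data, you may need to
adjust the HDUs or other options of each program):

     $ astnoisechisel image.fits --output=detections.fits
     $ astsegment detections.fits --output=segments.fits
     $ astmkcatalog segments.fits --ids --x --y --axis-ratio \
                    --position-angle --output=target-info.fits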

*Masking other sources:* The image of an astronomical object will
usually contain many other sources along with your main target.  A crude
solution
is to use sigma-clipped measurements for the profile.  However,
sigma-clipped measurements can easily be biased when the number of
sources at each radial distance increases at larger distances.
Therefore a robust solution is to mask all other detections within the
image.  You can use *note NoiseChisel:: and *note Segment:: to detect
and segment the sources, then set all pixels that do not belong to your
target to blank using *note Arithmetic:: (in particular, its ‘where’
operator).

‘-h STR’
‘--hdu=STR’
     The HDU/extension of the input image to use.

‘-o STR’
‘--output=STR’
     Filename of measured radial profile.  It can be either a FITS
     table, or plain-text table (determined from your given file name
     suffix).

‘-c FLT[,FLT[,...]]’
‘--center=FLT[,FLT[,...]]’
     The central position of the radial profile.  This option is used
     for placing the center of the profiles.  This parameter is used in
     *note Crop:: to center and crop the region.  The positions along
     each dimension must be separated by a comma (<,>) and fractions are
     also acceptable.  The number of values given to this option must be
     the same as the dimensions of the input dataset.  The units of the
     coordinates are read based on the value to the ‘--mode’ option, see
     below.

‘-O STR’
‘--mode=STR’
     Interpret the center position of the object (values given to
     ‘--center’) in image or WCS coordinates.  This option thus accepts
     only two values: ‘img’ or ‘wcs’.  By default, it is ‘--mode=img’.

‘-R FLT’
‘--rmax=FLT’
     Maximum radius for the radial profile (in pixels).  By default, the
     radial profile will be computed up to a radial distance equal to
     the maximum radius that fits into the image (assuming circular
     shape).

‘-P INT’
‘--precision=INT’
     The precision (number of digits after the decimal point) in
     resolving the radius.  The default value is ‘--precision=0’ (or
     ‘-P0’), and the value cannot be larger than ‘6’.  A higher
     precision is primarily useful when the very central few pixels are
     important for you.  A larger precision will over-resolve larger
     radial regions, causing scatter to significantly affect the
     measurements.

     For example, in the command below, we will generate the radial
     profile of an imaginary source (at RA,DEC of 1.23,4.567) and check
     the output without setting a precision:

          $ astscript-radial-profile image.fits --center=1.23,4.567 \
                      --mode=wcs --measure=mean,area --rmax=10 \
                      --output=radial.fits --quiet
          $ asttable radial.fits --head=10 -ffixed -p4
          0.0000        0.0139        1
          1.0000        0.0048        8
          2.0000        0.0023        16
          3.0000        0.0015        20
          4.0000        0.0011        24
          5.0000        0.0008        40
          6.0000        0.0006        36
          7.0000        0.0005        48
          8.0000        0.0004        56
          9.0000        0.0003        56

     Let's repeat the command above, but use a precision of 3 to resolve
     finer details of the radial profile, while only printing the
     top 10 rows of the profile:

          $ astscript-radial-profile image.fits --center=1.23,4.567 \
                      --mode=wcs --measure=mean,area --rmax=10 \
                      --precision=3 --output=radial.fits --quiet
          $ asttable radial.fits --head=10 -ffixed -p4
          0.0000        0.0139        1
          1.0000        0.0056        4
          1.4140        0.0040        4
          2.0000        0.0027        4
          2.2360        0.0024        8
          2.8280        0.0018        4
          3.0000        0.0017        4
          3.1620        0.0016        8
          3.6050        0.0013        8
          4.0000        0.0011        4

     Do you see how many more radii have been added?  Between 1.0 and
     2.0, we now have one extra radius, between 2.0 to 3.0, we have two
     new radii and so on.  If you go to larger and larger radii, you
     will notice that they get resolved into many sub-components and the
     number of pixels used in each measurement will not be significant
     (you can already see that in the comparison above).  This has two
     problems: 1.  statistically, the scatter in larger radii (where the
     signal-to-noise ratio is usually low) will make it hard to interpret
     the profile.  2.  technically, the output table will have many more
     rows!

     *Use higher precision only for small radii:* If you want to look at
     the whole profile (or the outer parts!), don't set the precision,
     the default mode is usually more than enough!  But when you are
     targeting the very central few pixels (usually less than a pixel
     radius of 5), use a higher precision.

‘-v INT’
‘--oversample=INT’
     Oversample the input dataset by the factor given to this option.
     Therefore if you set ‘--rmax=20’ for example, and ‘--oversample=5’,
     your output will have 100 rows (without ‘--oversample’ it will only
     have 20 rows).  Unless the object is heavily undersampled (the
     pixels are larger than the actual object), this method provides a
     much more accurate result and there will be a sufficient number of
     pixels to get the profile accurately.

     Due to the discrete nature of pixels, if you use this option to
     oversample your profile, set ‘--precision=0’.  Otherwise, your
     profile will become step-like (with several radii having a single
     value).
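
     For example, the command below (with a hypothetical ‘image.fits’)
     oversamples the profile by a factor of 5, while keeping the default
     ‘--precision=0’ as suggested above:

          $ astscript-radial-profile image.fits --rmax=20 \
                      --oversample=5 --precision=0 --output=radial.fits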

‘-u INT’
‘--undersample=INT’
     Undersample the input dataset by the number given to this option.
     This option is for considering larger apertures than the original
     pixel size (aperture size is equal to 1 pixel).  For example, if a
     radial profile computed by default has 100 different radii
     (apertures of 1 pixel width), by considering ‘--undersample=2’ the
     radial profile will be computed over apertures of 2 pixels, so the
     final radial profile will have 50 different radii.  This option is
     useful for measuring over a larger number of pixels to improve each
     measurement.
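
     For example, the command below (with a hypothetical ‘image.fits’)
     measures the profile over apertures that are two pixels wide:

          $ astscript-radial-profile image.fits --rmax=100 \
                      --undersample=2 --output=radial.fits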

‘-Q FLT’
‘--axis-ratio=FLT’
     The axis ratio of the apertures (minor axis divided by the major
     axis in a 2D ellipse).  By default (when this option is not given),
     the radial profile will be circular (axis ratio of 1).  This
     parameter is used as the option ‘--qcol’ in the generation of the
     apertures with ‘astmkprof’.

‘-p FLT’
‘--position-angle=FLT’
     The position angle (in degrees) of the profiles relative to the
     first FITS axis (horizontal when viewed in SAO DS9).  By default,
     it is ‘--position-angle=0’, which means that the semi-major axis of
     the profiles will be parallel to the first FITS axis.

‘-a FLT,FLT’
‘--azimuth=FLT,FLT’
     Limit the profile to the given azimuthal angle range (two numbers
     given to this option, in degrees, from 0 to 360) from the major
     axis (defined by ‘--position-angle’).  The radial profile will
     therefore be created on a wedge-like shape, not the full
     circle/ellipse.  The pixel containing the center of the profile
     will always be included in the profile (because it contains all
     azimuthal angles!).

     If the first angle is _smaller_ than the second (for example,
     ‘--azimuth=10,80’), the region between, or _inside_, the two angles
     will be used.  Otherwise (for example, ‘--azimuth=80,10’), the
     region _outside_ the two angles will be used.  The latter case can
     be useful when you want to ignore part of the 2D shape (for
     example, due to a bright star that can be contaminating it).

     You can visually see the shape of the region used by running this
     script with ‘--keeptmp’ and viewing the ‘values.fits’ and
     ‘apertures.fits’ files of the temporary directory with a FITS image
     viewer like *note SAO DS9::.  You can use *note Viewing FITS file
     contents with DS9 or TOPCAT:: to open them together in one instance
     of DS9, with both frames matched and locked (for easy comparison in
     case you want to zoom-in or out).  For example, see the commands
     below (based on your target object, just change the image name,
     center, position angle, etc.):

          ## Generate the radial profile
          $ astscript-radial-profile image.fits --center=1.234,6.789 \
                      --mode=wcs --rmax=50 --position-angle=20 \
                      --axis-ratio=0.8 --azimuth=95,150 --keeptmp \
                      --tmpdir=radial-tmp

          ## Visually check the values and apertures used.
          $ astscript-fits-view radial-tmp/values.fits \
                                radial-tmp/apertures.fits

‘-m STR’
‘--measure=STR’
     The operator for measuring the values over each radial distance.
     The values given to this option will be directly passed to *note
     MakeCatalog::.  As a consequence, all MakeCatalog measurements like
     the magnitude, magnitude error, median, mean, signal-to-noise ratio
     (S/N), std, surface brightness, sigclip-mean, and sigclip-number
     can be used here.  For a full list of MakeCatalog's measurements,
     please run ‘astmkcatalog --help’ or see *note MakeCatalog
     measurements::.  Multiple values can be given to this option, each
     separated by a comma.  This option can also be called multiple
     times.

     *Masking background/foreground objects:* For crude rejection of
     outliers, you can use sigma-clipping using MakeCatalog measurements
     like ‘--sigclip-mean’ or ‘--sigclip-mean-sb’ (see *note MakeCatalog
     measurements::).  To properly mask the effect of
     background/foreground objects from your target object's radial
     profile, you can use the ‘astscript-psf-stamp’ script (see *note
     Invoking astscript-psf-stamp::), and feed it the output of *note
     Segment::.  This script will mask unwanted objects from the image
     that is later used to measure the radial profile.

     Some measurements by MakeCatalog require a per-pixel sky standard
     deviation (for example, magnitude error or S/N). Therefore when
     asking for such measurements, use the ‘--instd’ option (described
     below) to specify the per-pixel sky standard deviation over each
     pixel.  For other measurements like the magnitude or surface
     brightness, MakeCatalog will need a Zero point, which you can set
     with the ‘--zeropoint’ option.

     For example, by setting ‘--measure=mean,sigclip-mean
     --measure=median’, the mean, sigma-clipped mean and median values
     will be computed.  The output radial profile will have 4 columns in
     this order: radial distance, mean, sigma-clipped mean and median.  By
     default (when this option is not given), the mean of all pixels at
     each radial position will be computed.
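
     As a command, the example above would become (the file names are
     hypothetical):

          $ astscript-radial-profile image.fits --output=radial.fits \
                      --measure=mean,sigclip-mean --measure=median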

‘-s FLT,FLT’
‘--sigmaclip=FLT,FLT’
     Sigma clipping parameters: only relevant if sigma-clipping
     operators are requested by ‘--measure’.  For more on
     sigma-clipping, see *note Sigma clipping::.  If given, the value to
     this option is directly passed to the ‘--sigmaclip’ option of *note
     MakeCatalog::, see *note MakeCatalog inputs and basic settings::.
     By default (when this option is not given), the default values
     within MakeCatalog will be used.  To see the default value of this
     option in MakeCatalog, you can run this command:

          $ astmkcatalog -P | grep " sigmaclip "

‘-z FLT’
‘--zeropoint=FLT’
     The Zero point of the input dataset.  This is necessary when you
     request measurements like magnitude, or surface brightness.

‘-Z’
‘--zeroisnotblank’
     Account for zero-valued pixels in the profile.  By default, such
     pixels are not considered (when this script crops the necessary
     region of the image before generating the profile).  The long
     format of this option is identical to a similarly named option in
     Crop (see *note Invoking astcrop::).  When this option is called,
     it is passed directly to Crop; therefore, the zero-valued pixels are
     not considered as blank and are used in the profile creation.

‘-i FLT/STR’
‘--instd=FLT/STR’
     Sky standard deviation as a single number (FLT) or as the filename
     (STR) containing the image with the std value for each pixel (the
     HDU within the file should be given to the ‘--stdhdu’ option
     mentioned below).  This is only necessary when the requested
     measurement (value given to ‘--measure’) by MakeCatalog needs the
     Standard deviation (for example, the signal-to-noise ratio or
     magnitude error).  If your measurements do not require a standard
     deviation, it is best to ignore this option (because it will slow
     down the script).

‘-d INT/STR’
‘--stdhdu=INT/STR’
     HDU/extension of the sky standard deviation image specified with
     ‘--instd’.

‘-t STR’
‘--tmpdir=STR’
     Several intermediate files are necessary to obtain the radial
     profile.  All of these temporary files are saved into a temporary
     directory.  With this option, you can directly specify this
     directory.  By default (when this option is not called), it will be
     built in the running directory and given an input-based name.  If
     the directory does not exist at run-time, this script will create
     it.  Once the radial profile has been obtained, this directory is
     removed.  You can disable the deletion of the temporary directory
     with the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not delete the temporary directory (see description of
     ‘--tmpdir’ above).  This option is useful for debugging.  For
     example, to check that the profiles generated for obtaining the
     radial profile have the desired center, shape and orientation.

‘--cite’
     Give BibTeX and acknowledgment information for citing this script
     within your paper.  For more, see ‘Operating mode options’.


File: gnuastro.info,  Node: SAO DS9 region files from table,  Next: Viewing FITS file contents with DS9 or TOPCAT,  Prev: Generate radial profile,  Up: Installed scripts

10.3 SAO DS9 region files from table
====================================

Once your desired catalog (containing the positions of some objects) is
created (for example, with *note MakeCatalog::, *note Match::, or *note
Table::) it often happens that you want to see your selected objects on
an image to get a feeling of their spatial properties.  For example, you
may want to see their positions relative to each other.

   In this section we describe a simple installed script that is
provided within Gnuastro for converting your given columns to an SAO DS9
region file to help in this process.  SAO DS9(1) is one of the most
common FITS image visualization tools in astronomy and is free software.

* Menu:

* Invoking astscript-ds9-region::  How to call astscript-ds9-region

   ---------- Footnotes ----------

   (1) <http://ds9.si.edu>


File: gnuastro.info,  Node: Invoking astscript-ds9-region,  Prev: SAO DS9 region files from table,  Up: SAO DS9 region files from table

10.3.1 Invoking astscript-ds9-region
------------------------------------

This installed script will read two positional columns within an input
table and generate an SAO DS9 region file to visualize the position of
the given objects over an image.  For more on installed scripts please
see *note Installed scripts::.  This script can be used with the
following general template:

     ## Use the RA and DEC columns of 'table.fits' for the region file.
     $ astscript-ds9-region table.fits --column=RA,DEC \
                            --output=ds9.reg

     ## Select objects with a magnitude between 18 to 20, and generate the
     ## region file directly (through a pipe), each region with radius of
     ## 0.5 arcseconds.
     $ asttable table.fits --range=MAG,18:20 --column=RA,DEC \
           | astscript-ds9-region --column=1,2 --radius=0.5

     ## With the first command, select objects with a magnitude of 25 to 26
     ## as red regions in 'bright.reg'. With the second command, select
     ## objects with a magnitude between 28 to 29 as a green region and
     ## show both.
     $ asttable cat.fits --range=MAG_F160W,25:26 -cRA,DEC \
           | astscript-ds9-region -c1,2 --color=red -obright.reg
     $ asttable cat.fits --range=MAG_F160W,28:29 -cRA,DEC \
           | astscript-ds9-region -c1,2 --color=green \
                         --command="ds9 image.fits -regions bright.reg"

   The input can either be passed as a named file, or from standard
input (a pipe).  Only the ‘--column’ option is mandatory (to specify the
input table columns): two columns from the input table must be
specified, either by name (recommended) or number.  You can optionally
also specify the radius, width and color of the regions with
the ‘--radius’, ‘--width’ and ‘--color’ options; otherwise, default
values will be used for these (described under each option).

   The created region file will be written into the file name given to
‘--output’.  When ‘--output’ is not called, the default name of
‘ds9.reg’ will be used (in the running directory).  If the file exists
before calling this script, it will be overwritten, unless you pass the
‘--dontdelete’ option.  Optionally you can also use the ‘--command’
option to give the full command that should be run to execute SAO DS9
(see example above and description below).  In this mode, the created
region file will be deleted once DS9 is closed (unless you pass the
‘--dontdelete’ option).  A full description of each option is given
below.

‘-h INT/STR’
‘--hdu INT/STR’
     The HDU of the input table when a named FITS file is given as
     input.  The HDU (or extension) can be either a name or number
     (counting from zero).  For more on this option, see *note Input
     output options::.

‘-c STR,STR’
‘--column=STR,STR’
     Identifiers of the two positional columns to use in the DS9 region
     file from the table.  They can either be in WCS (RA and Dec) or
     image (pixel) coordinates.  The mode can be specified with the
     ‘--mode’ option, described below.

‘-n STR’
‘--namecol=STR’
     The column containing the name (or label) of each region.  The type
     of the column (numeric or a character-based string) is irrelevant:
     you can use both types of columns as a name or label for the
     region.  This feature is useful when you need to recognize each
     region with a certain ID or property (for example, magnitude or
     redshift).
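
     For example (with hypothetical table and column names):

          $ astscript-ds9-region table.fits --column=RA,DEC \
                 --namecol=MAG_F160W --output=ds9.reg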

‘-m wcs|img’
‘--mode=wcs|img’
     The coordinate system of the positional columns (can be either
     ‘--mode=wcs’ or ‘--mode=img’).  In the WCS mode, the values within
     the columns are interpreted to be RA and Dec.  In the image mode,
     they are interpreted to be pixel X and Y positions.  This option
     also affects the interpretation of the value given to ‘--radius’.
     When this option is not explicitly given, the columns are assumed
     to be in WCS mode.
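
     For example, to build regions from pixel coordinates (the table and
     column names here are hypothetical):

          $ asttable cat.fits --column=X,Y \
                | astscript-ds9-region --column=1,2 --mode=img --radius=5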

‘-C STR’
‘--color=STR’
     The color to use for created regions.  These will be directly
     interpreted by SAO DS9 when it wants to open the region file so it
     must be recognizable by SAO DS9.  As of SAO DS9 8.2, the recognized
     color names are ‘black’, ‘white’, ‘red’, ‘green’, ‘blue’, ‘cyan’,
     ‘magenta’ and ‘yellow’.  The default color (when this option is not
     called) is ‘green’.

‘-w INT’
‘--width=INT’
     The line width of the regions.  These will be directly interpreted
     by SAO DS9 when it wants to open the region file so it must be
     recognizable by SAO DS9.  The default value is ‘1’.

‘-r FLT’
‘--radius=FLT’
     The radius of all the regions.  In WCS mode, the radius is assumed
     to be in arc-seconds, in image mode, it is in pixel units.  If this
     option is not explicitly given, in WCS mode the default radius is 1
     arc-second and in image mode it is 3 pixels.

‘--dontdelete’
     If the output file name exists, abort the program and do not
     over-write the contents of the file.  This option is thus good if
     you want to avoid accidentally writing over an important file.
     Also, do not delete the created region file when ‘--command’ is
     given (by default, when ‘--command’ is given, the created region
     file will be deleted after SAO DS9 closes).

‘-o STR’
‘--output=STR’
     Write the created SAO DS9 region file into the name given to this
     option.  If not explicitly given on the command-line, a default
     name of ‘ds9.reg’ will be used.  If the file already exists, it
     will be over-written; you can avoid the deletion (or over-writing)
     of an existing file with the ‘--dontdelete’ option.

‘--command="STR"’
     After creating the region file, run the string given to this option
     as a command-line command.  The SAO DS9 region command will be
     appended to the end of the given command.  Because the command will
     most likely contain white-space characters, it is recommended to
     put the given string in double quotations.

     For example, let's assume ‘--command="ds9 image.fits -zscale"’.
     After making the region file (assuming it is called ‘ds9.reg’), the
     following command will be executed:

          ds9 image.fits -zscale -regions ds9.reg

     You can customize all aspects of SAO DS9 with its command-line
     options, therefore the value of this option can be as long and
     complicated as you like.  For example, if you also want the image
     to fit into the window, this option will be: ‘--command="ds9
     image.fits -zscale -zoom to fit"’.  You can see the SAO DS9
     command-line descriptions by clicking on the "Help" menu and
     selecting "Reference Manual".  In the opened window, click on
     "Command Line Options".


File: gnuastro.info,  Node: Viewing FITS file contents with DS9 or TOPCAT,  Next: Zero point estimation,  Prev: SAO DS9 region files from table,  Up: Installed scripts

10.4 Viewing FITS file contents with DS9 or TOPCAT
==================================================

The FITS definition allows for multiple extensions (or HDUs) inside one
FITS file.  Each HDU can have a completely independent dataset inside of
it.  One HDU can be a table, another can be an image, and yet another can
be an independent image.  For example, each image HDU can be one CCD of a
multi-CCD camera; in processed images, one HDU can be the deep science
image and the next can be its weight map; alternatively, one HDU can be
an image and another can be the catalog/table of objects within it.

   The most common software for viewing FITS images is SAO DS9 (see
*note SAO DS9::) and for plotting tables, TOPCAT is the most commonly
used tool in astronomy (see *note TOPCAT::).  After installing them (as
described in the respective appendix linked in the previous sentence),
you can open any number of FITS images or tables with DS9 or TOPCAT with
the commands below:

     $ ds9 image-a.fits image-b.fits
     $ topcat table-a.fits table-b.fits

   But usually the default mode is not enough.  For example, in DS9, the
window can be too small (not covering the height of your monitor), you
probably want to match and lock multiple images, you have a favorite
color map that you prefer to use, or you may want to open a
multi-extension FITS file as a cube.

   Using the simple commands above, you need to manually do all these in
the DS9 window once it opens and this can take several tens of seconds
(which is enough to distract you from what you wanted to inspect).  For
example, if you have a multi-extension file containing 2D images, one
way to load and switch between each 2D extension is to take the
following steps in the SAO DS9 window: "File"→"Open Other"→"Open Multi
Ext Cube" and then choose the Multi extension FITS file in your
computer's file structure.

   The method above is a little tedious to do every time you want to view
a multi-extension FITS file.  A different series of steps is also
necessary if the extensions are 3D data cubes (since they are
already cubes, and should be opened as multi-frame).  Furthermore, if
you have multiple images and want to "match" and "lock" them (so when
you zoom-in to one, all get zoomed-in) you will need several other
sequences of menus and clicks.

   Fortunately SAO DS9 also provides command-line options that you can
use to specify a particular behavior before/after opening a file.  One
of those options is ‘-mecube’ which opens a FITS image as a
multi-extension data cube (treating each 2D extension as a slice in a 3D
cube).  This allows you to flip through the extensions easily while
keeping all the settings similar.  Just to avoid confusion, note that
SAO DS9 does not follow the GNU style of separating long and short
options as explained in *note Arguments and options::.  In the GNU
style, this 'long' (multi-character) option should have been called like
‘--mecube’, but SAO DS9 follows its own conventions.

   For example, try running ‘$ ds9 -mecube foo.fits’ to see the effect
(for example, on the output of *note NoiseChisel::).  If the file has
multiple extensions, a small window will also be opened along with the
main DS9 window.  This small window allows you to slide through the
image extensions of ‘foo.fits’.  If ‘foo.fits’ only consists of one
extension, then SAO DS9 will open as usual.

   On the other hand, for visualizing the contents of tables (that are
also commonly stored in the FITS format), you need to call a different
software (most commonly, people use TOPCAT, see *note TOPCAT::).  And to
make things more inconvenient, by default both of these are only
installed as command-line software, so while you are navigating in your
GUI, you need to open a terminal there, and run these commands.  All of
the issues above are the founding purpose of the installed script that
is introduced in *note Invoking astscript-fits-view::.

* Menu:

* Invoking astscript-fits-view::  How to call this script


File: gnuastro.info,  Node: Invoking astscript-fits-view,  Prev: Viewing FITS file contents with DS9 or TOPCAT,  Up: Viewing FITS file contents with DS9 or TOPCAT

10.4.1 Invoking astscript-fits-view
-----------------------------------

Given any number of FITS files, this script will either open SAO DS9
(for images or cubes) or TOPCAT (for tables) to visualize their contents
in a graphic user interface (GUI).  For more on installed scripts please
see *note Installed scripts::.  This script can be used with the
following general template:

     $ astscript-fits-view [OPTION] input.fits [input-b.fits ...]

One line examples:

     ## Call TOPCAT to load all the input FITS tables.
     $ astscript-fits-view table-*.fits

     ## Call SAO DS9 to open all the input FITS images.
     $ astscript-fits-view image-*.fits

   This script will use Gnuastro's *note Fits:: program to see if the
file is a table or image.  If the first input file contains an image
HDU, then the sequence of files will be given to *note SAO DS9::.
Otherwise, the input(s) will be given to *note TOPCAT:: to visualize
(plot) as tables.  When opening DS9 it will also inspect the
dimensionality of the first image HDU of the first input and open it
slightly differently when the input is 2D or 3D:

2D
     DS9's ‘-mecube’ will be used to open all the 2D extensions of each
     input file as a "Multi-extension cube".  A "Cube" window will also
     be opened with DS9 that can be used to slide/flip through each
     extension.  When multiple files are given, each file will be in
     one "frame".

3D
     DS9's ‘-multiframe’ option will be used to open all the extensions
     in a separate "frame" (since each input is already a 3D cube, the
     ‘-mecube’ option can be confusing).  To flip through the extensions
     (while keeping the slice fixed), click the "frame" button on the
     top row of buttons, then use the last four buttons of the bottom
     row ("first", "previous", "next" and "last") to change between the
     extensions.  If multiple files are given, there will be a separate
     frame for each HDU of each input (each HDU's name or number will be
     put in square brackets after its name).

*Double-clicking on a FITS file to open DS9 or TOPCAT:* for graphic
user interfaces (GUIs) that follow the freedesktop.org standards
(including GNOME, KDE Plasma, or Xfce), Gnuastro installs an
‘astscript-fits-view.desktop’ file to instruct your GUI to call this
script for opening FITS files when you click on them.  To activate this
feature, take the following steps:
  1. Run the following command, while replacing ‘PREFIX’.  If you do not
     know what to put in ‘PREFIX’, run ‘which astfits’ on the
     command-line, and extract ‘PREFIX’ from the output (the string
     before ‘/bin/astfits’).  For more, see *note Installation
     directory::.
          ln -sf PREFIX/share/gnuastro/astscript-fits-view.desktop \
                 ~/.local/share/applications/
  2. Right-click on a FITS file, and choose these items in order (based
     on GNOME, may be different in KDE or Xfce): "Open with other
     application"→"View all applications"→"astscript-fits-view".

This script takes the following options:

‘-h STR’
‘--hdu=STR’
     The HDU (extension) of the input dataset to display.  The value can
     be the HDU name or number (the first HDU is counted from 0).

‘-p STR’
‘--prefix=STR’
     Directory to search for SAO DS9 or TOPCAT's executables (assumed to
     be ‘ds9’ and ‘topcat’).  If not called they will be assumed to be
     present in your ‘PATH’ (see *note Installation directory::).  If
     you do not have them already installed, their installation
     directories are available in *note SAO DS9:: and *note TOPCAT::
     (they can be installed in non-system-wide locations that do not
     require administrator/root permissions).

‘-s STR’
‘--ds9scale=STR’
     The string to give to DS9's ‘-scale’ option.  You can use this
     option to use a different scaling.  The Fits-view script will place
     ‘-scale’ before your given string when calling DS9.  If you do not
     call this option, the default behavior is to call DS9 with ‘-scale
     mode zscale’ (equivalent to ‘--ds9scale="mode zscale"’ in this script).

     The Fits-view script has the following aliases to simplify the
     calling of this option (and avoid the double-quotations and ‘mode’
     in the example above):

     ‘zscale’
          or ‘--ds9scale=zscale’ equivalent to ‘--ds9scale="mode
          zscale"’.
     ‘minmax’
          or ‘--ds9scale=minmax’ equivalent to ‘--ds9scale="mode
          minmax"’.
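
     For example (with a hypothetical ‘image.fits’):

          $ astscript-fits-view image.fits --ds9scale=minmax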

‘-c FLT,FLT’
‘--ds9center=FLT,FLT’
     The central coordinate for DS9's view of the FITS image after it
     opens.  This is equivalent to the "Pan" button in DS9.  The nature
     of the coordinates will be determined by the ‘--ds9mode’ option
     that is described below.

‘-O img/wcs’
‘--ds9mode=img/wcs’
     The coordinate system (or mode) to interpret the values given to
     ‘--ds9center’.  This can either be ‘img’ (or DS9's "Image"
     coordinates) or ‘wcs’ (or DS9's "wcs fk5" coordinates).
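
     For example (the file name and coordinates are hypothetical):

          $ astscript-fits-view image.fits --ds9mode=wcs \
                                --ds9center=40.0627,-8.1512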

‘-g INTxINT’
‘--ds9geometry=INTxINT’
     The initial DS9 window geometry (value to DS9's ‘-geometry’
     option).

‘-m’
‘--ds9colorbarmulti’
     Do not show a single color bar for all the loaded images.  By
     default this script will call DS9 in a way that a single color bar
     is shown for any number of images.  A single color bar is preferred
     for two reasons: 1) when there are a lot of images, they consume a
     large fraction of the display area.  2) the color-bars are locked
     by this script, so there is no difference between them!  With this
     option, you can have separate color bars under each image.


File: gnuastro.info,  Node: Zero point estimation,  Next: Pointing pattern simulation,  Prev: Viewing FITS file contents with DS9 or TOPCAT,  Up: Installed scripts

10.5 Zero point estimation
==========================

Through the "zero point", we are able to give physical units to the
pixel values of an image (often in units of "counts" or ADUs) and thus
compare them with other images (as well as measurements that are done on
them).  The zero point is therefore an important calibration of pixel
values (as astrometry is a calibration of the pixel positions).  The
fundamental concepts behind the zero point are described in *note
Brightness flux magnitude::.  We will therefore not go deeper into the
basics here and stick to the practical aspects of it.

   The purpose of Gnuastro’s ‘astscript-zeropoint’ script is to obtain
the zero point of an image by considering another image (where the zero
point is already known), or a catalog.  The operation involves
multiple lower-level programs in a standard series of steps.  For
example, when using another image, the script will take the following
steps:

  1. Download the Gaia catalog that overlaps with the input image using
     Gnuastro’s Query program (see *note Query::).  This is done to
     determine the stars within the image(1).
  2. Perform aperture photometry(2) with *note MakeProfiles:: and *note
     MakeCatalog::.  We will assume a zero point of 0 for the input
     image.  If the reference is an image, then we should perform
     aperture photometry also in that image.
  3. Match the two catalogs(3) with *note Match::.
  4. The difference between the input and reference magnitudes should be
     independent of the magnitude of the stars.  This does not hold when
     the stars are saturated in one/both the images (giving us a
     bright-limit for the magnitude range to use) or for stars fainter
     than a certain magnitude, where the signal-to-noise ratio drops
     significantly in one/both images (giving us a faint limit for the
     magnitude range to use).
  5. Since a zero point of 0 was used for the input image, the magnitude
     difference above (in the reliable magnitude range) is the zero
     point of the input image.

   In the "Tutorials" chapter of this Gnuastro book, there are two
tutorials dedicated to the usage of this script.  The first uses an
image as a reference (*note Zero point tutorial with reference image::)
and the second uses a catalog (*note Zero point tutorial with reference
catalog::).  For the full set of options and a detailed description of
each, see *note Invoking astscript-zeropoint::.

* Menu:

* Invoking astscript-zeropoint::  How to call the script

   ---------- Footnotes ----------

   (1) Stars have an almost identical shape in the image (as opposed to
galaxies for example), using confirmed stars will produce a more
reliable result.

   (2) For a complete tutorial on aperture photometry, see *note
Aperture photometry::.

   (3) For a tutorial on matching catalogs, see *note Matching
catalogs::.


File: gnuastro.info,  Node: Invoking astscript-zeropoint,  Prev: Zero point estimation,  Up: Zero point estimation

10.5.1 Invoking astscript-zeropoint
-----------------------------------

This installed script will calculate the zero point of an input image to
calibrate it.  A general overview of this script has been published in
Eskandarlou et al.  2023 (https://arxiv.org/abs/2312.04263); please cite
it if this script proves useful in your research.  The reference can be
an image or catalog (which has been previously calibrated).  The
executable name is ‘astscript-zeropoint’, with the following general
template:

     ## Using two reference images and four apertures.
     $ astscript-zeropoint image.fits --hdu=1 \
                           --refimgs=ref-img1.fits,ref-img2.fits \
                           --refimgshdu=1,1 \
                           --refimgszp=22.5,22.5 \
                           --aperarcsec=1.5,2,2.5,3 \
                           --magnituderange=16,18 \
                           --output=output.fits

     ## Using a reference catalog
     $ astscript-zeropoint image.fits --hdu=1 \
                           --refcat=cat.fits \
                           --refcathdu=1 \
                           --aperarcsec=1.5,2,2.5,3 \
                           --magnituderange=16,18 \
                           --output=output.fits

   To learn more about the core concepts behind the zero point, please
see *note Brightness flux magnitude::.  For a practical review of how to
optimally use this script and ways to interpret its results, we have two
tutorials: *note Zero point tutorial with reference image:: and *note
Zero point tutorial with reference catalog::.

   To find the zero point of your input image, this script can use a
reference image (that already has a zero point) or a reference catalog
(that just has magnitudes).  In any case, it is mandatory to identify at
least one aperture for aperture photometry over the image (using
‘--aperarcsec’).  If reference image(s) is(are) given, it is mandatory
to specify its(their) zero point(s) using the ‘--refimgszp’ option (it
can take a separate value for each reference image).  When a catalog is
given, it should already contain the magnitudes of the objects (you can
specify which column to use).

   This script will not estimate the zero point based on all the objects
in the reference image or catalog.  It will first query the Gaia
database and only select objects that have a significant parallax
(because Gaia's
algorithms sometimes confuse galaxies and stars based on pure
morphology).  You can bypass this step (which needs internet connection
and can only be used on real data, not simulations) using the
‘--starcat’ option described in *note zero point options::.  This script
will then match the catalog of stars (either through Gaia or
‘--starcat’) with the reference catalog and only use them.  If the
reference is an image, it will simply use the stars catalog to do
aperture photometry.

   By default, this script will estimate the number of available threads
and run all independent steps in parallel on those threads.  To control
this behavior (and allow it to only run on a certain number of threads),
you can use the ‘--numthreads’ option.

   During its operation, this script will build a temporary file in the
running directory that will be deleted once it is finished.  The
‘--tmpdir’ option can be used to manually set the temporary directory's
location at any location in your file system.  The ‘--keeptmp’ option
can be used to stop the deletion of that directory (useful for when you
want to debug the script or better understand what it does).

* Menu:

* zero point output::           Format of the output.
* zero point options::          List and details of options.


File: gnuastro.info,  Node: zero point output,  Next: zero point options,  Prev: Invoking astscript-zeropoint,  Up: Invoking astscript-zeropoint

10.5.1.1 astscript-zeropoint output
...................................

The output will be a multi-extension FITS table.  The first table in the
output gives the zero point and its standard deviation for all the
requested apertures.  This gives you the ability to inspect them and
select the best.  The other table(s) give the exact measurements for
each star that was used (if you use ‘--keepzpap’, it will be for all
your apertures; if not, only for the aperture with the smallest standard
deviation).  For a full tutorial on how to interpret the output of this
script, see *note Zero point tutorial with reference image::.

   If you just want the estimated zero point with the least standard
deviation, this script will write it as a FITS keyword in the first
table of the output.

‘ZPAPER’
     Read as "Zero Point APERture".  This shows the aperture radius (in
     arcseconds) that had the smallest standard deviation in the
     estimated zero points.
‘ZPVALUE’
     The zero point estimation for the aperture of ‘ZPAPER’.
‘ZPSTD’
     The standard deviation of the zero point (for all the stars used,
     within the aperture of ‘ZPAPER’).
‘ZPMAGMIN’
     The minimum (brightest) magnitude used to estimate the zero point.
‘ZPMAGMAX’
     The maximum (faintest) magnitude used to estimate the zero point.

A simple way to see these keywords, or read the value of one is shown
below.  For more on viewing and manipulating FITS keywords, see *note
Keyword inspection and manipulation::.

     ## See all the keywords written by this script (that start with 'ZP')
     $ astfits out.fits -h1 | grep ^ZP

     ## If you just want the zero point
     $ astfits jplus-zeropoint.fits -h1 --keyvalue=ZPVALUE


File: gnuastro.info,  Node: zero point options,  Prev: zero point output,  Up: Invoking astscript-zeropoint

10.5.1.2 astscript-zeropoint options
....................................

All the operating phases of this script can be customized through
the options below.

‘-h STR/INT’
‘--hdu=STR/INT’
     The HDU/extension of the input image to use.

‘-o STR’
‘--output=STR’
     The name of the output file produced by this script.  See *note
     zero point output:: for the format of its contents.

‘-N INT’
‘--numthreads=INT’
     The number of threads to use.  By default this script will attempt
     to find the number of available threads at run-time and will use
     them.

‘-a FLT,[FLT]’
‘--aperarcsec=FLT,[FLT]’
     The radius/radii (in arc seconds) of aperture(s) used in aperture
     photometry of the input image.  This option can take many values
     (to check different apertures and find the best for a more accurate
     zero point estimation).  If a reference image is used, the same
     aperture radii will be used for aperture photometry there.

‘-M FLT,FLT’
‘--magnituderange=FLT,FLT’
     Range of the magnitude for finding the best aperture and zero
     point.  Very bright stars get saturated and fainter stars are
     affected too much by noise.  Therefore, it is important to limit
     the range of magnitudes used in estimating the zero point.  A full
     tutorial is given in *note Zero point tutorial with reference
     image::.

‘-S STR’
‘--starcat=STR’
     Name of catalog containing the RA and Dec of positions for aperture
     photometry in the input image and reference (catalog or image).  If
     not given, the Gaia database will be queried for all stars that
     overlap with the input image (see *note Available databases::).

     This option is therefore useful in the following scenarios (among
     others):
        • No internet connection.
        • Many images having a major overlap in the sky, making it
          inefficient to query Gaia for every image separately: you can
          query the larger area (containing all the images) once, and
          directly give the downloaded table to all the separate runs of
          this script.  Especially if the field is wide, the download
          time can be the slowest part of this script.
        • In simulations (where you have a pre-defined list of stars).

     Through the ‘--starcathdu’, ‘--starcatra’ and ‘--starcatdec’
     options described below, you can specify the HDU, RA column and Dec
     Column within this file.

     The reference image or catalog probably has many objects that are
     not stars.  But it is only stars that have the same shape (the PSF)
     across the image(1).  Therefore, only the objects in this catalog of
     stars (from Gaia or ‘--starcat’) are used for estimating the zero
     point.
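
     For example, the commands below are a rough sketch of the second
     scenario above (the dataset name, field center, width and column
     names are assumptions; the reference-catalog options follow the
     general examples of *note Invoking astscript-zeropoint::):

          $ astquery gaia --dataset=dr3 --center=53.16,-27.78 \
                     --width=2,2 --column=ra,dec --output=stars.fits
          $ astscript-zeropoint image-a.fits --starcat=stars.fits \
                     --starcatra=ra --starcatdec=dec --refcat=cat.fits \
                     --refcathdu=1 --refcatra=RA --refcatdec=DEC \
                     --refcatmag=MAGNITUDE --aperarcsec=2 \
                     --magnituderange=16,18 --output=zeropoint.fits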

‘--starcathdu=STR/INT’
     The HDU name or number in file given to ‘--starcat’ (described
     above) that contains the table of RA and Dec positions for aperture
     photometry.  If not given, it is assumed that the table is in HDU
     number 1 (counting from 0).

‘--starcatra=STR/INT’
     The column name or number (in the table given to ‘--starcat’) that
     contains the Right Ascension.

‘--starcatdec=STR/INT’
     The column name or number (in the table given to ‘--starcat’) that
     contains the Declination.

‘-c STR’
‘--refcat=STR’
     Reference catalog used to estimate the zero point of the input
     image.  This option is mutually exclusive with (cannot be given at
     the same time as) ‘--refimgs’.  This catalog should have RA, Dec
     and Magnitude of the stars (that match with Gaia or ‘--starcat’).

‘-C STR/INT’
‘--refcathdu=STR/INT’
     The HDU/extension of the reference catalog (given to ‘--refcat’).

‘-r STR’
‘--refcatra=STR’
     Right Ascension column name of the reference catalog.

‘-d STR’
‘--refcatdec=STR’
     Declination column name of the reference catalog.

‘-m STR’
‘--refcatmag=STR’
     Magnitude column name of the reference catalog.

‘-s FLT’
‘--matchradius=FLT’
     Matching radius (in arc seconds) between the stars and the reference
     catalog.  By default it is 0.2 arc seconds.

‘-R STR,[STR]’
‘--refimgs=STR,[STR]’
     Reference image(s) for estimating the zero point.  This option can
     take any number of separate file names, separated by a comma.  The
     HDUs of each reference image should be given to the ‘--refimgshdu’
     option.  In case the images are in separate HDUs of the same file,
     you need to repeat the file name here.  This option is mutually
     exclusive with (cannot be given at the same time as) ‘--refcat’.

‘-H STR/INT’
‘--refimgshdu=STR/INT’
     HDU/Extension name or number of the reference image(s).  The number
     of values given to this option should be the same as the number of
     reference image(s).

‘-z FLT,[FLT]’
‘--refimgszp=FLT,[FLT]’
     Zero point of the reference image(s).  The number of values given
     to this should be the same as the number of names given to
     ‘--refimgs’.

‘-K’
‘--keepzpap’
     Keep the table of separate zero points found for each star for all
     apertures.  By default, this table is only present for the aperture
     that had the least standard deviation in the estimated zero point.

‘-t STR’
‘--tmpdir=STR’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--tmpdir’ above).  This option is useful for debugging
     and checking the outputs of internal steps.

‘--mksrc=STR’
     Use a non-standard Makefile for the script to call.  This option
     is primarily useful during the development of this script and its
     Makefile, not for a normal/regular user.  So if you are not
     developing this script, you can safely ignore this option.  When
     this option is given, the default installed Makefile will not be
     used: the file given to this option will be read by ‘make’ (within
     the script) instead.

‘--cite’
     Give BibTeX and acknowledgment information for citing this script
     within your paper.  For more, see ‘Operating mode options’.

   ---------- Footnotes ----------

   (1) The PSF itself can vary across the field of view; but that is
second-order for this analysis.


File: gnuastro.info,  Node: Pointing pattern simulation,  Next: Color images with gray faint regions,  Prev: Zero point estimation,  Up: Installed scripts

10.6 Pointing pattern simulation
================================

Astronomical images are often composed of many single exposures.  When
the science topic does not depend on the time of observation (for
example galaxy evolution), after completing the observations, we stack
those single exposures into one "deep" image.  Designing the strategy to
take those single exposures is therefore a very important aspect of
planning your astronomical observation.  There are many reasons for
taking many short exposures instead of one long exposure:

   • Modern astronomical telescopes have very high precision (with
     pixels that are often much smaller than an arc-second, or 1/3600
     degrees).  However, the Earth is rotating about its axis, so the sky
     appears to move at roughly 15 degrees every hour!  Keeping the
     (often very large!)  telescopes tracking this fast-moving sky is not
     easy, such that most cannot continue accurate tracking for more than
     about 10 minutes.
   • For ground-based observations, the turbulence of the atmosphere
     changes very fast (on the scale of minutes!).  So if you plan to
     observe for 10 minutes and at the start of your observations the
     seeing is good, it may happen that on the 8th minute, it becomes
     bad.  This will affect the quality of your final exposure!
   • When an exposure is taken, the instrument/environment imprint a lot
     of artifacts on it.  One common example that we also see in normal
     cameras is vignetting (https://en.wikipedia.org/wiki/Vignetting),
     where the center receives a larger fraction of the incoming light
     than the periphery.  In order to characterize and remove such
     artifacts (which depend on many factors at the precision that we
     need in astronomy!), we need to take many exposures of our science
     target.
   • By taking many exposures we can build a stack that has a higher
     resolution; this is often done in under-sampled data, like those in
     the Hubble Space Telescope (HST) or James Webb Space Telescope
     (JWST).
   • The scientific target can be larger than the field of view of your
     telescope and camera.

   In the jargon of observational astronomers, each exposure is also
known as a "dither" (literally/generally meaning "trembling" or
"vibration").  This name was chosen because two exposures are not
usually taken on exactly the same position of the sky (known as
"pointing").  In order to improve all the items above, we often move the
center of the field of view from one exposure to the next.  In most
cases this movement is small compared to the field of view, so most of
the central part of the final stack has a fixed depth, but the edges are
shallower (conveying a sense of vibration).  When the spacing between
pointings is large, they are known as an "offset".  A "pointing" is used
to refer to either a dither or an offset.

   For example see Figures 3 and 4 of Illingworth et al.  2013
(https://arxiv.org/pdf/1305.1931.pdf) which show the exposures that went
into the XDF survey.  The pointing pattern can also be large compared to
the field of view, for example see Figure 1 of Trujillo et al.  2021
(https://arxiv.org/pdf/2109.07478.pdf), which shows the pointing strategy
for the LIGHTS survey.  These types of images (where each pixel contains
the number of exposures, or time, that were used in it) are known as
exposure maps.

   The pointing pattern therefore is strongly defined by the science
case (high-level purpose of the observation) and your telescope's field
of view.  For example in the XDF survey is focused on very high redshift
(most distant!)  galaxies.  These are very small objects and within that
small footprint (of just 1 arcmin) we have thousands of them.  However,
the LIGHTS survey is focused on the halos of large nearby galaxies (that
can be more than 10 arcminutes wide!).

   Gnuastro's ‘astscript-pointing-simulate’ (one of its *note Installed
scripts::) is described in detail in *note Invoking
astscript-pointing-simulate::.  This script is designed to
simplify the process of selecting the best pointing pattern for your
observation strategy.  For a practical tutorial on using this script,
see *note Pointing pattern design::.

* Menu:

* Invoking astscript-pointing-simulate::  Options and running mode.


File: gnuastro.info,  Node: Invoking astscript-pointing-simulate,  Prev: Pointing pattern simulation,  Up: Pointing pattern simulation

10.6.1 Invoking astscript-pointing-simulate
-------------------------------------------

This installed script will simulate a final stacked image from a certain
pointing pattern (given as a table).  A general overview of this script
has been published in Akhlaghi (2023)
(https://ui.adsabs.harvard.edu/abs/2023RNAAS...7..211A); please cite it
if this script proves useful in your research.  The executable name is
‘astscript-pointing-simulate’, with the following general template:

     $ astscript-pointing-simulate [OPTION...] pointings.fits

Examples (for a tutorial, see *note Pointing pattern design::):

     $ astscript-pointing-simulate pointing.fits --output=stack.fits \
                --img=image.fits --center=10,10 --width=1,1

   The default output of this script is a stacked image that results
from placing the given image (given to ‘--img’) in the pointings of a
pointing pattern.  The Right Ascension (RA) and Declination (Dec) of
each pointing are given in the main input catalog (‘pointing.fits’ in the
example above).  The center and width of the final stack (both in
degrees by default) should be specified using the ‘--center’ and
‘--width’ options.
Therefore, in order to successfully run, this script at least needs the
following four inputs:
Pointing positions
     A table containing the RA and Dec of each pointing (the only input
     argument).  The actual column names that contain them can be set
     with the ‘--racol’ and ‘--deccol’ options (see below).
An image
     This is used for its distortion and rotation, its pixel values and
     position on the sky will be ignored.  The file containing the image
     should be given to the ‘--img’ option.
Stack's central coordinate
     The central RA and Dec of the finally produced stack (given to the
     ‘--center’ option).
Stack's width
     The width (in degrees) of the final stack (given to the ‘--width’
     option).

   This script will ignore the pixel values of the reference image
(given to ‘--img’) and its reference coordinates (the values of
‘CRVAL1’ and ‘CRVAL2’ in its WCS keywords).  For each pointing, this
script will put the given RA and Dec into the ‘CRVAL1’ and ‘CRVAL2’
keywords of a copy of the input (not changing the input in any way),
and reset that copy's pixel values to 1.  The script will then warp the
modified copy onto the final pixel grid (accounting for any rotation
and distortion of the original input).  This process is done for all
the pointings in parallel.  Finally, all the exposures in the pointing
list are stacked together to produce an exposure map (showing how many
exposures go into each pixel of the final stack).
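
   For example, the core of this process could be reproduced manually
for a single pointing with the commands below (a minimal sketch only:
the RA of 10.1 and Dec of 9.9 are hypothetical values taken from one
row of the pointing table, and you may need to adjust the HDUs for your
own files):

     ## Reset all pixel values of a copy of the reference image to 1
     ## (blank pixels stay blank).
     $ astarithmetic image.fits 0 x 1 + -g1 --output=ones.fits

     ## Move the WCS reference point to this pointing's RA and Dec.
     $ astfits ones.fits -h1 --update=CRVAL1,10.1 --update=CRVAL2,9.9

     ## Warp onto the requested output grid; stacking such images over
     ## all the pointings (for example with Arithmetic's 'sum'
     ## operator) gives the exposure map.
     $ astwarp ones.fits -h1 --center=10,10 --width=1,1 \
               --output=on-grid.fits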

   Except for the table of pointing positions, the rest of the inputs
and settings are configured through *note Options::, just note the
restrictions in *note Installed scripts::.

‘-o STR’
‘--output=STR’
     Name of the output.  The output is an image of the requested size
     (‘--width’) and position (‘--center’) in the sky, but each pixel
     will contain the number of exposures that go into it after the
     pointing has been done.  See description above for more.

‘-h STR/INT’
‘--hdu=STR/INT’
     The name or counter (counting from zero; from the FITS standard) of
     the HDU containing the table of pointing positions (the file
     name of this table is the main input argument to this script).  For
     more, see the description of this option in *note Input output
     options::.

‘-i STR’
‘--img=STR’
     The reference image.  The pixel values and central location in
     this image will be ignored by the script.  The only relevant
     information for this script is the WCS properties (except for
     ‘CRVAL1’ and ‘CRVAL2’, which connect it to a certain position on
     the sky) and the image size.  See the description above for more.

‘-H STR/INT’
‘--imghdu=STR/INT’
     The name or counter (counting from zero; from the FITS standard) of
     the HDU containing the reference image (file name should be given
     to the ‘--img’ option).  If not given, a default value of ‘1’ is
     assumed; so this is not a mandatory option.

‘-r STR/INT’
‘--racol=STR/INT’
     The name or counter (counting from 1; from the FITS standard) of
     the column containing the Right Ascension (RA) of each pointing to
     be used in the pointing pattern.  The file containing the table is
     given to this script as its only argument.

‘-d STR/INT’
‘--deccol=STR/INT’
     The name or counter (counting from 1; from the FITS standard) of
     the column containing the Declination (Dec) of each pointing to be
     used in the pointing pattern.  The file containing the table is
     given to this script as its only argument.

‘-C FLT,FLT’
‘--center=FLT,FLT’
     The central RA and Declination of the final stack in degrees.

‘-w FLT,FLT’
‘--width=FLT,FLT’
     The width of the final stack in degrees.  If ‘--widthinpix’ is
     given, the two values given to this option will be interpreted as
     a number of pixels (not degrees).

‘--widthinpix’
     Interpret the values given to ‘--width’ as the number of pixels
     along each dimension, not as degrees.

‘--ctype=STR,STR’
     The projection of the output stack (‘CTYPEi’ keyword in the FITS
     WCS standard).  For more, see the description of the same option in
     *note Align pixels with WCS considering distortions::.

‘--hook-warp-before='STR'’
     Command to run before warping each exposure into the output pixel
     grid.  By default, the exposure is immediately warped to the final
     pixel grid, but in some scenarios it is necessary to do some
     operations on the exposure before warping (for example account for
     vignetting; see *note Accounting for non-exposed pixels::).  The
     warping of each exposure is done in parallel by default; therefore
     there are pre-defined variables that you should use for the input
     and output file names of your command:
     ‘$EXPOSURE’
          Input: name of file with the same size as the reference image
          with all pixels having a fixed value of 1.  The WCS has also
          been corrected based on the pointing pattern.
     ‘$TOWARP’
          Output: name of the expected output of your hook.  If it is
          not created by your script, the script will complain and
          abort.  This file will be given to Warp to be warped into the
          output pixel grid.

     For an example of using hooks with an extended discussion, see
     *note Pointing pattern design:: and *note Accounting for
     non-exposed pixels::.

     To develop your command, you can use ‘--hook-warp-before='...; echo
     GOOD; exit 1'’ (where ‘...’ can be replaced by any command) and run
     the script on a single thread (with ‘--numthreads=1’) to produce a
     single file and simplify the checking that your desired operation
     works as expected.  All the files will be within the temporary
     directory (see ‘--tmpdir’).
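
     For example, a hypothetical pre-warp hook could propagate known
     bad (for example, vignetted) pixels into every exposure by adding
     a mask image to it.  Here, ‘bad.fits’ is an assumed file with the
     same size as the reference image, holding 0 for usable pixels and
     NaN for bad ones (the NaN values propagate through the addition);
     you may need to adjust the HDUs of the ‘astarithmetic’ call for
     your own files:

          $ astscript-pointing-simulate pointing.fits --img=image.fits \
            --center=10,10 --width=1,1 --output=stack.fits \
            --hook-warp-before='astarithmetic $EXPOSURE bad.fits + -g1 -o$TOWARP'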

‘--hook-warp-after='STR'’
     Command to run after the warp of each exposure into the output
     pixel grid, but before the stacking of all exposures.  For more on
     hooks, see the description of ‘--hook-warp-before’, *note Pointing
     pattern design:: and *note Accounting for non-exposed pixels::.

     ‘$WARPED’
          Input: name of file containing the warped exposure in the
          output pixel grid.
     ‘$TOSTACK’
          Output: name of the expected output of your hook.  If it is
          not created by your hook, the script will complain and abort.
          This file will be stacked with the corresponding files of all
          the other exposures to produce the final output.

‘--stack-operator="STR"’
     The operator to use for stacking the warped individual exposures
     into the final output of this script.  For the full list, see *note
     Stacking operators::.  By default it is the ‘sum’ operator (to
     produce an output exposure map).  For an example usage, see the
     tutorial in *note Pointing pattern design::.

‘--mksrc=STR’
     Use a non-standard Makefile for the script to call.  This option
     is primarily useful during the development of this script and its
     Makefile, not for normal/regular usage.  So if you are not
     developing this script, you can safely ignore this option.  When
     this option is given, the default installed Makefile will not be
     used: the file given to this option will be read by ‘make’ (within
     the script) instead.

‘-t STR’
‘--tmpdir=STR’
     Name of directory containing temporary files.  If not given, a
     temporary directory will be created in the running directory with a
     long name using some of the input options.  By default, this
     temporary directory will be deleted after the output is created.
     You can disable the deletion of the temporary directory (useful for
     debugging!)  with the ‘--keeptmp’ option.

     Using this option has multiple benefits in larger pipelines (for
     an example command, see below):
        • You can avoid name conflicts when the automatically generated
          default name would be the same for different runs.
        • You can put this directory somewhere else in the file system,
          for example to avoid mixing temporary files with your source
          files, or to use other storage hardware that is mounted on
          the running system.
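
     For example, a larger pipeline might put the temporary files of
     each run on a separate scratch file system and keep them for later
     inspection (the directory name below is hypothetical):

          $ astscript-pointing-simulate pointing.fits --img=image.fits \
            --center=10,10 --width=1,1 --output=stack.fits \
            --tmpdir=/scratch/pointing-tmp --keeptmp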

‘-k’
‘--keeptmp’
     Keep the temporary directory (and do not delete it).

‘-?’
‘--help’
     Print a list of all the options, along with a short description and
     context for the program.  For more, see ‘Operating mode options’.

‘-N INT’
‘--numthreads=INT’
     The number of threads to use for parallel operations (warping the
     input into the different pointing points).  If not given (by
     default), the script will try to find the number of available
     threads on the running system and use that.  For more, see
     ‘Operating mode options’.

‘--cite’
     Give BibTeX and acknowledgment information for citing this script
     within your paper.  For more, see ‘Operating mode options’.

‘-q’
‘--quiet’
     Do not print the series of commands or their outputs in the
     terminal.  For more, see ‘Operating mode options’.

‘-V’
‘--version’
     Print the version of the running Gnuastro along with a copyright
     notice and list of authors that contributed to this script.  For
     more, see ‘Operating mode options’.


File: gnuastro.info,  Node: Color images with gray faint regions,  Next: PSF construction and subtraction,  Prev: Pointing pattern simulation,  Up: Installed scripts

10.7 Color images with gray faint regions
=========================================

Typical astronomical images have a very wide range of pixel values and
generally, it is difficult to show the entire dynamical range in a color
image.  For example, by using *note ConvertType::, it is possible to
obtain a color image with three FITS images as each of the
Red-Green-Blue (or RGB) color channels.  However, depending on the pixel
distribution, it could be very difficult to see the different regions
together (faint and bright objects at the same time).  In something like
DS9, you end up changing the color map parameters to see the regions you
are most interested in.

   The reason is that images usually have a lot of faint pixels (near to
the sky background or noise values), and few bright pixels
(corresponding to the center of stars, galaxies, etc.)  that can be
millions of times brighter!  As a consequence, by considering the images
without any modification, it is extremely hard to visualize the entire
range of values in a color image.  This is because standard color
formats like JPEG, TIFF or PDF are defined as 8-bit integer precision,
while astronomical data are usually 32-bit floating point!  To solve
this issue, it is possible to perform some transformations of the images
and then obtain the color image.

   This is actually what the current script does: it makes some
non-linear transformations and then uses Gnuastro's ConvertType to
generate the color image.  There are several parameters and options in
order to change the final output that are described in *note Invoking
astscript-color-faint-gray::.  A full tutorial describing this script
with actual data is available in *note Color images with full dynamic
range::.  A general overview of this script is published in
Infante-Sainz et al.  2024 (https://arxiv.org/abs/2401.03814); please
cite it if this script proves useful in your research.

* Menu:

* Invoking astscript-color-faint-gray::  Details of options and arguments.


File: gnuastro.info,  Node: Invoking astscript-color-faint-gray,  Prev: Color images with gray faint regions,  Up: Color images with gray faint regions

10.7.1 Invoking astscript-color-faint-gray
------------------------------------------

This installed script will consider several images to combine them into
a single color image to visualize the full dynamic range.  The
executable name is ‘astscript-color-faint-gray’, with the following
general template:

     $ astscript-color-faint-gray [OPTION...] r.fits g.fits b.fits

Examples (for a tutorial, see *note Color images with full dynamic
range::):

     ## Generate a color image from three images with default options.
     $ astscript-color-faint-gray r.fits g.fits b.fits -g1 --output color.pdf

     ## Generate a color image, consider the minimum value to be zero.
     $ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
                                --minimum=0.0 --output=color.jpg

     ## Generate a color image considering different zero points, minimum
     ## values, weights, and also increasing the contrast.
     $ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
                           -z=22.4 -z=25.5 -z=24.6 \
                           -m=-0.1 -m=0.0 -m=0.1 \
                           -w=1 -w=2 -w=3 \
                           --contrast=3 \
                           --output=color.tiff


   This script takes three input images to generate an RGB color image
as the output.  The order of the images matters: first the reddest
(longest wavelength) filter (R), then an intermediate wavelength filter
(G), and finally the bluest (shortest wavelength) filter (B).  In
astronomy, these can be any filters (for example from infra-red, radio,
optical or X-ray); the "RGB" designation comes from the general
definition of colors (see
<https://en.wikipedia.org/wiki/RGB_color_spaces>).  These images are
internally manipulated by a series of non-linear transformations and
normalized to homogenize them, and finally combined into a color image.
In general, for typical astronomical images, the default output is an
image with bright pixels in color and the faintest (noise) pixels in
gray.

   The option ‘--minimum’ sets the minimum value to be shown; it is a
key parameter and is usually a value close to the sky background level.
The current non-linear transformation is from Lupton et al.  2004
(https://ui.adsabs.harvard.edu/abs/2004PASP..116..133L), which we call
the "asinh" transformation.  The two important parameters that control
this transformation are ‘--qbright’ and ‘--stretch’.  With the option
‘--coloronly’, it is possible to generate a color image with the
background in black: bright pixels in color and the sky background (or
noise) values in black.  It is also possible to provide a fourth image
(K) that will be used for showing the gray region; in that case the
input order is: R, G, B, K.

   The generation of a good color image is something that requires
several trials, so we encourage you to experiment with the different
parameters.  After some testing, we find it useful to follow the steps
below (an example command combining them is given after the list).  For
a more complete description of the logic of the process, see the
dedicated tutorial in *note Color images with full dynamic range::.

  1. Use the default options to estimate the parameters.  By running the
     script with no options at all, it will estimate the parameters and
     they will be printed on the command-line.
  2. Select a good sky background value of the images.  If the sky
     background has been subtracted, a minimum value of zero could be a
     good option: ‘--minimum=0.0’.
  3. Focus on the bright regions to tweak ‘--qbright’ and ‘--stretch’.
     First, try low values of ‘--qbright’ to show the bright parts.
     Then, adjust ‘--stretch’ to show the fainter regions around bright
     parts.  Overall, play with these two parameters to show the color
     regions appropriately.
  4. Change ‘--colorval’ to separate the color and black regions.  This
     is the lowest value of the threshold image that is shown in color.
  5. Change ‘--grayval’ to separate the black and gray regions.  This
     is the highest value of the threshold image that is shown in gray.
  6. Use ‘--checkparams’ to check the pixel value distributions.
  7. Use ‘--keeptmp’ to not remove the threshold image and check it.
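
   For example, after a few iterations of these steps, the command may
converge to something like the one below (a sketch only: every numeric
value here is a hypothetical starting point that has to be tuned for
your own images):

     $ astscript-color-faint-gray r.fits g.fits b.fits -g1 \
                --minimum=0.0 --qbright=0.01 --stretch=100 \
                --colorval=30 --grayval=15 --output=color.pdf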

A full description of each option is given below:

‘-h’
‘--hdu=STR/INT’
     Input HDU name or counter (counting from 0) for each input FITS
     file.  If the same HDU should be used from all the FITS files, you
     can use the ‘--globalhdu’ option described below to avoid repeating
     this option.

‘-g’
‘--globalhdu=STR/INT’
     Use the value given to this option (a HDU name or a counter,
     starting from 0) for the HDU identifier of all the input FITS
     files.  This is useful when all the inputs are distributed in
     different files, but have the same HDU in those files.

‘-o’
‘--output’
     Output color image name.  The output can be in any of the
     recognized output formats of ConvertType (including PDF, JPEG and
     TIFF).

‘-m’
‘--minimum=FLT’
     Minimum value to be mapped for each R, G, B, and K FITS images.  If
     a single value is given to this option it will be used for all the
     input images.

     This parameter controls the smallest visualized pixel value.  In
     general, it is a good decision to set this value close to the sky
     background level.  This value can dramatically change the output
     color image (especially when there are large negative values in the
     image that you do not intend to visualize).
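
     For example, one possible (rough) way of estimating a value for
     this option is the sigma-clipped median of each channel with
     Gnuastro's Statistics program (a sketch; on sky-subtracted images
     this will be close to zero):

          $ aststatistics r.fits -h1 --sigclip-median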

‘-Z’
‘--zeropoint=FLT’
     Zero point value for each R, G, B, and K FITS images.  If a single
     value is given, it is used for all the input images.

     Internally, the zero point values are used to transform the pixel
     values into units of Janskys.  The units are not important for a
     color image, but the fact that the images are photometrically
     calibrated is important for obtaining an output color image whose
     color distribution is realistic.

‘-w’
‘--weight=FLT’
     Relative weight for each R, G, B channel.  With this parameter, it
     is possible to change the importance of each channel to modify the
     color balance of the image.

     For example, ‘-w=1 -w=2 -w=5’ indicates that the B band will be 5
     times more important than the R band, and that the G band is 2
     times more important than the R channel.  In this particular
     example, the combination will be done as
     $\rm{colored}=(1{\times}\rm{R}+2{\times}\rm{G}+5{\times}\rm{B})/(1
     + 2 + 5)=0.125{\times}\rm{R} + 0.250{\times}\rm{G} +
     0.625{\times}\rm{B}$.

     In principle, a color image should recreate "real" colors, but
     "real" is a very subjective matter and with this option, it is
     possible to change the color balance and make it more aesthetically
     interesting.  However, be careful to avoid confusing the viewers of
     your image and report the weights with the filters you used for
     each channel.  It is up to the user to use this parameter
     carefully.

‘-Q’
‘--qbright=FLT’
     It is one of the parameters that control the asinh transformation.
     It should be used in combination with ‘--stretch’.  In general, it
     has to be set to low values to better show the brightest regions.
     Afterwards, adjust ‘--stretch’ to set the linear stretch (show the
     intermediate/faint structures).

‘-s’
‘--stretch=FLT’
     It is one of the parameters that control the asinh transformation.
     It should be used in combination with ‘--qbright’.  It is used for
     bringing out the faint/intermediate bright structures of the image
     that are shown linearly.  In general, this parameter is chosen
     after setting ‘--qbright’ to a low value.

     *The asinh transformation.*  The asinh transformation is done on
     the stacked R, G, B image.  It consists of modifying the stacked
     image (I) in order to show the entire dynamical range
     appropriately following the expression: $f(I) = asinh($ ‘qbright’
     $\cdot$ ‘stretch’ $\cdot I) /$ ‘qbright’.  See *note Color images
     with full dynamic range:: for a complete tutorial that shows the
     intricacies of this transformation with step-by-step examples.
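
     As a rough guide from the asymptotic behavior of this function:
     for small arguments $asinh(x)\approx x$, so the faintest pixels
     are mapped almost linearly (as ‘stretch’ $\cdot I$); for large
     arguments $asinh(x)\approx \ln(2x)$, so the brightest pixels are
     compressed logarithmically, on a scale set by ‘qbright’.  This is
     consistent with the advice above: ‘--qbright’ mainly controls the
     bright regions, while ‘--stretch’ controls the faint and
     intermediate (linear) regions.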

‘--coloronly’
     By default, the fainter parts of the image are shown in grayscale
     (not color, since colored noise is not too informative).  With this
     option, the output image will be fully in color with the background
     (noise pixels) in black.

‘--colorval=FLT’
     The value that separates the color and black regions.  By default,
     it ranges from 100 (all pixels shown in color) to 0 (all pixels
     shown in black).  Check the histogram ‘FOR COLOR and GRAY
     THRESHOLDS’ with the option ‘--checkparams’ for selecting a good
     value.

‘--grayval=FLT’
     This parameter defines the value that separates the black and gray
     regions.  It ranges from 100 (all pixels becoming black) to 0 (all
     pixels becoming white).  Check the histogram ‘FOR COLOR and GRAY
     THRESHOLDS’ with the option ‘--checkparams’ to select the value.

‘--regions=STR’
     Labeled image, identifying the pixels to use for color (value 2),
     those to use for black (value 1) and those to use for gray (value
     0).  When this option is given the ‘--colorval’ and ‘--grayval’
     options will be ignored.  This gives you the freedom to select the
     pixels to show in color, black or gray based on any criteria that
     is relevant for your purpose.  For an example of using this option
     to get a physically motivated threshold, see *note Manually setting
     color-black-gray regions::.

     *IMPORTANT NOTE.* The options ‘--colorval’ and ‘--grayval’ are
     related to each other.  They are defined from the threshold image
     (an image generated in the temporary directory) named
     ‘colorgray_threshold.fits’.  By default, this image is computed
     from the stack and later asinh-transformation of the three R, G, B
     channels.  Its pixel values range from 100 (brightest) to 0
     (faintest).  The ‘--colorval’ value computed by default is the
     median of this image: pixels above this value are shown in color,
     and pixels below it are shown in gray.  Regions of pure black can
     be defined with the ‘--grayval’ option if its value is between 0
     and ‘--colorval’.  In other words: the color region is defined by
     the pixels between 100 and ‘--colorval’; the pure black region by
     the pixels between ‘--colorval’ and ‘--grayval’; and the gray
     region by the pixels between ‘--grayval’ and 0.

     If a fourth image is provided as the "K" channel, then this image
     is used as the threshold image.  See *note Color images with full
     dynamic range:: for a complete tutorial.

‘--colorkernelfwhm=FLT’
     Gaussian kernel FWHM (in pixels) for convolving the color regions.
     Sometimes, a convolution of the color regions (bright pixels) is
     desired to further increase their signal-to-noise ratio (but make
     them look smoother).  With this option, the kernel will be created
     internally and convolved with the colored regions.

‘--graykernelfwhm=FLT’
     Gaussian kernel FWHM (in pixels) for convolving the background
     image.  Sometimes, a convolution of the background image is
     necessary to smooth the noisier regions and increase their
     signal-to-noise ratios.  With this option, the kernel will be
     created internally and convolved with the background (gray)
     regions.

‘-b’
‘--bias=FLT’
     Change the brightness of the final image.  By increasing this
     value, a pedestal value will be added to the color image.  This
     option is rarely needed; it is more common to use ‘--contrast’
     (see below).

‘-c’
‘--contrast=FLT’
     Change the contrast of the final image.  The transformation is:
     $\rm{output}=\rm{contrast}\times{image}+brightness$.

‘-G’
‘--gamma=FLT’
     Gamma exponent value for a gamma transformation.  This
     transformation is not linear: $\rm{output}=\rm{image}^{gamma}$.
     This option overrides ‘--bias’ or ‘--contrast’.

‘--markoptions=STR’
     Options to draw marks on the final output image.  Anything given to
     this option is passed directly to ConvertType in order to draw
     marks on the output image.  For example, if you construct a table
     named ‘marks.txt’ that contains the column names: x, y, shape,
     size, axis ratio, angle, color; you will execute the script with
     the following option: ‘--markoptions="--marks=marks.txt
     --markcoords=x,y --markshape=shape --marksize=size,axisratio
     --markrotate=angle --markcolor=color"’.  See *note Drawing with
     vector graphics:: for more information on how to draw markers and
     *note Weights contrast markers and other customizations:: for a
     tutorial.

‘--checkparams’
     Print the statistics of intermediate images that are used for
     estimating the parameters.  This option is useful to decide the
     optimum set of parameters.

‘--keeptmp’
     Do not remove the temporary directory.  This is useful for
     debugging and checking the outputs of internal steps.

‘--cite’
     Give BibTeX and acknowledgment information for citing this script
     within your paper.  For more, see ‘Operating mode options’.

‘-q’
‘--quiet’
     Do not print the series of commands or their outputs in the
     terminal.  For more, see ‘Operating mode options’.

‘-V’
‘--version’
     Print the version of the running Gnuastro along with a copyright
     notice and list of authors that contributed to this script.  For
     more, see ‘Operating mode options’.


File: gnuastro.info,  Node: PSF construction and subtraction,  Prev: Color images with gray faint regions,  Up: Installed scripts

10.8 PSF construction and subtraction
=====================================

The point spread function (PSF) describes how the light of a point-like
source is affected by several optical scattering effects (atmosphere,
telescope, instrument, etc.).  Since the light of all astrophysical
sources undergoes all these effects, a good characterization of the PSF
is fundamental to any analysis, for small and large objects alike.

   In some situations(1) a parametric (analytical) model is sufficient
for the PSF (such as Gaussian or Moffat, see *note PSF::).  However,
once you are interested in objects that are larger than a handful of
pixels, it is almost impossible to find an analytic function to
adequately characterize the PSF. Therefore, it is necessary to obtain an
empirical (non-parametric) and extended PSF. In this section we describe
a set of installed scripts in Gnuastro that will let you construct the
non-parametric PSF using point-like sources.  They allow you to derive
the PSF from the same astronomical images that the science is derived
from (without assuming any analytical function).

   The scripts are based on the concepts described in Infante-Sainz et
al.  2020 (https://arxiv.org/abs/1911.01430).  But to be complete, we
first give a summary of the logic and overview of their combined usage
in *note Overview of the PSF scripts::.  Furthermore, before going into
the technical details of each script, we encourage you to go through the
tutorial that is devoted to this at *note Building the extended PSF::.
The tutorial uses a real dataset and includes all the logic and
reasoning behind every step of the usage in every installed script.

* Menu:

* Overview of the PSF scripts::  Summary of concepts and methods
* Invoking astscript-psf-select-stars::  Select good starts within an image.
* Invoking astscript-psf-stamp::  Make a stamp of each star to stack.
* Invoking astscript-psf-unite::  Merge stacks of different regions of PSF.
* Invoking astscript-psf-scale-factor::  Calculate factor to scale PSF to star.
* Invoking astscript-psf-subtract::  Put the PSF in the image to subtract.

   ---------- Footnotes ----------

   (1) An example scenario where a parametric PSF may be enough: you are
only interested in very small, high redshift objects that only extended
a handful of pixels.


File: gnuastro.info,  Node: Overview of the PSF scripts,  Next: Invoking astscript-psf-select-stars,  Prev: PSF construction and subtraction,  Up: PSF construction and subtraction

10.8.1 Overview of the PSF scripts
----------------------------------

To obtain an extended and non-parametric PSF, several steps are
necessary and we will go through them here.  The fundamental ideas of
the following methodology are thoroughly described in Infante-Sainz et
al.  2020 (https://arxiv.org/abs/1911.01430).  A full tutorial is also
available in *note Building the extended PSF::.  The tutorial will go
through the full process on a pre-selected dataset, but will describe
the logic behind every step in a way that can easily be
modified/generalized to other datasets.

   This section is basically just a summary of that tutorial.  We could
have put all these steps into one large program (installed script),
however this would introduce several problems.  The most prominent of
these problems are:

   • The command would require _many_ options, making it very complex to
     run every time.
   • You usually have many stars in an image, and many of the steps can
     be optimized or parallelized depending on the particular analysis
     scenario.  Predicting all the possible optimizations for all the
     possible usage scenarios would make the code extremely complex
     (filled with many unforeseen bugs!).

   Therefore, following the modularity principle of software
engineering, after several years of working on this, we have broken the
full job into the smallest number of independent steps as separate
scripts.  All scripts are independent of each other, meaning that
you are free to use all of them as you wish (for example, only some of
them, using another program for a certain step, using them for other
purposes, or running independent parts in parallel).

   For constructing the PSF from your dataset, the first step is to
obtain a catalog of stars within it (you cannot use galaxies to build
the PSF!). But you cannot blindly use all the stars either!  For
example, we do not want contamination from other bright, and nearby
objects.  The first script below is therefore designed for selecting
only good star candidates in your image.  It will use different
criteria, for example, good parallax (where available, to avoid
confusion with galaxies), not being near to bright stars, axis ratio,
etc.  For more on this script, see *note Invoking
astscript-psf-select-stars::.

   Once the catalog of stars is constructed, another script is in charge
of making appropriate stamps of the stars.  Each stamp is a cropped
image of the star with the desired size, normalization of the flux, and
mask of the contaminant objects.  For more on this script, see *note
Invoking astscript-psf-stamp::.  After obtaining a set of star stamps,
they can be stacked for obtaining the combined PSF from many stars (for
example, with *note Stacking operators::).
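
   For example, if three such stamps were saved as ‘stamp-1.fits’,
‘stamp-2.fits’ and ‘stamp-3.fits’ (hypothetical file names), they could
be combined with Arithmetic's ‘mean’ stacking operator (blank pixels
are ignored in the stack):

     $ astarithmetic stamp-1.fits stamp-2.fits stamp-3.fits 3 mean \
                     -g1 --output=psf-stack.fits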

   In the combined PSF, the masked background objects of each star's
image will be covered and the signal-to-noise ratio will increase,
giving a very nice view of the "clean" PSF. However, it is usually
necessary to obtain different regions of the same PSF from different
stars.  For example, to construct the far outer wings of the PSF, it is
necessary to consider very bright stars.  However, these stars will be
saturated in the innermost part, and immediately outside of the
saturation level, they will be deformed due to non-linearity effects.
Consequently, fainter stars are necessary for the inner regions.

   Therefore, you need to repeat the steps above for certain stars (in a
certain magnitude range) to obtain the PSF in certain radial ranges.
For example, in Infante-Sainz et al.  2020
(https://arxiv.org/abs/1911.01430), the final PSF was constructed from
three regions (and thus, using stars from three ranges in magnitude).
In other cases, we even needed four groups of stars!  But in the example
dataset from the tutorial, only two groups are necessary (see *note
Building the extended PSF::).

   Once clean stacks of different parts of the PSF have been constructed
through the steps above, it is necessary to blend them all into one.
This is done by finding a common radial region in both, and scaling the
inner region by a factor before adding it to the outer region.  This is
not trivial, so a third script is in charge of it; see *note Invoking
astscript-psf-unite::.

   Having constructed the PSF as described above (or by any other
procedure), it can be scaled to the magnitude of the various stars in
the image to get subtracted (and thus remove the extended/bright wings;
better showing the background objects of interest).  Note that the
absolute flux of a PSF is meaningless (and in fact, it is usually
normalized to have a total sum of unity!), so it should be scaled.  We
therefore have another script that will calculate the scale
(multiplication) factor of the PSF for each star.  For more on the
scaling script, see *note Invoking astscript-psf-scale-factor::.

   Once the flux factor has been computed, a final script is in charge
of placing the scaled PSF over the proper location in the image, and
subtracting it.  It is also possible to only obtain the modeled star by
the PSF. For more on the scaling and positioning script, see *note
Invoking astscript-psf-subtract::.

   As mentioned above, in the following sections, each script has its
own documentation and list of options for very detailed customization
(if necessary).  But if you are new to these scripts, before continuing,
we recommend that you do the tutorial *note Building the extended PSF::.
Just do not forget to run every command yourself, and to tweak its
steps based on the logic, so that you understand it well.


File: gnuastro.info,  Node: Invoking astscript-psf-select-stars,  Next: Invoking astscript-psf-stamp,  Prev: Overview of the PSF scripts,  Up: PSF construction and subtraction

10.8.2 Invoking astscript-psf-select-stars
------------------------------------------

This installed script will select good star candidates for constructing
a PSF. It will consider stars within a given range of magnitudes without
nearby contaminant objects.  To do that, it allows the user to specify
the different options described here.  A complete tutorial is
available to show the operation of this script as a modular component to
extract the PSF of a dataset: *note Building the extended PSF::.  The
executable name is ‘astscript-psf-select-stars’, with the following
general template:

     $ astscript-psf-select-stars [OPTION...] FITS-file

Examples:

     ## Select all stars within 'image.fits' with magnitude in range
     ## of 6 to 10; only keeping those that are less than 0.02 degrees
     ## from other nearby stars.
     $ astscript-psf-select-stars image.fits \
                --magnituderange=6,10 --mindistdeg=0.02

   The input of this script is an image, and the output is a catalog of
stars with magnitudes in the requested range (provided with
‘--magnituderange’).  The output catalog will also only contain stars
that are sufficiently distant (‘--mindistdeg’) from all other brighter
(and some fainter) stars.  It is possible to consider different
datasets with the option ‘--dataset’ (by default, the Gaia DR3 dataset
is used).  All stars that are up to ‘--faintmagdiff’ magnitudes fainter
than the faintest limit will also be accounted for when selecting good
stars.  The ‘--magnituderange’ and ‘--mindistdeg’ options are
mandatory: if they are not given, the script will abort.
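
   For example, a more restrictive selection that also uses a Segment
output and the Gaia parallax columns could look like this (a sketch;
the file names are hypothetical):

     $ astscript-psf-select-stars image.fits \
                --magnituderange=6,10 --mindistdeg=0.02 \
                --segmented=image_segmented.fits \
                --parallaxanderrorcolumn=parallax,parallax_error \
                --output=good-stars.fits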

   The output of this script is a file whose name can be specified with
the (optional) ‘--output’ option.  If not given, an automatically
generated name will be used for the output.  A full description of each
option is given below.

‘-h STR/INT’
‘--hdu=STR/INT’
     The HDU/extension of the input image to use.

‘-S STR’
‘--segmented=STR’
     Optional segmentation file obtained by *note Segment::.  It should
     have two extensions (‘CLUMPS’ and ‘OBJECTS’).  If given, a catalog
     of ‘CLUMPS’ will be computed and matched with the Gaia catalog to
     reject those objects that are too elliptical (see
     ‘--minaxisratio’).  The matching will occur on an aperture (in
     degrees) specified by ‘--matchaperturedeg’.

‘-a FLT’
‘--matchaperturedeg=FLT’
     This option determines the aperture (in degrees) for matching the
     catalog from Gaia with the clumps catalog that is produced by the
     segmentation image given to ‘--segmented’.  The default value is 10
     arc-seconds.

‘-c STR’
‘--catalog=STR’
     Optional reference catalog to use for selecting stars (instead of
     querying an external catalog like Gaia).  When this option is
     given, ‘--dataset’ (described below) will be ignored and no
     internet connection will be necessary.

‘-d STR’
‘--dataset=STR’
     Optional dataset to query (see *note Query::).  It should contain
     the database and dataset entries to Query.  Its value will be
     immediately given to ‘astquery’.  By default, its value is ‘gaia
     --dataset=dr3’ (so it connects to the Gaia database and requests
     the data release 3).  For example, if you want to use VizieR's Gaia
     DR3 instead (for example due to a maintenance on ESA's Gaia
     servers), you should use ‘--dataset="vizier --dataset=gaiadr3"’.

     It is possible to specify a different dataset from which the
     catalog is downloaded.  In that case, the necessary column names
     may also differ, so you also have to set ‘--refcatra’,
     ‘--refcatdec’ and ‘--field’.  See their description for more.

‘-r STR’
‘--refcatra=STR’
     The name of the column containing the Right Ascension (RA) in the
     requested dataset (‘--dataset’).  If this option is not given, the
     default value of ‘ra’ is assumed.

‘-d STR’
‘--refcatdec=STR’
     The name of the column containing the Declination (Dec) in the
     requested dataset (‘--dataset’).  If this option is not given, the
     default value of ‘dec’ is assumed.

‘-f STR’
‘--field=STR’
     The name of the column containing the magnitude in the requested
     dataset (‘--dataset’).  The output will only contain stars that
     have a value in this column, between the values given to
     ‘--magnituderange’ (see below).  By default, the value of this
     option is ‘phot_g_mean_mag’ (that corresponds to the name of the
     magnitude of the G-band in the Gaia catalog).

‘-m FLT,FLT’
‘--magnituderange=FLT,FLT’
     The acceptable range of values for the column in ‘--field’.  This
     option is mandatory and no default value is assumed.

‘-p STR,STR’
‘--parallaxanderrorcolumn=STR,STR’
     With this option the user can provide the parallax and parallax
     error column names in the requested dataset.  When given, the
     output will only contain stars for which the parallax value is
     smaller than three times the parallax error.  If the user does not
     provide this option, the script will not use parallax information
     for selecting the stars.  In the case of Gaia, if you want to use
     parallax to further limit the good stars, you can pass
     ‘parallax,parallax_error’.

‘-D FLT’
‘--mindistdeg=FLT’
     Stars with nearby bright stars closer than this distance are
     rejected.  The default value is 1 arc minute.  For fainter stars
     (when constructing the center of the PSF), you should decrease the
     value.

‘-b INT’
‘--brightmag=INT’
     The brightest star magnitude to avoid (should be brighter than the
     brightest of ‘--magnituderange’).  The basic idea is this: if a
     user asks for stars with magnitude 6 to 10 and one of those stars
     is near a magnitude 3 star, that star (with a magnitude of 6 to 10)
     should be rejected because it is contaminated.  But since the
     catalog is constrained to stars of magnitudes 6-10, the star with
     magnitude 3 is not present and cannot be compared with!  Therefore,
     when considering proximity to nearby stars, it is important to use
     a larger magnitude range than the user's requested magnitude range
     for good stars.  The acceptable proximity is defined by
     ‘--mindistdeg’.

     With this option, you specify the brightest limit for the proximity
     check.  The default value is a magnitude of $-10$, so you'll rarely
     need to change or customize this option!

     The faint limit of the proximity check is specified by
     ‘--faintmagdiff’.  As the name suggests, this is a "diff" or
     relative value.  The default value is 4.  Therefore if the user
     wants to build the PSF with stars in the magnitude range of 6 to
     10, the faintest stars used for the proximity check will have a
     magnitude of 14: $10+4$.  In summary, by default, the proximity
     check will be done with stars in the magnitude range $-10$ to $14$.

‘-F INT’
‘--faintmagdiff=INT’
     The magnitude difference of the faintest star used for proximity
     checks to the faintest limit of ‘--magnituderange’.  For more, see
     the description of ‘--brightmag’.

‘-Q FLT’
‘--minaxisratio=FLT’
     Minimum acceptable axis ratio for the selected stars.  In other
     words, only stars with an axis ratio between ‘--minaxisratio’ and 1.0
     will be selected.  Default value is ‘--minaxisratio=0.9’.  Recall
     that the axis ratio is only used when you also give a segmented
     image with ‘--segmented’.

‘-t’
‘--tmpdir’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--keeptmp’).  This option is useful for debugging and checking the
     outputs of internal steps.

‘-o STR’
‘--output=STR’
     The output name of the final catalog containing good stars.


File: gnuastro.info,  Node: Invoking astscript-psf-stamp,  Next: Invoking astscript-psf-unite,  Prev: Invoking astscript-psf-select-stars,  Up: PSF construction and subtraction

10.8.3 Invoking astscript-psf-stamp
-----------------------------------

This installed script will generate a stamp of fixed size, centered at
the provided coordinates (performing sub-pixel re-gridding if necessary)
and normalized at a certain normalization radius.  Optionally, it will
also mask all the other background sources.  A complete tutorial is
available to show the operation of this script as a modular component to
extract the PSF of a dataset: *note Building the extended PSF::.  The
executable name is ‘astscript-psf-stamp’, with the following general
template:

     $ astscript-psf-stamp [OPTION...] FITS-file

Examples:

     ## Make a stamp around (x,y)=(53,69) of width=151 pixels.
     ## Normalize the stamp within the radii 20 and 30 pixels.
     $ astscript-psf-stamp image.fits --mode=img \
           --center=53,69 --widthinpix=151,151 --normradii=20,30 \
           --output=stamp.fits

     ## Iterate over a catalog with positions of stars that are
     ## in the input image. Use WCS coordinates.
     $ asttable catalog.fits | while read -r ra dec mag; do \
         astscript-psf-stamp image.fits \
             --mode=wcs \
             --center=$ra,$dec \
             --normradii=20,30 \
             --widthinpix=150,150 \
             --output=stamp-"$ra"-"$dec".fits; done


   The input is an image from which the stamps of the stars are
constructed.  The output image will have the following properties:
   • A certain width (specified by ‘--widthinpix’ in pixels).
   • Centered at the coordinate specified by the option ‘--center’ (it
     can be in image/pixel or WCS coordinates, see ‘--mode’).  If no
     center is specified, then it is assumed that the object of interest
     is already in the center of the image.
   • If the given coordinate has sub-pixel elements (for example, pixel
     coordinates 1.234,4.567), the pixel grid of the output will be
     warped so your given coordinate falls in the center of the central
     pixel of the final output.  This is very important for building the
     central parts of the PSF, but not too effective for the middle or
     outer parts (to speed up the program in such cases, you can disable
     it with the ‘--nocentering’ option).
   • Normalized by the value computed within the ring
     around the center (at a radial distance between the two radii
     specified by the option ‘--normradii’).  If no normalization ring
     is considered, the output will not be normalized.

   In the following cases, this script will produce a fully NaN-valued
stamp (of the size given to ‘--widthinpix’).  A fully NaN image can
safely be used with the stacking operators of Arithmetic (see *note
Stacking operators::) because they will be ignored.  In case you do not
want to waste storage with fully NaN images, you can compress them with
‘gzip --best output.fits’, and give the resulting ‘.fits.gz’ file to
Arithmetic.
   • The requested box (center coordinate with desired width) is not
     within the input image at all.
   • If a normalization radius is requested, and all the pixels within
     the normalization radii are NaN. Here are some scenarios where this
     can happen: 1) You have a saturated star (where the saturated
     pixels are NaN), and your normalization radius falls within the
     saturated region.  2) The star is outside the image by more than
     your larger normalization radius (so there are no pixels for doing
     normalization), but the full stamp width still overlaps part of the
     image.

The full set of options are listed below for optimal customization in
different scenarios:

‘-h STR’
‘--hdu=STR’
     The HDU/extension of the input image to use.

‘-O STR’
‘--mode=STR’
     Interpret the center position of the object (values given to
     ‘--center’) in image or WCS coordinates.  This option thus accepts
     only two values: ‘img’ or ‘wcs’.

‘-c FLT,FLT’
‘--center=FLT,FLT’
     The central position of the object.  This option is used for
     placing the center of the stamp.  This parameter is used in *note
     Crop:: to center and crop the image.  The positions along each
     dimension must be separated by a comma (<,>).  The units of the
     coordinates are read based on the value to the ‘--mode’ option, see
     the examples above.

     The given coordinate for the central value can have sub-pixel
     elements (for example, it falls on coordinate 123.4,567.8 of the
     input image pixel grid).  In such cases, after cropping, this
     script will use Gnuastro's *note Warp:: to shift (or translate) the
     pixel grid by $-0.4$ pixels along the horizontal and $1-0.8=0.2$
     pixels along the vertical.  Finally the newly added pixels (due to
     the warping) will be trimmed to have your desired coordinate
     exactly in the center of the central pixel of the output.  This is
     very important (critical!)  when you are constructing the central
     part of the PSF. But for the outer parts it is not too effective,
     so to avoid wasting time for the warping, you can simply use
     ‘--nocentering’ to disable it.

‘-d’
‘--nocentering’
     Do not do the sub-pixel centering to a new pixel grid.  See the
     description of the ‘--center’ option for more.

‘-W INT,INT’
‘--widthinpix=INT,INT’
     Size (width) of the output image stamp in pixels.  The size of the
     output image will be always an odd number of pixels.  As a
     consequence, if the user specify an even number, the final size
     will be the specified size plus 1 pixel.  This is necessary to
     place the specified coordinate (given to ‘--center’) in the center
     of the central pixel.  This is very important (and necessary) in
     the case of the centers of stars, therefore a sub-pixel translation
     will be performed internally to ensure this.

‘-n FLT,FLT’
‘--normradii=FLT,FLT’
     Minimum and maximum radius of ring to normalize the image.  This
     option takes two values, separated by a comma (<,>).  The first
     value is the inner radius, the second is the outer radius.

‘-S STR’
‘--segment=STR’
     Optional filename of a segmentation image from Segment's output
     (must contain the ‘CLUMPS’ and ‘OBJECTS’ HDUs).  For more on the
     definition of "objects" and "clumps", see *note Segment::.  If
     given, Segment's output is used to mask all background sources from
     the large foreground object (a bright star):
        • Objects that are not the central object.
        • Clumps (within the central object) that are not the central
          clump.
     The result is that all objects and clumps that contaminate the
     central source are masked, while the diffuse flux of the central
     object remains.  The non-masked object and clump labels are kept
     in the header of the output image.  The keywords are ‘CLABEL’ and
     ‘OLABEL’.  If no segmentation image is used, then their values are
     set to ‘none’.

‘-T FLT’
‘--snthresh=FLT’
     Mask all the pixels below the given signal-to-noise ratio (S/N)
     threshold.  This option is only valid with the ‘--segment’ option
     (it will use the ‘SKY_STD’ extension of the *note Segment output::).
     This threshold is applied prior to the possible normalization or
     centering of the stamp.  After all pixels below the given threshold
     are masked, the mask is also dilated by one level to avoid single
     pixels above the threshold (which are mainly due to noise when the
     threshold is low).

     After applying the signal-to-noise threshold (if it is requested),
     any extra pixels that are not connected to the central target are
     also masked.  Such pixels can remain in rivers between bright
     clumps and will cause problem in the final stack, if they are not
     masked.

     This is useful for increasing the S/N of inner parts of each region
     of the finally stacked PSF. As the stars (that are to be stacked)
     become fainter, the S/N of their outer parts (at a fixed radius)
     decreases.  The stack of a higher-S/N image with a lower-S/N image
     will have an S/N that is lower than the higher one.  But we can
     still use the inner parts of those fainter stars (that have
     sufficiently high S/N).
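
     For example, a stamp for one of the fainter stars could be cut and
     masked like this (a sketch: the coordinates, threshold, width and
     file names are all hypothetical):

          $ astscript-psf-stamp image.fits --mode=wcs \
                --center=209.15,42.2 --normradii=5,7 \
                --widthinpix=151,151 --segment=image_segmented.fits \
                --snthresh=5 --output=stamp.fits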

‘-N STR’
‘--normop=STR’
     The operator for measuring the values within the ring defined by
     the option ‘--normradii’.  The operator given to this option will
     be directly passed to the radial profile script
     ‘astscript-radial-profile’, see *note Generate radial profile::.
     As a consequence, all MakeCatalog measurements (median, mean,
     sigclip-mean, sigclip-number, etc.)  can be used here.  For a full
     list of MakeCatalog's measurements, please run ‘astmkcatalog
     --help’.  The final normalization value is saved into the header of
     the output image with the keyword ‘NORMVAL’.  If no normalization
     is done, then the value is set to ‘1.0’.

‘-Q FLT’
‘--axis-ratio=FLT’
     The axis ratio of the radial profiles for computing the
     normalization value.  By default (when this option is not given),
     the radial profile will be circular (axis ratio of 1).  This
     parameter is used directly in the ‘astscript-radial-profile’
     script.

‘-p FLT’
‘--position-angle=FLT’
     The position angle (in degrees) of the profiles relative to the
     first FITS axis (horizontal when viewed in SAO DS9).  By default,
     it is ‘--position-angle=0’, which means that the semi-major axis of
     the profiles will be parallel to the first FITS axis.  This
     parameter is used directly in the ‘astscript-radial-profile’
     script.

‘-s FLT,FLT’
‘--sigmaclip=FLT,FLT’
     Sigma clipping parameters: only relevant if sigma-clipping
     operators are requested by ‘--normop’.  For more on sigma-clipping,
     see *note Sigma clipping::.

‘-t’
‘--tmpdir’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--keeptmp’).  This option is useful for debugging and checking the
     outputs of internal steps.

‘-o STR’
‘--output=STR’
     Filename of the stamp image.  By default, the name of the stamp
     will be a combination of the input image name, the name of the
     script, and the coordinates of the center.  For example, if the
     input image is named ‘image.fits’ and the center is
     ‘--center=33,78’, then the output name will be
     ‘image_stamp_33_78.fits’.  The main reason for this default naming
     is to have a unique name for each stamp.


File: gnuastro.info,  Node: Invoking astscript-psf-unite,  Next: Invoking astscript-psf-scale-factor,  Prev: Invoking astscript-psf-stamp,  Up: PSF construction and subtraction

10.8.4 Invoking astscript-psf-unite
-----------------------------------

This installed script will join two PSF images at a given radius.  This
operation is commonly used when merging (uniting) the inner and outer
parts of the PSF. A complete tutorial is available to show the operation
of this script as a modular component to extract the PSF of a dataset:
*note Building the extended PSF::.  The executable name is
‘astscript-psf-unite’, with the following general template:

     $ astscript-psf-unite [OPTION...] FITS-file

Examples:

     ## Multiply inner.fits by 3 and put it in the center (within a radius of
     ## 25 pixels) of outer.fits. The core goes up to a radius of 25 pixels.
     $ astscript-psf-unite outer.fits \
                --inner=inner.fits --scale=3 \
                --radius=25 --output=joined.fits

     ## Same example as the one above, but considering an
     ## ellipse (instead of a circle).
     $ astscript-psf-unite outer.fits \
                --inner=inner.fits --scale=3 \
                --radius=25 --axis-ratio=0.5 \
                --position-angle=40 --output=joined.fits


   The junction is done by considering the input image as the outer
part.  The central part is specified by the FITS image given to
‘--inner’ and is multiplied by the factor given to ‘--scale’.  All
pixels within ‘--radius’ (in pixels) of the center of the outer part
are then replaced with the (scaled) inner image.

   The scale factor to multiply with the inner part has to be explicitly
provided (see the description of ‘--scale’ below).  Note that this
script assumes that the PSF is centered in both images.  More options are
available with the goal of obtaining a good junction.  A full
description of each option is given below.

‘-h STR’
‘--hdu=STR’
     The HDU/extension of the input image to use.

‘-i STR’
‘--inner=STR’
     Filename of the inner PSF. This image is considered to be the
     central part of the PSF. It will be cropped at the radius specified
     by the option ‘--radius’, and multiplied by the factor specified by
     ‘--scale’.  After that, it will be appended to the outer part
     (input image).

‘-I STR’
‘--innerhdu=STR’
     The HDU/extension of the inner PSF (option ‘--inner’).

‘-f FLT’
‘--scale=FLT’
     Factor by which the inner part (‘--inner’) is multiplied.  This
     factor is necessary to put the two different parts of the PSF at
     the same flux level.  A convenient way of obtaining this value is
     by using the script ‘astscript-psf-scale-factor’, see *note
     Invoking astscript-psf-scale-factor::.  There is also a full
     tutorial on using all the ‘astscript-psf-*’ installed scripts
     together, see *note Building the extended PSF::.  We recommend
     doing that tutorial before starting to work on your own datasets.

‘-r FLT’
‘--radius=FLT’
     Radius (in pixels) at which the junction of the images is done.
     All pixels in the outer image within this radius (from its center)
     will be replaced with the pixels of the inner image (that has been
     scaled).  By default, a circle is assumed for the shape of the
     inner region, but this can be tweaked with ‘--axis-ratio’ and
     ‘--position-angle’ (see below).

‘-Q FLT’
‘--axis-ratio=FLT’
     Axis ratio of ellipse to define the inner region.  By default this
     option has a value of 1.0, so all central pixels (of the outer
     image) within a circle of radius ‘--radius’ are replaced with the
     scaled inner image pixels.  With this option, you can customize the
     shape of pixels to take from the inner and outer profiles.

     For a PSF, it will usually not be necessary to change this option:
     even if the PSF is non-circular, the inner and outer parts will
     both have the same ellipticity.  So if the scale factor is chosen
     accurately, using a circle to select which pixels from the inner
     image to use in the outer image will be irrelevant.

‘-p FLT’
‘--position-angle=FLT’
     Position angle of the ellipse (in degrees) to define which central
     pixels of the outer image to replace with the scaled inner image.
     Similar to ‘--axis-ratio’ (see above).

‘-t’
‘--tmpdir’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--keeptmp’).  This option is useful for debugging and checking the
     outputs of internal steps.


File: gnuastro.info,  Node: Invoking astscript-psf-scale-factor,  Next: Invoking astscript-psf-subtract,  Prev: Invoking astscript-psf-unite,  Up: PSF construction and subtraction

10.8.5 Invoking astscript-psf-scale-factor
------------------------------------------

This installed script will compute the multiplicative factor (scale)
that is necessary to match the PSF to a given star.  The match in flux
is done within a ring of pixels.  It can also be used to compute the
scale factor by which to multiply the inner part of the PSF so that it
matches the outer part during the creation of a PSF. A complete tutorial
is available to show
the operation of this script as a modular component to extract the PSF
of a dataset: *note Building the extended PSF::.  The executable name is
‘astscript-psf-scale-factor’, with the following general template:

     $ astscript-psf-scale-factor [OPTION...] FITS-file

Examples:

     ## Compute the scale factor for the object at (x,y)=(53,69) for
     ## the PSF (psf.fits). Compute it in the ring 20-30 pixels.
     $ astscript-psf-scale-factor image.fits --mode=img \
           --center=53,69 --normradii=20,30 --psf=psf.fits

     ## Iterate over a catalog with RA,Dec positions of stars that are in
     ## the input image to compute their scale factors.
     $ asttable catalog.fits | while read -r ra dec mag; do \
         astscript-psf-scale-factor image.fits \
             --mode=wcs \
             --psf=psf.fits \
             --center=$ra,$dec --quiet \
             --normradii=20,30 > scale-"$ra"-"$dec".txt; done


   The input should be an image containing the star that you want to
match in flux with the PSF. The output will be a single number that is
printed on the command-line.  That number is the multiplicative factor
to scale the PSF image (given to ‘--psf’) to match in flux with the
given star (located at the ‘--center’ coordinates of the input image).
The scale factor will be calculated within the ring of pixels
specified by the option ‘--normradii’.

   The pixels within this ring are extracted from both the PSF and the
input image.  For the input image, the ring is centered on the given
coordinate and all other sources are masked (see ‘--segment’).  The
selected pixels of the input image are then divided by those of the PSF
image, giving an image with one scale factor per pixel.  The reported
value is the sigma-clipped median of all these per-pixel scale factors.
To fully understand the process on first usage, we recommend that you
run this script with ‘--keeptmp’ and inspect the files inside the
temporary directory.

   The most common use-cases of this scale factor are:
  1. To find the factor for joining two different parts of the same PSF,
     see *note Invoking astscript-psf-unite::.
  2. When modeling a star in order to subtract it using the PSF, see
     *note Invoking astscript-psf-subtract::.

   For a full tutorial on how to use this script along with the other
‘astscript-psf-*’ scripts in Gnuastro, please see *note Building the
extended PSF::.  To allow full customizability, the following options
are available with this script.

‘-h STR’
‘--hdu=STR’
     The HDU/extension of the input image to use.

‘-p STR’
‘--psf=STR’
     Filename of the PSF image.  The PSF is assumed to be centered in
     this image.

‘-P STR’
‘--psfhdu=STR’
     The HDU/extension of the PSF image.

‘-c FLT,FLT’
‘--center=FLT,FLT’
     The central position of the object to scale with the PSF. This
     parameter is passed to Gnuastro's Crop program to make a crop for
     further processing (see *note Crop::).  The positions along each
     dimension must be separated by a comma (<,>).  The units of the
     coordinates are interpreted based on the value to the ‘--mode’
     option (see below).

‘-O STR’
‘--mode=STR’
     Interpret the center position of the object (values given to
     ‘--center’) in image or WCS coordinates.  This option thus accepts
     only two values: ‘img’ or ‘wcs’.

‘-n INT,INT’
‘--normradii=INT,INT’
     Inner (inclusive) and outer (exclusive) radii (in units of pixels)
     around the central position in which the scale factor is computed.
     The option takes two values separated by a comma (<,>).  The first
     value is the inner radius, the second is the outer radius.  These
     two radii define a ring of pixels around the center that is used
     for obtaining the scale factor value.

‘-W INT,INT’
‘--widthinpix=INT,INT’
     Size (width) of the image stamp in pixels.  This is an intermediate
     product computed internally by the script.  By default, the size of
     the stamp is automatically set to be as small as possible (i.e.,
     two times the external radius of the ring specified by
     ‘--normradii’) to make the computation fast.  As a consequence,
     this option is only relevant for checking and testing that
     everything is fine (debugging; it will usually not be necessary).

‘-S STR’
‘--segment=STR’
     Optional filename of a segmentation image from Segment's output
     (must contain the ‘CLUMPS’ and ‘OBJECTS’ HDUs).  For more on the
     definition of "objects" and "clumps", see *note Segment::.  If
     given, Segment's output is used to mask all background sources from
     the large foreground object (a bright star):
        • Objects that are not the central object.
        • Clumps (within the central object) that are not the central
          clump.
     The result is that all objects and clumps that contaminate the
     central source are masked, while the diffuse flux of the central
     object remains.
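
     For example, a hedged sketch of producing such a segmentation image
     and passing it to this script would be the following (the file
     names are only illustrative; see *note NoiseChisel:: and *note
     Segment:: for the details of those programs):

          ## Detect the signal, then segment the detections (Segment's
          ## output contains the CLUMPS and OBJECTS HDUs).
          $ astnoisechisel image.fits --output=detected.fits
          $ astsegment detected.fits --output=segmented.fits

          ## Use it to mask contaminating sources around the star.
          $ astscript-psf-scale-factor image.fits --mode=img \
                --center=53,69 --normradii=20,30 --psf=psf.fits \
                --segment=segmented.fits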

‘-s FLT,FLT’
‘--sigmaclip=FLT,FLT’
     Sigma clipping parameters used in the end to find the final scale
     factor from the distribution of all pixels used.  For more on
     sigma-clipping, see *note Sigma clipping::.

‘-t’
‘--tmpdir’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--tmpdir’).  This option is useful for debugging and checking the
     outputs of internal steps.


File: gnuastro.info,  Node: Invoking astscript-psf-subtract,  Prev: Invoking astscript-psf-scale-factor,  Up: PSF construction and subtraction

10.8.6 Invoking astscript-psf-subtract
--------------------------------------

This installed script will put the provided PSF into a given position
within the input image (implementing sub-pixel adjustments where
necessary), and then it will subtract it.  It is aimed at modeling and
subtracting the scattered light field of an input image.  It is also
possible to obtain only the star as modeled by the PSF (without
subtracting it from the original image).

   A complete tutorial is available to show the operation of this script
as a modular component to extract the PSF of a dataset: *note Building
the extended PSF::.  The executable name is ‘astscript-psf-subtract’,
with the following general template:

     $ astscript-psf-subtract [OPTION...] FITS-file

Examples:

     ## Multiply the PSF (psf.fits) by 3 and subtract it from the
     ## input image (image.fits) at the pixel position (x,y)=(53,69).
     $ astscript-psf-subtract image.fits \
         --psf=psf.fits \
         --mode=img \
         --scale=3 \
         --center=53,69 \
         --output=star-53_69.fits

     ## Iterate over a catalog with positions of stars that are
     ## in the input image. Use WCS coordinates.
     $ asttable catalog.fits | while read -r ra dec mag; do
              scale=$(cat scale-"$ra"_"$dec".txt)
              astscript-psf-subtract image.fits \
                  --mode=wcs \
                  --psf=psf.fits \
                  --scale=$scale \
                  --center=$ra,$dec; done


   The input is the image containing the star to be modeled.  The result
is the same image, but with the star (as modeled by the PSF) subtracted.  The
modeling of the star is done with the PSF image specified with the
option ‘--psf’, and flux-scaled with the option ‘--scale’ at the
position defined by ‘--center’.  Instead of the PSF-subtracted image, it
is also possible to obtain only the PSF model of the star: to do that,
use the option ‘--modelonly’.  With this option, the output will be an
image of the same size as the original, with the flux-scaled PSF placed
at the star's coordinates.  In this case, the regions not covered by the
PSF are set to zero.

   Note that this script works over individual objects.  As a
consequence, to generate a scattered light field of many stars, it is
necessary to make multiple calls.  A full description of each option is
given below.

‘-h STR’
‘--hdu=STR’
     The HDU/extension of the input image to use.

‘-p STR’
‘--psf=STR’
     Filename of the PSF image.  The PSF is assumed to be centered in
     this image.

‘-P STR’
‘--psfhdu=STR’
     The HDU/extension of the PSF image.

‘-O STR’
‘--mode=STR’
     Interpret the center position of the object (values given to
     ‘--center’) in image or WCS coordinates.  This option thus accepts
     only two values: ‘img’ or ‘wcs’.

‘-c FLT,FLT’
‘--center=FLT,FLT’
     The central position of the object.  This parameter is used in
     *note Crop:: to center and crop the image.  The positions along
     each dimension must be separated by a comma (<,>).  The number of
     values given to this option must be the same as the dimensions of
     the input dataset.  The units of the coordinates are read based on
     the value to the ‘--mode’ option, see above.

     If the central position does not fall in the center of a pixel in
     the input image, the PSF is resampled with sub-pixel change in the
     pixel grid before subtraction.

‘-s FLT’
‘--scale=FLT’
     Factor by which the PSF (‘--psf’) is multiplied.  This factor is
     necessary to bring the PSF to the desired flux level.  A convenient
     way of obtaining this value is by using the script
     ‘astscript-psf-scale-factor’, see *note Invoking
     astscript-psf-scale-factor::.  For a full tutorial on using the
     ‘astscript-psf-*’ scripts together, see *note Building the extended
     PSF::.

‘-t’
‘--tmpdir’
     Directory to keep temporary files during the execution of the
     script.  If the directory does not exist at run-time, this script
     will create it.  By default, upon completion of the script, this
     directory will be deleted.  However, if you would like to keep the
     intermediate files, you can use the ‘--keeptmp’ option.

‘-k’
‘--keeptmp’
     Do not remove the temporary directory (see description of
     ‘--tmpdir’).  This option is useful for debugging and checking the
     outputs of internal steps.

‘-m’
‘--modelonly’
     Do not subtract the modeled star from the input image; only output
     the model.  This option is useful when you want the scattered light
     field of the star as modeled by the PSF.


File: gnuastro.info,  Node: Makefile extensions,  Next: Library,  Prev: Installed scripts,  Up: Top

11 Makefile extensions (for GNU Make)
*************************************

Make (https://en.wikipedia.org/wiki/Make_(software)) is a build
automation tool.  It can greatly help manage your analysis workflow,
even very complex projects with thousands of files and hundreds of
processing steps.  In this book, we have discussed Make previously in
the context of parallelization (see *note How to run simultaneous
operations::).

   GNU Make is the most common and powerful implementation of Make, with
many unique additions to the core POSIX standard of Make.  One of those
features is the ability to add extensions using a dynamic library (that
Gnuastro provides).  For the details of this feature from GNU Make's own
manual, see its Loading dynamic objects
(https://www.gnu.org/software/make/manual/html_node/Loading-Objects.html)
section.  Through this feature, Gnuastro provides additional Make
functions that are useful in the context of data analysis.

   To use this feature, Gnuastro has to be built in shared library mode.
Gnuastro's Make extensions will not work if you build Gnuastro without
shared libraries (for example, when you configure Gnuastro with
‘--disable-shared’ or ‘--debug’).

* Menu:

* Loading the Gnuastro Make functions::  How to find and load Gnuastro's Make library.
* Makefile functions of Gnuastro::  The available functions.


File: gnuastro.info,  Node: Loading the Gnuastro Make functions,  Next: Makefile functions of Gnuastro,  Prev: Makefile extensions,  Up: Makefile extensions

11.1 Loading the Gnuastro Make functions
========================================

To load Gnuastro's Make functions in your Makefile, you should use the
‘load’ command of GNU Make in your Makefile.  The load command should be
given Gnuastro's ‘libgnuastro_make.so’ dynamic library, which has been
specifically written for being called by GNU Make.  The generic command
looks like this (the ‘/PATH/TO’ part should be changed):

     load /PATH/TO/lib/libgnuastro_make.so

Here are the possible replacements of the ‘/PATH/TO’ component:
‘/usr/local’
     If you installed Gnuastro from source and did not use the
     ‘--prefix’ option at configuration time, you should use this base
     directory.
‘/usr/’
     If you installed Gnuastro through your operating system's package
     manager, it is highly likely that Gnuastro's library is here.
‘~/.local’
     If you installed Gnuastro from source, but used ‘--prefix’ to
     install Gnuastro in your home directory (as described in *note
     Installation directory::).

   If you cannot find ‘libgnuastro_make.so’ in the locations above, the
command below should give you its location.  It assumes that the
libraries are in the same base directory as the programs (which is
usually the case).

     $ which astfits | sed -e's|bin/astfits|lib/libgnuastro_make.so|'


File: gnuastro.info,  Node: Makefile functions of Gnuastro,  Prev: Loading the Gnuastro Make functions,  Up: Makefile extensions

11.2 Makefile functions of Gnuastro
===================================

All Gnuastro Make functions start with the ‘ast-’ prefix (similar to the
programs on the command-line, but with a dash).  After you have loaded
Gnuastro's shared library for Makefiles within your Makefile, you can
call these functions just like any Make function.  For instructions on
how to load Gnuastro's Make functions, see *note Loading the Gnuastro
Make functions::.

   There are two types of Make functions in Gnuastro's Make extensions:
1) Basic operations on text that are more general than astronomy or
Gnuastro (see *note Text functions for Makefiles::).  2) Operations that
are directly related to astronomy (mostly FITS files) and Gnuastro (see
*note Astronomy functions for Makefiles::).

*Difference between '‘=’' and '‘:=’' for variable definition* When you
define a variable with '‘=’', its value is expanded only when used, not
when defined.  However, when you use '‘:=’', it is immediately expanded
when defined.  Therefore the location of a '‘:=’' variable in the
Makefile matters: if it is used before its definition, it will be empty!
Variables defined with '‘=’' can be used even before they are defined!
On the other hand, if your variable invokes functions (like ‘foreach’ or
‘wildcard’), it is better to use '‘:=’'.  Otherwise, each time the value
is used, the function will be expanded again (possibly many times) and
this will reduce the speed of your pipeline.  For more, see The two
flavors of variables
(https://www.gnu.org/software/make/manual/html_node/Flavors.html) in the
GNU Make manual.
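
   For example, in the minimal Makefile below (the variable names are
arbitrary), ‘imm’ is empty because it is expanded immediately, before
‘list’ has been defined; but ‘def’ is only expanded when the recipe
runs, so it has the full value:

     ## 'list' is not defined yet, so 'imm' expands to empty.
     imm := $(list)

     list = fooo baar

     ## Deferred expansion: 'list' is only read when 'def' is used.
     def = $(list)

     all:
          @echo "imm is '$(imm)' and def is '$(def)'"

Running ‘make’ on this file will print "imm is '' and def is 'fooo baar'".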

* Menu:

* Text functions for Makefiles::  Basic text operations to supplement Make.
* Astronomy functions for Makefiles::  Astronomy/FITS related functions.


File: gnuastro.info,  Node: Text functions for Makefiles,  Next: Astronomy functions for Makefiles,  Prev: Makefile functions of Gnuastro,  Up: Makefile functions of Gnuastro

11.2.1 Text functions for Makefiles
-----------------------------------

The functions described below are generic (not limited to
astronomy/FITS) functions that operate on plain text.  You can see these
as functions that should have been implemented in GNU Make itself.  The
names of these functions start with ‘ast-text-*’ and each has a fully
working example to demonstrate its usage.

‘$(ast-text-contains STRING, TEXT)’
     Returns all white-space-separated words in ‘TEXT’ that contain the
     ‘STRING’, removing any words that _do not_ match.  For example, the
     following minimal Makefile will only print the ‘bAaz Aah’ words of
     the list.

          load /usr/local/lib/libgnuastro_make.so

          list = fooo baar bAaz uggh Aah
          all:
               echo $(ast-text-contains Aa, $(list))

     This can be thought of as Make's own ‘filter’ function, but if it
     would accept two patterns in a format like this ‘$(filter
     %Aa%,$(list))’ (for the example above).  In fact, the first
     sentence describing this function is taken from the Make manual's
     first sentence that describes the ‘filter’ function!  However,
     unfortunately Make's ‘filter’ function only accepts a single ‘%’,
     not two!

‘$(ast-text-not-contains STRING, TEXT)’
     Returns all white-space-separated words in ‘TEXT’ that _do not_
     contain the ‘STRING’, removing any words that _do_ match.  This is
     the inverse of the ‘ast-text-contains’ function.  For example, the
     following minimal Makefile will print the ‘fooo baar uggh’ words of
     the list.

          load /usr/local/lib/libgnuastro_make.so

          list = fooo baar bAaz uggh Aah
          all:
               echo $(ast-text-not-contains Aa, $(list))

‘$(ast-text-to-upper STRING)’
     Returns the input string but with all characters in UPPER-CASE. For
     example, the following minimal Makefile will print the words of the
     list in upper case: ‘FOOO BAAR UGGH’.

          load /usr/local/lib/libgnuastro_make.so

          list   = fOOo bAar UggH
          ulist := $(ast-text-to-upper $(list))
          all:; echo $(ulist)

‘$(ast-text-to-lower STRING)’
     Returns the input string but with all characters in lower-case.
     For example, the following minimal Makefile will print the words of
     the list in lower case: ‘fooo baar uggh’.

          load /usr/local/lib/libgnuastro_make.so

          list   = fOOo bAar UggH
          llist := $(ast-text-to-lower $(list))
          all:; echo $(llist)


File: gnuastro.info,  Node: Astronomy functions for Makefiles,  Prev: Text functions for Makefiles,  Up: Makefile functions of Gnuastro

11.2.2 Astronomy functions for Makefiles
----------------------------------------

FITS files (the standard data format in astronomy) have unique features
(header keywords and HDUs) that can greatly help designing workflows in
Makefiles.  The Makefile extension functions of this section allow you
to optimally use those features within your pipelines.  Besides FITS,
when designing your workflow/pipeline with Gnuastro, there are also
special features like version checking that simplify your design.

‘$(ast-version-is STRING)’
     Returns ‘1’ if the version of the used Gnuastro is equal to
     ‘STRING’, and ‘0’ otherwise.  This is useful/critical for obtaining
     reproducible results on different systems.  It can be used in
     combination with Conditionals in Make
     (https://www.gnu.org/software/make/manual/html_node/Conditionals.html)
     to ensure the required version of Gnuastro is going to be used in
     your workflow.

     For example, in the minimal working Makefile below, we are using it
     to specify if the default (first) target (‘all’) should have any
     prerequisites (and let the workflow start), or if it should simply
     print a message (that the required version of Gnuastro isn't
     installed) and abort (without any prerequisites).

          load /usr/local/lib/libgnuastro_make.so

          gnuastro-version = 0.19
          ifeq ($(ast-version-is $(gnuastro-version)),1)
          all: paper.pdf
          else
          all:; @echo "Please use Gnuastro $(gnuastro-version)"
          endif

          result.fits: input.fits
                  astnoisechisel $< --output=$@

          paper.pdf: result.fits
                  pdflatex --halt-on-error paper.tex

‘$(ast-fits-with-keyvalue KEYNAME, KEYVALUES, HDU, FITS_FILES)’
     Will select only the FITS files (from a list of many in
     ‘FITS_FILES’, non-FITS files are ignored), where the ‘KEYNAME’
     keyword has the value(s) given in ‘KEYVALUES’.  Only the HDU given
     in the ‘HDU’ argument will be checked.  According to the FITS
     standard, the keyword name is not case sensitive, but the keyword
     value is.

     For example, if you have many FITS files in the ‘/datasets/images’
     directory, the minimal Makefile below will put those with a value
     of ‘BAR’ or ‘BAZ’ for the ‘FOO’ keyword in HDU number ‘1’ in the
     ‘selected’ Make variable.  Notice how there is no comma between
     ‘BAR’ and ‘BAZ’: you can specify any series of values.

          load /usr/local/lib/libgnuastro_make.so

          files := $(wildcard /datasets/images/*.fits)
          selected := $(ast-fits-with-keyvalue FOO, BAR BAZ, 1, $(files))

          all:
               echo "Full:     $(words $(files)) files";
               echo "Selected: $(words $(selected)) files"

‘$(ast-fits-unique-keyvalues KEYNAME, HDU, FITS_FILES)’
     Will return the unique values given to the given FITS keyword
     (‘KEYNAME’) in the given HDU of all the input FITS files (non-FITS
     files are ignored).  For example, after the commands below, the
     ‘keyvalues’ variable will contain the unique values given to the
     ‘FOO’ keyword in HDU number 1 of all the FITS files in
     ‘/datasets/images/*.fits’.

          files := $(wildcard /datasets/images/*.fits)
          keyvalues := $(ast-fits-unique-keyvalues FOO, 1, $(files))

     This is useful when you do not know the full range of values
     a-priori.  For example, let's assume that you are looking at a
     night's observations with a telescope and the purpose of the FITS
     image is written in the ‘OBJECT’ keyword of the image (which we can
     assume is in HDU number 1).  This keyword can have the name of the
     various science targets (for example, ‘NGC123’ and ‘M31’) and
     calibration targets (for example, ‘BIAS’ and ‘FLAT’).  The list of
     science targets is different from project to project, such that in
     one night, you can observe multiple projects.  But the calibration
     frames have unique names.  Knowing the calibration keyword values,
     you can extract the science keyword values of the night with the
     command below (feeding the output of this function to Make's
     ‘filter-out’ function).

          calib = BIAS FLAT
          files := $(wildcard /datasets/images/*.fits)
          science := $(filter-out $(calib), \
                       $(ast-fits-unique-keyvalues OBJECT, 1, $(files)))

     The ‘science’ variable will now contain the unique science targets
     that were observed in your selected FITS images.  You can use it to
     group the various exposures together in the next stages to make
     separate stacks of deep images for each science target (you can
     select FITS files based on their keyword values using the
     ‘ast-fits-with-keyvalue’ function, which is described separately in
     this section).


File: gnuastro.info,  Node: Library,  Next: Developing,  Prev: Makefile extensions,  Up: Top

12 Library
**********

Each program in Gnuastro that was discussed in the prior chapters (or
any program in general) is a collection of functions that is compiled
into one executable file which can communicate directly with the outside
world.  The outside world in this context is the operating system.  By
communication, we mean that control is directly passed to a program from
the operating system with a (possible) set of inputs and after it is
finished, the program will pass control back to the operating system.
For programs written in C and C++, the unique ‘main’ function is in
charge of this communication.

   Similar to a program, a library is also a collection of functions
that is compiled into one executable file.  However, unlike programs,
libraries do not have a ‘main’ function.  Therefore they cannot
communicate directly with the outside world.  This gives you the chance
to write your own ‘main’ function and call library functions from within
it.  After compiling your program into a binary executable, you just
have to _link_ it to the library and you are ready to run (execute) your
program.  In this way, you can use Gnuastro at a much lower level, and
in combination with other libraries on your system, you can
significantly boost your creativity.

   This chapter starts with a basic introduction to libraries and how
you can use them in *note Review of library fundamentals::.  The
separate functions in the Gnuastro library are then introduced
(classified by context) in *note Gnuastro library::.  If you end up
routinely using a fixed set of library functions, with a well-defined
input and output, it will be much more beneficial if you define a
program for the job.  Therefore, in its *note Version controlled
source::, Gnuastro comes with the *note The TEMPLATE program:: to easily
define your own program(s).

* Menu:

* Review of library fundamentals::  Guide on libraries and linking.
* BuildProgram::                Link and run source files with this library.
* Gnuastro library::            Description of all library functions.
* Library demo programs::       Demonstration for using libraries.


File: gnuastro.info,  Node: Review of library fundamentals,  Next: BuildProgram,  Prev: Library,  Up: Library

12.1 Review of library fundamentals
===================================

Gnuastro's libraries are written in the C programming language.  In
*note Why C::, we have thoroughly discussed the reasons behind this
choice.  C was actually created to write Unix, thus understanding the
way C works can greatly help in effectively using programs and libraries
in all Unix-like operating systems.  Therefore, in the following
subsections some important aspects of C, as it relates to libraries (and
thus programs that depend on them) on Unix are reviewed.  First we will
discuss header files in *note Headers:: and then go onto *note
Linking::.  This section finishes with *note Summary and example on
libraries::.  If you are already familiar with these concepts, please
skip this section and go directly to *note Gnuastro library::.

   In theory, a full operating system (or any software) can be written
as one function.  Such a software would not need any headers or linking
(that are discussed in the subsections below).  However, writing that
single function and maintaining it (adding new features, fixing bugs,
documentation, etc.)  would be a programmer or scientist's worst
nightmare!  Furthermore, all the hard work that went into creating it
cannot be reused in other software: every other programmer or scientist
would have to re-invent the wheel.  The ultimate purpose behind
libraries (which come with headers and have to be linked) is to address
this problem and increase modularity: "the degree to which a system's
components may be separated and recombined" (from Wikipedia).  The more
modular the source code of a program or library, the easier maintaining
it will be, and all the hard work that went into creating it can be
reused for a wider range of problems.

* Menu:

* Headers::                     Header files included in source.
* Linking::                     Linking the compiled source files into one.
* Summary and example on libraries::  A summary and example on using libraries.


File: gnuastro.info,  Node: Headers,  Next: Linking,  Prev: Review of library fundamentals,  Up: Review of library fundamentals

12.1.1 Headers
--------------

C source code is read from top to bottom in the source file, therefore
program components (for example, variables, data structures and
functions) should all be _defined_ or _declared_ closer to the top of
the source file: before they are used.  _Defining_ something in C or C++
is jargon for providing its full details.  _Declaring_ it, on the
other-hand, is jargon for only providing the minimum information needed
for the compiler to pass it temporarily and fill in the detailed
definition later.

   For a function, the _declaration_ only contains the inputs and their
data-types along with the output's type(1).  The _definition_ adds to
the declaration by including the exact details of what operations are
done to the inputs to generate the output.  As an example, take this
simple summation function:

     double
     sum(double a, double b)
     {
       return a + b;
     }
What you see above is the _definition_ of this function: it shows you
(and the compiler) exactly what it does to the two ‘double’ type inputs
and that the output also has a ‘double’ type.  Note that a function's
internal operations are rarely so simple and short, it can be
arbitrarily long and complicated.  This unreasonably short and simple
function was chosen here for ease of reading.  The declaration for this
function is:

     double
     sum(double a, double b);

You can think of a function's declaration as a building's address in the
city, and the definition as the building's complete blueprints.  When
the compiler confronts a call to a function during its processing, it
does not need to know anything about how the inputs are processed to
generate the output.  Just as the postman does not need to know the
inner structure of a building when delivering the mail.  The declaration
(address) is enough.  Therefore by _declaring_ the functions once at the
start of the source files, we do not have to worry about _defining_ them
after they are used.

   Even for a simple real-world operation (not a simple summation like
above!), you will soon need many functions (for example, some for
reading/preparing the inputs, some for the processing, and some for
preparing the output).  Although it is technically possible, managing
all the necessary functions in one file is not easy and is contrary to
the modularity principle (see *note Review of library fundamentals::);
for example, the functions for preparing the input can be reused in your
other projects with a different processing.  Therefore, as we will see
later (in *note Linking::), the functions do not necessarily need to be
defined in the source file where they are used.  As long as their
definitions are ultimately linked to the final executable, everything
will be fine.  For now, it is just important to remember that the
functions that are called within one source file must be declared within
the source file (declarations are mandatory), but not necessarily
defined there.

   In the spirit of modularity, it is common to define contextually
similar functions in one source file.  For example, in Gnuastro,
functions that calculate the median, mean and other statistical
functions are defined in ‘lib/statistics.c’, while functions that deal
directly with FITS files are defined in ‘lib/fits.c’.

   Keeping the definition of similar functions in a separate file
greatly helps their management and modularity, but this fact alone does
not make things much easier for the caller's source code: recall that
while definitions are optional, declarations are mandatory.  So if this
was all, the caller would have to manually copy and paste (_include_)
all the declarations from the various source files into the file they
are working on now.  To address this problem, programmers have adopted
the header file convention: the header file of a source code contains
all the declarations that a caller would need to be able to use any of
its functions.  For example, in Gnuastro, ‘lib/statistics.c’ (file
containing function definitions) comes with ‘lib/gnuastro/statistics.h’
(only containing function declarations).
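
   As a toy sketch of this convention (the file names and contents below
are illustrative, not Gnuastro's actual files), the declarations go into
a header that both the defining source file and any caller include:

     /* sum.h: declarations only. */
     double sum(double a, double b);

     /* sum.c: full definitions (it also includes its own header). */
     #include "sum.h"
     double
     sum(double a, double b) { return a + b; }

     /* caller.c: only needs the header to be able to call 'sum'. */
     #include <stdio.h>
     #include "sum.h"
     int
     main(void) { printf("%g\n", sum(1.0, 2.0)); return 0; }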

   The discussion above was mainly focused on functions, however, there
are many more programming constructs such as preprocessor macros and
data structures.  Like functions, they also need to be known to the
compiler when it confronts a call to them.  So the header file also
contains their definitions or declarations when they are necessary for
the functions.

   Preprocessor macros (or macros for short) are replaced with their
defined value by the preprocessor before compilation.  Conventionally
they are written only in capital letters to be easily recognized.  It is
just important to understand that the compiler does not see the macros;
it sees their fixed values.  So when a header specifies macros, you can
do your programming without worrying about the actual values.  The
standard C types (for example, ‘int’, or ‘float’) are very low-level and
basic.  We can collect multiple C types into a _structure_ for a
higher-level way to keep and pass-along data.  See *note Generic data
container:: for some examples of macros and data structures.

   The contents in the header need to be _include_d into the caller's
source code with a special preprocessor command: ‘#include
<path/to/header.h>’.  As the name suggests, the _preprocessor_ goes
through the source code prior to the processor (or compiler).  One of
its jobs is to include, or merge, the contents of files that are
mentioned with this directive in the source code.  Therefore the
compiler sees a single entity containing the contents of the main file
and all the included files.  This allows you to include many (sometimes
thousands of) declarations into your code with only one line.  Since the
headers are also installed with the library into your system, you do not
even need to keep a copy of them for each separate program, making
things even more convenient.

   Try opening some of the ‘.c’ files in Gnuastro's ‘lib/’ directory
with a text editor to check out the include directives at the start of
the file (after the copyright notice).  Let's take ‘lib/fits.c’ as an
example.  You will notice that Gnuastro's header files (like
‘gnuastro/fits.h’) are indeed within this directory (the ‘fits.h’ file
is in the ‘gnuastro/’ directory).  You will notice that files like
‘stdio.h’, or ‘string.h’ are not in this directory (or anywhere within
Gnuastro).

   On most systems the basic C header files (like ‘stdio.h’ and
‘string.h’ mentioned above) are located in ‘/usr/include/’(2).  Your
compiler is configured to automatically search that directory (and
possibly others), so you do not have to explicitly mention these
directories.  Go ahead, look into the ‘/usr/include’ directory and find
‘stdio.h’ for example.  When the necessary header files are not in those
standard directories, the preprocessor can also search in places other
than the current directory.  You can specify those directories with this
preprocessor option(3):

‘-I DIR’
     "Add the directory ‘DIR’ to the list of directories to be searched
     for header files.  Directories named by '-I' are searched before
     the standard system include directories.  If the directory ‘DIR’ is
     a standard system include directory, the option is ignored to
     ensure that the default search order for system directories and the
     special treatment of system headers are not defeated..."  (quoted
     from the GNU Compiler Collection manual).  Note that the space
     between <I> and the directory is optional and commonly not used.

   If the preprocessor cannot find the included files, it will abort
with an error.  In fact a common error when building programs that
depend on a library is that the compiler does not know where a library's
header is (see *note Known issues::).  So you have to manually tell the
compiler where to look for the library's headers with the ‘-I’ option.
For a small software with one or two source files, this can be done
manually (see *note Summary and example on libraries::).  However, to
enhance modularity, Gnuastro (and most other bin/libraries) contain many
source files, so the compiler is invoked many times(4).  This makes
manual addition or modification of this option practically impossible.

   To solve this problem, in the GNU build system, there are
conventional environment variables for the various kinds of compiler
options (or flags).  These environment variables are used in every call
to the compiler (they can be empty).  The environment variable used for
the C preprocessor (or CPP) is ‘CPPFLAGS’.  By giving ‘CPPFLAGS’ a value
once, you can be sure that each call to the compiler will be affected.
See *note Known issues:: for an example of how to set this variable at
configure time.
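
   For example, the command below (the directory is only illustrative)
gives ‘CPPFLAGS’ a value while configuring, so every compiler call of
the build will also search that directory for headers:

     $ ./configure CPPFLAGS="-I/home/yourname/.local/include"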

   As described in *note Installation directory::, you can select the
top installation directory of a software using the GNU build system,
when you ‘./configure’ it.  All the separate components will be put in
their separate sub-directory under that, for example, the programs,
compiled libraries and library headers will go into ‘$prefix/bin’
(replace ‘$prefix’ with a directory), ‘$prefix/lib’, and
‘$prefix/include’ respectively.  For enhanced modularity, libraries that
contain diverse collections of functions (like GSL, WCSLIB, and
Gnuastro), put their header files in a sub-directory unique to
themselves.  For example, all Gnuastro's header files are installed in
‘$prefix/include/gnuastro’.  In your source code, you need to keep the
library's sub-directory when including the headers from such libraries,
for example, ‘#include <gnuastro/fits.h>’(5).  Not all libraries need to
follow this convention, for example, CFITSIO only has one header
(‘fitsio.h’) which is directly installed in ‘$prefix/include’.

   ---------- Footnotes ----------

   (1) Recall that in C, functions only have one output.

   (2) The ‘include/’ directory name is taken from the pre-processor's
‘#include’ directive, which is also the motivation behind the 'I' in the
‘-I’ option to the pre-processor.

   (3) Try running Gnuastro's ‘make’ and find the directories given to
the compiler with the ‘-I’ option.

   (4) Nearly every command you see being executed after running ‘make’
is one call to the compiler.

   (5) The top ‘$prefix/include’ directory is usually already known to
the compiler.


File: gnuastro.info,  Node: Linking,  Next: Summary and example on libraries,  Prev: Headers,  Up: Review of library fundamentals

12.1.2 Linking
--------------

To enhance modularity, similar functions are defined in one source file
(with a ‘.c’ suffix, see *note Headers:: for more).  After running
‘make’, each human-readable ‘.c’ file is translated (or compiled) into
a computer-readable "object" file (ending with ‘.o’).  Note that object
files are also created when building programs, they are not particular
to libraries.  Try opening Gnuastro's ‘lib/’ and ‘bin/progname/’
directories after running ‘make’ to see these object files(1).
Afterwards, the object files are _linked_ together to create an
executable program or a library.

   The object files contain the full definition of the functions in the
respective ‘.c’ file along with a list of any other function (or
generally "symbol") that is referenced there.  To get a list of those
functions you can use the ‘nm’ program which is part of GNU Binutils.
For example, from the top Gnuastro directory, run:

     $ nm bin/arithmetic/arithmetic.o

This will print a list of all the functions (more generally, 'symbols')
that were called within ‘bin/arithmetic/arithmetic.c’ along with some
further information (for example, a ‘T’ in the second column shows that
this function is actually defined here, ‘U’ says that it is undefined
here).  Try opening the ‘.c’ file to check some of these functions for
yourself.  Run ‘info nm’ for more information.

   To recap, the _compiler_ created the separate object files mentioned
above for each ‘.c’ file.  The _linker_ will then combine all the
symbols of the various object files (and libraries) into one program or
library.  In the case of Arithmetic (a program) the contents of the
object files in ‘bin/arithmetic/’ are copied (and re-ordered) into one
final executable file which we can run from the operating system.

   There are two ways to _link_ all the necessary symbols: static and
dynamic/shared.  When the symbols (computer-readable function
definitions in most cases) are copied into the output, it is called
_static_ linking.  When the symbols are kept in their original file and
only a reference to them is kept in the executable, it is called
_dynamic_, or _shared_ linking.

   Let's have a closer look at the executable to understand this better:
we will assume you have built Gnuastro without any customization and
installed Gnuastro into the default ‘/usr/local/’ directory (see *note
Installation directory::).  If you tried the ‘nm’ command on one of
Arithmetic's object files above, then with the command below you can
confirm that all the functions that were defined in the object file
above (had a ‘T’ in the second column) are also defined in the
‘astarithmetic’ executable:

     $ nm /usr/local/bin/astarithmetic

These symbols/functions have been statically linked (copied) into the final
executable.  But you will notice that there are still many undefined
symbols in the executable (those with a ‘U’ in the second column).  One
class of such functions are Gnuastro's own library functions that start
with '‘gal_’':

     $ nm /usr/local/bin/astarithmetic | grep gal_

   These undefined symbols (functions) are present in another file and
will be linked to the Arithmetic program every time you run it.
Therefore they are known as dynamically _linked_ libraries(2).  As we
saw above, static linking is done when the executable is being built.
However, when a program is dynamically linked to a library, at
build-time, the library's symbols are only checked with the available
libraries: they are not actually copied into the program's executable.
Every time you run the program, the (dynamic) linker will be activated
and will try to link the program to the installed library before the
program starts.

   If you want all the libraries to be statically linked to the
executables, you have to tell Libtool (which Gnuastro uses for the
linking) to disable shared libraries at configure time(3):

     $ ./configure --disable-shared

Try configuring Gnuastro with the command above, then build and install
it (as described in *note Quick start::).  Afterwards, check the ‘gal_’
symbols in the installed Arithmetic executable like before.  You will
see that they are actually copied this time (have a ‘T’ in the second
column).  If the second column does not convince you, look at the
executable file size with the following command:

     $ ls -lh /usr/local/bin/astarithmetic

It should be around 4.2 Megabytes with this static linking.  If you
configure and build Gnuastro again with shared libraries enabled (which
is the default), you will notice that it is roughly 100 Kilobytes!

   This huge difference would have been very significant in the old
days, but with the roughly Terabyte storage drives commonly in use
today, it is negligible.  Fortunately, output file size is not the only
benefit of dynamic linking: since it links to the libraries at run-time
(rather than build-time), you do not have to rebuild a higher-level
program or library when an update comes for one of the lower-level
libraries it depends on.  You just install the new low-level library and
it will automatically be used/linked next time in the programs that use
it.  To be fair, this also creates a few complications(4):

   • Reproducibility: Even though your high-level tool has the same
     version as before, with the updated library, you might not get the
     same results.
   • Broken links: if some functions have been changed or removed in the
     updated library, then the linker will abort with an error at
     run-time.  Therefore you need to rebuild your higher-level program
     or library.

   To see a list of all the shared libraries that are needed for a
program or a shared library to run, you can use GNU C library's ‘ldd’(5)
program, for example:

     $ ldd /usr/local/bin/astarithmetic

   Library file names (in their installation directory) start with a
‘lib’ and their ending (suffix) shows if they are static (‘.a’) or
dynamic (‘.so’), as described below.  The name of the library is in the
middle of these two, for example, ‘libgsl.a’ or ‘libgnuastro.a’ (GSL and
Gnuastro's static libraries), and ‘libgsl.so.23.0.0’ or
‘libgnuastro.so.4.0.0’ (GSL and Gnuastro's shared library, the numbers
may be different).

   • A static library is known as an archive file and has the ‘.a’
     suffix.  A static library is not an executable file.

   • A shared library ends with the ‘.so.X.Y.Z’ suffix and is
     executable.  The three numbers in the suffix, describe the version
     of the shared library.  Shared library versions are defined to
     allow multiple versions of a shared library to exist simultaneously
     on a system and to help the linker detect possible updates in the
     library and the programs that depend on it.

     It is very important to mention that this version number is
     different from the software version number (see *note Version
     numbering::), so do not confuse the two.  See the "Library
     interface versions" chapter of GNU Libtool for more.

     For each shared library, we also have two symbolic links ending
     with ‘.so.X’ and ‘.so’.  They are automatically set by the
     installer, but you can change them (point them to another version
     of the library) when you have multiple versions of a library on
     your system.

   Libraries that are built with GNU Libtool (including Gnuastro and its
dependencies), build both static and dynamic libraries by default and
install them in ‘prefix/lib/’ directory (for more on ‘prefix’, see *note
Installation directory::).  In this way, programs depending on the
libraries can link with them however they prefer.  See the contents of
‘/usr/local/lib’ with the command below to see both the static and
shared libraries available there, along with their executable nature and
the symbolic links:

     $ ls -l /usr/local/lib/

   To link with a library, the linker needs to know where to find the
library.  _At compilation time_, these locations can be passed to the
linker with two separate options (see *note Summary and example on
libraries:: for an example) as described below.  You can see these
options and their usage in practice while building Gnuastro (after
running ‘make’):

‘-L DIR’
     Will tell the linker to look into ‘DIR’ for the libraries.  For
     example, ‘-L/usr/local/lib’, or ‘-L/home/yourname/.local/lib’.  You
     can make multiple calls to this option, so the linker looks into
     several directories at compilation time.  Note that the space
     between <L> and the directory is optional and commonly omitted
     (written as ‘-LDIR’).

‘-lLIBRARY’
     Specify the unique library identifier/name (not containing
     directory or shared/dynamic nature) to be linked with the
     executable.  As discussed above, library file names have fixed
     parts which must not be given to this option.  So ‘-lgsl’ will
     guide the linker to either look for ‘libgsl.a’ or ‘libgsl.so’
     (depending on the type of linking it is supposed to do).  You can
     link many libraries by repeated calls to this option.

     *Very important:* The place of this option on the compiler's
     command line matters.  This is often a source of confusion for
     beginners, so let's assume you have asked the linker to link with
     library A using this option.  As soon as the linker confronts this
     option, it looks into the list of the undefined symbols it has
     found until that point and does a search in library A for any of
     those symbols.  If any pending undefined symbol is found in library
     A, it is used.  After the search in undefined symbols is complete,
     the contents of library A are completely discarded from the
     linker's memory.  Therefore, if a later object file or library uses
     an unlinked symbol in library A, the linker will abort after it has
     finished its search in all the input libraries or object files.

     As an example, Gnuastro's ‘gal_fits_img_read’ function depends on
     the ‘fits_read_pix’ function of CFITSIO (specified with
     ‘-lcfitsio’, which in turn depends on the cURL library, called with
     ‘-lcurl’).  So the proper way to link something that uses this
     function is ‘-lgnuastro -lcfitsio -lcurl’.  If you instead give
     ‘-lcfitsio -lgnuastro’, the linker will complain and abort.  To
     avoid such linking complexities when using Gnuastro's library, we
     recommend using *note BuildProgram::.
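
     For example, the command below is a minimal sketch of building a
     hypothetical ‘myprog.c’ that calls ‘gal_fits_img_read’ (depending
     on your system and program, more libraries may be necessary; see
     *note Summary and example on libraries::):

          ## The order matters: Gnuastro first, then its dependencies.
          $ gcc myprog.c -lgnuastro -lcfitsio -lcurl -o myprog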

   If you have compiled and linked your program with a dynamic library,
then the dynamic linker also needs to know the location of the libraries
after building the program: _every time_ the program is run afterwards.
Therefore, it may happen that you do not get any errors when
compiling/linking a program, but are unable to run your program because
of a failure to find a library.  This happens because the dynamic linker
has not found the dynamic library _at run time_.

   To find the dynamic libraries at run-time, the linker looks into the
paths, or directories, in the ‘LD_LIBRARY_PATH’ environment variable.
For a discussion on environment variables, especially search paths like
‘LD_LIBRARY_PATH’, and how you can add new directories to them, see
*note Installation directory::.
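
   For example, if you installed Gnuastro's library under the
illustrative ‘~/.local’ prefix, the command below will let the dynamic
linker find it in your current shell session:

     $ export LD_LIBRARY_PATH="$HOME/.local/lib:$LD_LIBRARY_PATH"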

   ---------- Footnotes ----------

   (1) Gnuastro uses GNU Libtool for portable library creation.  Libtool
will also make a ‘.lo’ file for each ‘.c’ file when building libraries
(‘.lo’ files are human-readable).

   (2) Do not confuse dynamically _linked_ libraries with dynamically
_loaded_ libraries.  The former (that is discussed here) are only loaded
once at the program startup.  However, the latter can be loaded anytime
during the program's execution, they are also known as plugins.

   (3) GNU Libtool is very widely used.  Therefore, if you want static
linking, you can use this option at configure time with most programs
that use the GNU build system.

   (4) Both of these can be avoided by joining the mailing lists of the
lower-level libraries and checking the changes in newer versions before
installing them.  Updates that result in such behaviors are generally
heavily emphasized in the release notes.

   (5) If your operating system is not using the GNU C library, you
might need another tool.


File: gnuastro.info,  Node: Summary and example on libraries,  Prev: Linking,  Up: Review of library fundamentals

12.1.3 Summary and example on libraries
---------------------------------------

After the mostly abstract discussions of *note Headers:: and *note
Linking::, we will give a small tutorial here.  But before that, let's
recall the general steps of how your source code is prepared, compiled
and linked to the libraries it depends on so you can run it:

  1. The *preprocessor* includes the header (‘.h’) files into the
     function definition (‘.c’) files and expands preprocessor macros.
     Generally the preprocessor prepares the human-readable source for
     compilation (reviewed in *note Headers::).

  2. The *compiler* will translate (compile) the human-readable contents
     of each source (merged ‘.c’ and the ‘.h’ files, or generally the
     output of the preprocessor) into the computer-readable code of ‘.o’
     files.

  3. The *linker* will link the called function definitions from various
     compiled files to create one unified object.  When the unified
     product has a ‘main’ function, this function is the product's only
     entry point, enabling the operating system or user to directly
     interact with it, so the product is a program.  When the product
     does not have a ‘main’ function, the linker's product is a library
     and its exported functions can be linked into other executables (it
     has many entry points).

   The GNU Compiler Collection (or GCC for short) will do all three
steps.  So as a first example, from Gnuastro's source, go to
‘tests/lib/’.  This directory contains the library tests, you can use
these as some simple tutorials.  For this demonstration, we will compile
and run ‘arraymanip.c’.  This small program will call Gnuastro's
library for some simple operations on an array (open it and have a
look).  To compile this program, run this command inside the directory
containing it.

     $ gcc arraymanip.c -lgnuastro -lm -o arraymanip

The two ‘-lgnuastro’ and ‘-lm’ options (in this order) tell GCC to first
link with the Gnuastro library and then with C's math library.  The ‘-o’
option is used to specify the name of the output executable, without it
the output file name will be ‘a.out’ (on most OSs), independent of your
input file name(s).

   If your top Gnuastro installation directory (let's call it ‘$prefix’,
see *note Installation directory::) is not recognized by GCC, you will
get preprocessor errors for unknown header files.  Once you fix it, you
will get linker errors for undefined functions.  To fix both, you should
run GCC as follows, additionally telling it in which directories it can
find Gnuastro's headers and compiled library (see *note Headers:: and
*note Linking::):

     $ gcc -I$prefix/include -L$prefix/lib arraymanip.c -lgnuastro -lm     \
           -o arraymanip

This single command has done all the preprocessor, compilation and
linker operations.  Therefore no intermediate files (object files in
particular) were created, only a single output executable was created.
You are now ready to run the program with:

     $ ./arraymanip

   The Gnuastro functions called by this program only needed to be
linked with the C math library.  But if your program needs WCS
coordinate transformations, needs to read a FITS file, needs special
math operations (which include its linear algebra operations), or you
want it to run on multiple CPU threads, you also need to add these
libraries in the call to GCC: ‘-lgnuastro -lwcs -lcfitsio -lgsl
-lgslcblas -pthread -lm’.  In *note Gnuastro library::, where each
function is documented, it is mentioned which libraries (if any) must
also be linked when you call a function.  If you feel all this linking
can be confusing, please consider Gnuastro's *note BuildProgram::
program.


File: gnuastro.info,  Node: BuildProgram,  Next: Gnuastro library,  Prev: Review of library fundamentals,  Up: Library

12.2 BuildProgram
=================

The number and order of libraries that are necessary for linking a
program with Gnuastro library might be too confusing when you need to
compile a small program for one particular job (with one source file).
BuildProgram will use the information gathered during configuring
Gnuastro and link with all the appropriate libraries on your system.
This will allow you to easily compile, link and run programs that use
Gnuastro's library with one simple command and not worry about which
libraries to link to, or the linking order.

   BuildProgram uses GNU Libtool to find the necessary libraries to link
against (GNU Libtool is the same program that builds all of Gnuastro's
libraries and programs when you run ‘make’).  So in the future, if
Gnuastro's prerequisite libraries change or other libraries are added,
you do not have to worry, you can just run BuildProgram and internal
linking will be done correctly.

*BuildProgram requires GNU Libtool:* BuildProgram depends on GNU
Libtool; other implementations do not have some necessary features.  If
GNU Libtool is not available at Gnuastro's configure time, you will get
a notice at the end of the configuration step and BuildProgram will not
be built or installed.  Please see *note Optional dependencies:: for
more information.

* Menu:

* Invoking astbuildprog::       Options and examples for using this program.


File: gnuastro.info,  Node: Invoking astbuildprog,  Prev: BuildProgram,  Up: BuildProgram

12.2.1 Invoking BuildProgram
----------------------------

BuildProgram will compile and link a C source program with Gnuastro's
library and all its dependencies, greatly facilitating the compilation
and running of small programs that use Gnuastro's library.  The
executable name is ‘astbuildprog’ with the following general template:

     $ astbuildprog [OPTION...] C_SOURCE_FILE

One line examples:

     ## Compile, link and run `myprogram.c':
     $ astbuildprog myprogram.c

     ## Similar to previous, but with optimization and compiler warnings:
     $ astbuildprog -Wall -O2 myprogram.c

     ## Compile and link `myprogram.c', then run it with `image.fits'
     ## as its argument:
     $ astbuildprog myprogram.c image.fits

     ## Also look in other directories for headers and linking:
     $ astbuildprog -Lother -Iother/dir myprogram.c

     ## Just build (compile and link) `myprogram.c', do not run it:
     $ astbuildprog --onlybuild myprogram.c

   If BuildProgram is to run, it needs a C programming language source
file as input.  By default it will compile and link the given source
into a final executable program and run it.  The built executable name
can be set with the optional ‘--output’ option.  When no output name is
set, BuildProgram will use Gnuastro's *note Automatic output:: system to
remove the suffix of the input source file (usually ‘.c’) and use the
resulting name as the built program name.

   For the full list of options that BuildProgram shares with other
Gnuastro programs, see *note Common options::.  You may also use
Gnuastro's *note Configuration files:: to specify other
libraries/headers to use for special directories and not have to type
them in every time.

   The C compiler can be chosen with the ‘--cc’ option, or environment
variables, please see the description of ‘--cc’ for more.  The two
common ‘LDFLAGS’ and ‘CPPFLAGS’ environment variables are also checked
and used in the build by default.  Note that they are placed after the
values given to the corresponding options ‘--includedir’ and
‘--linkdir’, so BuildProgram's own options take precedence.  The use of
environment variables can be disabled with the ‘--noenv’ option.  Just
note that BuildProgram also keeps the important flags from these
environment variables in its configuration file.  Therefore, in many
cases, even though you may have needed them to build Gnuastro, you will
not need them in BuildProgram.
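
   For example, the hypothetical session below (the directories are only
for illustration) shows how the environment variables are used by
default, and how ‘--noenv’ ignores them:

     ## `CPPFLAGS' and `LDFLAGS' are read by default (and placed after
     ## the values given to `--includedir' and `--linkdir'):
     $ export CPPFLAGS="-I/opt/local/include"
     $ export LDFLAGS="-L/opt/local/lib"
     $ astbuildprog myprogram.c

     ## Ignore the environment variables; only use the option values:
     $ astbuildprog --noenv -Iother/dir -Lother myprogram.c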

   The first argument is considered to be the C source file that must be
compiled and linked.  Any other arguments (non-option tokens on the
command-line) will be passed onto the program when BuildProgram wants to
run it.  Recall that by default BuildProgram will run the program after
building it.  This behavior can be disabled with the ‘--onlybuild’
option.

   When the ‘--quiet’ option (see *note Operating mode options::) is not
called, BuildProgram will print the compilation and running commands.
Once your program grows and you break it up into multiple files (which
are much more easily managed with Make), you can use the linking flags
of the non-quiet output in your ‘Makefile’.

‘-c STR’
‘--cc=STR’
     C compiler to use for the compilation; if not given, environment
     variables will be used as described in the next paragraph.  If the
     compiler is in your system's search path, you can simply give its
     name, for example, ‘--cc=gcc’.  If it is not in your system's
     search path, you can give its full path, for example,
     ‘--cc=/path/to/your/custom/cc’.

     If this option has no value after parsing the command-line and all
     configuration files (see *note Configuration file precedence::),
     then BuildProgram will look into the following environment
     variables in the given order ‘CC’ and ‘GCC’.  If they are also not
     defined, BuildProgram will ultimately default to the ‘gcc’ command
     which is present in many systems (sometimes as a link to other
     compilers).

‘-I STR’
‘--includedir=STR’
     Directory to search for files that you ‘#include’ in your C
     program.  Note that headers relating to Gnuastro and its
     dependencies do not need this option.  This is only necessary if
     you want to use other headers.  It may be called multiple times and
     order matters.  This directory will be searched before those of
     Gnuastro's build and also the system search directories.  See *note
     Headers:: for a thorough introduction.

     From the GNU C preprocessor manual: "Add the directory ‘STR’ to the
     list of directories to be searched for header files.  Directories
     named by ‘-I’ are searched before the standard system include
     directories.  If the directory ‘STR’ is a standard system include
     directory, the option is ignored to ensure that the default search
     order for system directories and the special treatment of system
     headers are not defeated".

‘-L STR’
‘--linkdir=STR’
     Directory to search for compiled libraries to link the program
     with.  Note that all the directories that Gnuastro was built with
     will already be used by BuildProgram (GNU Libtool).  This option is
     only necessary if your libraries are in other directories.
     Multiple calls to this option are possible and order matters.  This
     directory will be searched before those of Gnuastro's build and
     also the system search directories.  See *note Linking:: for a
     thorough introduction.

‘-l STR’
‘--linklib=STR’
     Library to link with your program.  Note that all the libraries
     that Gnuastro was built with will already be linked by BuildProgram
     (GNU Libtool).  This option is only necessary if you want to link
     with other libraries.  Multiple calls to this option are possible
     and order matters.  This library will be linked before Gnuastro's
     library or its dependencies.  See *note Linking:: for a thorough
     introduction.

‘-O INT/STR’
‘--optimize=INT/STR’
     Compiler optimization level: 0 (for no optimization, good
     debugging), 1, 2, 3 (for the highest level of optimizations).  From
     the GNU Compiler Collection (GCC) manual: "Without any optimization
     option, the compiler's goal is to reduce the cost of compilation
     and to make debugging produce the expected results.  Statements are
     independent: if you stop the program with a break point between
     statements, you can then assign a new value to any variable or
     change the program counter to any other statement in the function
     and get exactly the results you expect from the source code.
     Turning on optimization flags makes the compiler attempt to improve
     the performance and/or code size at the expense of compilation time
     and possibly the ability to debug the program."  Please see your
     compiler's manual for the full list of acceptable values to this
     option.

‘-g’
‘--debug’
     Emit extra information in the compiled binary for use by a
     debugger.  When calling this option, it is best to explicitly
     disable optimization with ‘-O0’.  To combine both options you can
     run ‘-gO0’ (see *note Options:: for how short options can be merged
     into one).

‘-W STR’
‘--warning=STR’
     Print compiler warnings on command-line during compilation.
     "Warnings are diagnostic messages that report constructions that
     are not inherently erroneous but that are risky or suggest there
     may have been an error."  (from the GCC manual).  It is always
     recommended to compile your programs with warnings enabled.

     All compiler warning options that start with ‘W’ are usable by this
     option in BuildProgram also, see your compiler's manual for the
     full list.  Some of the most common values to this option are:
     ‘pedantic’ (Warnings related to standard C) and ‘all’ (all issues
     the compiler confronts).

‘-t’
‘--tag=STR’
     The language configuration information.  Libtool can build objects
     and libraries in many languages.  In many cases, it can identify
     the language automatically, but when it does not you can use this
     option to explicitly notify Libtool of the language.  The
     acceptable values are: ‘CC’ for C, ‘CXX’ for C++, ‘GCJ’ for Java,
     ‘F77’ for Fortran 77, ‘FC’ for Fortran, ‘GO’ for Go and ‘RC’ for
     Windows Resource.  Note that the Gnuastro library is not yet fully
     compatible with all these languages.

‘-b’
‘--onlybuild’
     Only build the program, do not run it.  By default, the built
     program is immediately run afterwards.

‘-d’
‘--deletecompiled’
     Delete the compiled binary file after running it.  This option is
     only relevant when the compiled program is run after being built.
     In other words, it is only relevant when ‘--onlybuild’ is not
     called.  It can be useful when you are busy testing a program or
     just want a fast result and the actual binary/compiled file is not
     of later use.

‘-a STR’
‘--la=STR’
     Use the given ‘.la’ file (Libtool control file) instead of the one
     that was produced from Gnuastro's configuration results.  The
     Libtool control file keeps all the necessary information for
     building and linking a program with a library built by Libtool.
     The default ‘prefix/lib/libgnuastro.la’ keeps all the information
     necessary to build a program using the Gnuastro library gathered
     during configure time (see *note Installation directory:: for
     prefix).  This option is useful when you prefer to use another
     Libtool control file.

‘-e’
‘--noenv’
     Do not use environment variables in the build, just use the values
     given to the options.  As described above, environment variables
     like ‘CC’, ‘GCC’, ‘LDFLAGS’, ‘CPPFLAGS’ will be read by default and
     used in the build if they have been defined.


File: gnuastro.info,  Node: Gnuastro library,  Next: Library demo programs,  Prev: BuildProgram,  Up: Library

12.3 Gnuastro library
=====================

Gnuastro library's programming constructs (function declarations,
macros, data structures, or global variables) are classified by context
into multiple header files (see *note Headers::)(1).  In this section,
the functions in each header will be discussed under a separate
sub-section, which includes the name of the header.  Assuming a function
declaration is in ‘headername.h’, you can include its declaration in
your source code with:

     # include <gnuastro/headername.h>

The names of all constructs in ‘headername.h’ are prefixed with
‘gal_headername_’ (or ‘GAL_HEADERNAME_’ for macros).  The ‘gal_’ prefix
stands for _G_NU _A_stronomy _L_ibrary.

   Gnuastro library functions are compiled into a single file which can
be linked on the command-line with the ‘-lgnuastro’ option.  See *note
Linking:: and *note Summary and example on libraries:: for an
introduction on linking and some fully working examples of the
libraries.

   Gnuastro's library is a high-level library which depends on lower
level libraries for some operations (see *note Dependencies::).
Therefore if at least one of the Gnuastro functions in your program
uses functions from the dependencies, you will also need to link those
dependencies after linking with Gnuastro.  See *note BuildProgram:: for
a convenient way to deal with the dependencies.  BuildProgram will take
care of the libraries to link with your program (which uses the Gnuastro
library), and can even run the built program afterwards.  Therefore it
allows you to conveniently focus on your exciting science/research when
using Gnuastro's libraries.

*Libraries are still under heavy development:* Gnuastro was initially
created to be a collection of command-line programs.  However, as the
programs and their shared functions grew, internal (not installed)
libraries were added.  Since the 0.2 release the libraries are
install-able, but they are still under heavy development and will
evolve significantly between releases, becoming more mature and stable
in due time.  This notice will be removed once they stabilize.  Check
the ‘NEWS’ file for interface changes.  If you use the Info version of
this manual (see *note Info::), you do not have to worry: the
documentation will correspond to your installed version.

* Menu:

* Configuration information::   General information about library config.
* Multithreaded programming::   Tools for easy multi-threaded operations.
* Library data types::          Definitions and functions for types.
* Pointers::                    Wrappers for easy working with pointers.
* Library blank values::        Blank values and functions to deal with them.
* Library data container::      General data container in Gnuastro.
* Dimensions::                  Dealing with coordinates and dimensions.
* Linked lists::                Various types of linked lists.
* Array input output::          Reading and writing images or cubes.
* Table input output::          Reading and writing table columns.
* FITS files::                  Working with FITS data.
* File input output::           Reading and writing to various file formats.
* World Coordinate System::     Dealing with the world coordinate system.
* Arithmetic on datasets::      Arithmetic operations on a dataset.
* Tessellation library::        Functions for working on tiles.
* Bounding box::                Finding the bounding box.
* Polygons::                    Working with the vertices of a polygon.
* Qsort functions::             Helper functions for Qsort.
* K-d tree::                    Space partitioning in K dimensions.
* Permutations::                Re-order (or permute) the values in a dataset.
* Matching::                    Matching catalogs based on position.
* Statistical operations::      Functions for basic statistics.
* Fitting functions::           Fit independent and measured variables.
* Binary datasets::             Datasets that can only have values of 0 or 1.
* Labeled datasets::            Working with Segmented/labeled datasets.
* Convolution functions::       Library functions to do convolution.
* Pooling functions::           Reduce size of input by statistical methods.
* Interpolation::               Interpolate (over blank values possibly).
* Warp library::                Warp pixel grid to a new one.
* Color functions::             Definitions and operations related to colors.
* Git wrappers::                Wrappers for functions in libgit2.
* Python interface::            Functions to help in writing Python wrappers.
* Unit conversion library::     Converting between recognized units.
* Spectral lines library::      Functions for operating on Spectral lines.
* Cosmology library::           Cosmological calculations.
* SAO DS9 library::             Take inputs from files generated by SAO DS9.

   ---------- Footnotes ----------

   (1) Within Gnuastro's source, all installed ‘.h’ files in
‘lib/gnuastro/’ are accompanied by a ‘.c’ file in ‘/lib/’.


File: gnuastro.info,  Node: Configuration information,  Next: Multithreaded programming,  Prev: Gnuastro library,  Up: Gnuastro library

12.3.1 Configuration information (‘config.h’)
---------------------------------------------

The ‘gnuastro/config.h’ header contains information about the full
Gnuastro installation on your system.  Gnuastro developers should note
that this is the only header that is not available within Gnuastro, it
is only available to a Gnuastro library user _after_ installation.
Within Gnuastro, ‘config.h’ (which is included in every Gnuastro ‘.c’
file, see *note Coding conventions::) has more than enough information
about the overall Gnuastro installation.

 -- Macro: GAL_CONFIG_VERSION
     This macro can be used as a string literal(1) containing the
     version of Gnuastro that is being used.  See *note Version
     numbering:: for the version formats.  For example:

          printf("Gnuastro version: %s\n", GAL_CONFIG_VERSION);

     or

          char *gnuastro_version=GAL_CONFIG_VERSION;

 -- Macro: GAL_CONFIG_HAVE_GSL_INTERP_STEFFEN
     GNU Scientific Library (GSL) is a mandatory dependency of Gnuastro
     (see *note GNU Scientific Library::).  The Steffen interpolation
     function that can be used in Gnuastro was introduced in GSL version
     2.0 (released in October 2015).  This macro will have a value of
     ‘1’ if the host GSL contains this feature at configure time, and
     ‘0’ otherwise.

 -- Macro: GAL_CONFIG_HAVE_FITS_IS_REENTRANT
     This macro will have a value of 1 when the CFITSIO of the host
     system has the ‘fits_is_reentrant’ function (available from CFITSIO
     version 3.30).  This function is used to see if CFITSIO was
     configured to read a FITS file simultaneously on different threads.

 -- Macro: GAL_CONFIG_HAVE_WCSLIB_VERSION
     WCSLIB is the reference library for world coordinate system
     transformation (see *note WCSLIB:: and *note World Coordinate
     System::).  However, only more recent versions of WCSLIB also
     provide its version number.  If the WCSLIB that is installed on the
     system provides its version (through the possibly existing
     ‘wcslib_version’ function), this macro will have a value of one,
     otherwise it will have a value of zero.

 -- Macro: GAL_CONFIG_HAVE_WCSLIB_DIS_H
     This macro has a value of 1 if the host's WCSLIB has the
     ‘wcslib/dis.h’ header for distortion-related operations.

 -- Macro: GAL_CONFIG_HAVE_WCSLIB_MJDREF
     This macro has a value of 1 if the host's WCSLIB reads and stores
     the ‘MJDREF’ FITS header keyword as part of its core ‘wcsprm’
     structure.

 -- Macro: GAL_CONFIG_HAVE_WCSLIB_OBSFIX
     This macro has a value of 1 if the host's WCSLIB supports the
     ‘OBSFIX’ feature (used by ‘wcsfix’ function to parse the input WCS
     for known errors).

 -- Macro: GAL_CONFIG_HAVE_PTHREAD_BARRIER
     The POSIX threads standard defines barriers as an optional
     requirement, so some operating systems choose to not include them.
     As one of the checks in the ‘./configure’ step, Gnuastro checks
     whether your system has POSIX thread barriers.  If it does, this
     macro will have a value of ‘1’, otherwise it will have a value of
     ‘0’.  See *note Implementation of pthread_barrier:: for more.

 -- Macro: GAL_CONFIG_SIZEOF_LONG
 -- Macro: GAL_CONFIG_SIZEOF_SIZE_T
     The size of (number of bytes in) the system's ‘long’ and ‘size_t’
     types.  Their values are commonly either 4 or 8 for 32-bit and
     64-bit systems.  You can also get this value with the expression
     '‘sizeof size_t’' for example, without having to include this
     header.
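
     For example, the short sketch below (assuming ‘stdio.h’ and
     ‘gnuastro/config.h’ have been included) simply reports both sizes:

          printf("long: %d bytes, size_t: %d bytes\n",
                 GAL_CONFIG_SIZEOF_LONG, GAL_CONFIG_SIZEOF_SIZE_T);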

 -- Macro: GAL_CONFIG_HAVE_LIBGIT2
     Libgit2 is an optional dependency of Gnuastro (see *note Optional
     dependencies::).  When it is installed and detected at configure
     time, this macro will have a value of ‘1’ (one).  Otherwise, it
     will have a value of ‘0’ (zero).  Gnuastro also comes with some
     wrappers to make it easier to use libgit2 (see *note Git
     wrappers::).

 -- Macro: GAL_CONFIG_HAVE_PYTHON
     Gnuastro can optionally provide a set of basic functions to
     facilitate wrapper libraries in Python (see *note Python
     interface::).  If a version of Python 3.X was found on the host
     system that has the necessary Numpy headers, this macro will be
     given a value of ‘1’.  Otherwise, it will be given a value of ‘0’
     and the Python interface functions won't be available in the
     host's Gnuastro library.

 -- Macro: GAL_CONFIG_HAVE_GNUMAKE_H
     Gnuastro provides a set of GNU Make extension functions (see *note
     Makefile extensions::).  In order to use those, the host should
     have ‘gnumake.h’ in its include paths.  This check is done at
     Gnuastro's configuration time.  If it was found, this macro is
     given a value of ‘1’, otherwise, it will have a value of ‘0’.

   ---------- Footnotes ----------

   (1) <https://en.wikipedia.org/wiki/String_literal>


File: gnuastro.info,  Node: Multithreaded programming,  Next: Library data types,  Prev: Configuration information,  Up: Gnuastro library

12.3.2 Multithreaded programming (‘threads.h’)
----------------------------------------------

In recent years, CPU frequencies have stopped increasing significantly.
Instead, CPUs are being manufactured with more cores, enabling more
than one operation (thread) at each instant.  This can be very useful
for speeding up many aspects of processing, in particular image
processing.

   Most of the programs in Gnuastro utilize multi-threaded programming
for the CPU intensive processing steps.  This can potentially lead to a
significant decrease in the running time of a program, see *note A note
on threads::.  In terms of reading the code, you do not need to know
anything about multi-threaded programming.  You can simply follow the
case where only one thread is to be used.  In these cases, threads are
not used and can be completely ignored.

   When the C language was defined (when the K&R book was written),
using threads was not common, so C's threading capabilities are not
introduced there.  Gnuastro uses POSIX threads for multi-threaded
programming, defined in the ‘pthread.h’ system-wide header.  There are
various resources for learning to use POSIX threads.  An excellent
tutorial (https://computing.llnl.gov/tutorials/pthreads/) is provided
by the Lawrence Livermore National Laboratory; with abundant figures to
better understand the concepts, it is a very good start.  The book
'Advanced programming in the Unix environment'(1), by Richard Stevens
and Stephen Rago, Addison-Wesley, 2013 (third edition), also has two
chapters explaining the POSIX thread constructs, which can be very
helpful.

   An alternative to POSIX threads is OpenMP, but POSIX threads are
lower level, allowing much more control while still being easy to
understand (see *note Why C::).  All the situations in which Gnuastro
currently uses threads are completely independent, with no need for
coordination between the threads.  Such problems are known as
"embarrassingly parallel" problems.  They are some of the simplest
problems to solve with threads and are also the ones that benefit most
from them; see the LLNL introduction(2).

   One very useful POSIX thread concept is ‘pthread_barrier’.
Unfortunately, it is only an optional feature in the POSIX standard, so
some operating systems do not include it.  Therefore in *note
Implementation of pthread_barrier::, we introduce our own
implementation.  This is a rather technical section only necessary for
more technical readers and you can safely ignore it.  Following that, we
describe the helper functions in this header that can greatly simplify
writing a multi-threaded program, see *note Gnuastro's thread related
functions:: for more.

* Menu:

* Implementation of pthread_barrier::  Some systems do not have pthread_barrier
* Gnuastro's thread related functions::  Functions for managing threads.

   ---------- Footnotes ----------

   (1) Do not let the title scare you!  The two chapters on
Multi-threaded programming are very self-sufficient and do not need any
more knowledge than K&R.

   (2) <https://computing.llnl.gov/tutorials/parallel_comp/>


File: gnuastro.info,  Node: Implementation of pthread_barrier,  Next: Gnuastro's thread related functions,  Prev: Multithreaded programming,  Up: Multithreaded programming

12.3.2.1 Implementation of ‘pthread_barrier’
............................................

One optional feature of the POSIX Threads standard is the
‘pthread_barrier’ concept.  It is a very useful high-level construct
that allows independent threads to "wait" behind a "barrier" until the
rest have finished.  Barriers can thus greatly simplify the code in a
multi-threaded program, so they are heavily used in Gnuastro.
However, since it is an optional feature in the POSIX standard, some
operating systems do not include it.  So to make Gnuastro portable, we
have written our own implementation of those ‘pthread_barrier’
functions.

   At ‘./configure’ time, Gnuastro will check if ‘pthread_barrier’
constructs are available on your system or not.  If ‘pthread_barrier’ is
not available, our internal implementation will be compiled into the
Gnuastro library and the definitions and declarations below will be
usable in your code with ‘#include <gnuastro/threads.h>’.

 -- Type: pthread_barrierattr_t
     Type to specify the attributes of a POSIX threads barrier.

 -- Type: pthread_barrier_t
     Structure defining the POSIX threads barrier.

 -- Function:
          int
          pthread_barrier_init (pthread_barrier_t *b,
          pthread_barrierattr_t *attr, unsigned int limit)
     Initialize the barrier ‘b’, with the attributes ‘attr’ and total
     ‘limit’ (a number of) threads that must wait behind it.  This
     function must be called before spinning off threads.

 -- Function:
          int
          pthread_barrier_wait (pthread_barrier_t *b)
     This function is called within each thread, just before it is ready
     to return.  Once a thread's function hits this, it will "wait"
     until all the other functions are also finished.

 -- Function:
          int
          pthread_barrier_destroy (pthread_barrier_t *b)
     Destroy all the information in the barrier structure.  This should
     be called by the function that spun-off the threads after all the
     threads have finished.

     *Destroy a barrier before re-using it:* It is very important to
     destroy the barrier before (possibly) reusing it.  This destroy
     function not only destroys the internal structures, it also waits
     (in 1 microsecond intervals, so you will not notice!)  until all
     the threads do not need the barrier structure any more.  If you
     immediately start spinning off new threads with a not-destroyed
     barrier, then the internal structure of the remaining threads will
     get mixed with the new ones and you will get very strange and
     apparently random errors that are extremely hard to debug.
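
   As a minimal sketch of how these constructs fit together (assuming
the program is built with *note BuildProgram::, or manually linked with
‘-lgnuastro -pthread’; the thread count and printed message are
arbitrary), consider the short program below.  The spinner-off thread
also waits behind the barrier, so the limit is one more than the number
of worker threads.

     #include <stdio.h>
     #include <pthread.h>

     #include <gnuastro/threads.h> /* Barrier implementation if needed. */

     #define NUMTHREADS 4

     static pthread_barrier_t b;

     static void *
     worker(void *arg)
     {
       size_t id = *(size_t *)arg;
       printf("thread %zu done\n", id);
       pthread_barrier_wait(&b);      /* Wait behind the barrier.      */
       return NULL;
     }

     int
     main(void)
     {
       pthread_t t[NUMTHREADS];
       size_t i, ids[NUMTHREADS];

       /* The spinner-off also waits, so the limit is NUMTHREADS+1. */
       pthread_barrier_init(&b, NULL, NUMTHREADS+1);

       for(i=0; i<NUMTHREADS; ++i)
         { ids[i]=i; pthread_create(&t[i], NULL, worker, &ids[i]); }

       pthread_barrier_wait(&b);      /* Wait until all are finished.  */
       pthread_barrier_destroy(&b);   /* Destroy before any re-use.    */
       return 0;
     }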


File: gnuastro.info,  Node: Gnuastro's thread related functions,  Prev: Implementation of pthread_barrier,  Up: Multithreaded programming

12.3.2.2 Gnuastro's thread related functions
............................................

The POSIX Threads functions offered in the C library are very low-level
and offer a great range of control over the properties of the threads.
So if you are interested in customizing your tools for complicated
thread applications, it is strongly encouraged to get a nice familiarity
with them.  Some resources were introduced in *note Multithreaded
programming::.

   However, in many cases in astronomical data analysis, you do not
need communication between threads and each target operation can be
done independently.  Since such operations are very common, Gnuastro
provides the tools below to facilitate the creation and management of
jobs without any particular knowledge of POSIX Threads.  The most
interesting high-level functions of this section are
‘gal_threads_number’ and ‘gal_threads_spin_off’, which respectively
identify the number of threads on the system and spin off threads.  You
can see a demonstration of using these functions in *note Library demo
- multi-threaded operation::.

 -- C struct: gal_threads_params
     Structure keeping the parameters of each thread.  When each thread
     is created, a pointer to this structure is passed to it.  The
     ‘params’ element can be the pointer to a structure defined by the
     user which contains all the necessary parameters to pass onto the
     worker function.  The rest of the elements within this structure
     are set internally by ‘gal_threads_spin_off’ and are relevant to
     the worker function.
          struct gal_threads_params
          {
            size_t            id; /* Id of this thread.                  */
            void         *params; /* User-identified pointer.            */
            size_t       *indexs; /* Target indices given to this thread. */
            pthread_barrier_t *b; /* Barrier for all threads.            */
          };

 -- Function:
          size_t
          gal_threads_number ()
     Return the number of threads that the operating system has
     available for your program.  This number is usually fixed for a
     single machine and does not change.  So this function is useful
     when you want to run your program on different machines (with
     different CPUs).

 -- Function:
          void
          gal_threads_spin_off (void *(*worker)(void *), void
          *caller_params, size_t numactions, size_t numthreads, size_t
          minmapsize, int quietmmap)
     Distribute ‘numactions’ jobs between ‘numthreads’ threads and
     spin-off each thread by calling the ‘worker’ function.  The
     ‘caller_params’ pointer will also be passed to ‘worker’ as part of
     the ‘gal_threads_params’ structure.  For a fully working example of
     this function, please see *note Library demo - multi-threaded
     operation::.

     If there are many jobs (millions or billions) to organize, memory
     issues may become important.  With ‘minmapsize’ you can specify the
     minimum byte-size to allocate the necessary space in a
     memory-mapped file or alternatively in RAM. If ‘quietmmap’ is
     non-zero, then a warning will be printed upon creating a
     memory-mapped file.  For more on Gnuastro's memory management, see
     *note Memory management::.
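
     As a minimal sketch of the calling pattern (the ‘myparams’
     structure, the ‘worker_on_thread’ function and the doubling
     operation are only hypothetical illustrations), each worker parses
     its own row of job indices until it reaches ‘GAL_BLANK_SIZE_T’:

          #include <stdio.h>
          #include <stdlib.h>
          #include <pthread.h>

          #include <gnuastro/blank.h>
          #include <gnuastro/threads.h>

          /* User-defined parameters passed to every thread. */
          struct myparams { double *arr; };

          /* Worker: double the elements assigned to this thread. */
          static void *
          worker_on_thread(void *in_prm)
          {
            struct gal_threads_params *tprm=in_prm;
            struct myparams *p=tprm->params;
            size_t i;

            for(i=0; tprm->indexs[i]!=GAL_BLANK_SIZE_T; ++i)
              p->arr[ tprm->indexs[i] ] *= 2.0;

            /* Wait for the other threads, then return. */
            if(tprm->b) pthread_barrier_wait(tprm->b);
            return NULL;
          }

          int
          main(void)
          {
            size_t i, num=100;
            struct myparams p;

            p.arr=malloc(num * sizeof *p.arr);
            for(i=0;i<num;++i) p.arr[i]=i;

            /* One job per element; '-1' keeps 'indexs' in RAM. */
            gal_threads_spin_off(worker_on_thread, &p, num,
                                 gal_threads_number(), -1, 1);

            printf("arr[10]=%g\n", p.arr[10]);
            free(p.arr);
            return 0;
          }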

 -- Function:
          void
          gal_threads_attr_barrier_init (pthread_attr_t *attr,
          pthread_barrier_t *b, size_t limit)
     This is a low-level function in case you do not want to use
     ‘gal_threads_spin_off’.  It will initialize the general thread
     attribute ‘attr’ and the barrier ‘b’ with ‘limit’ threads to wait
     behind the barrier.  For maximum efficiency, the threads
     initialized with this function will be detached.  Therefore no
     communication is possible between these threads and in particular
     ‘pthread_join’ will not work on these threads.  You have to use the
     barrier constructs to wait for all threads to finish.

 -- Function:
          char *
          gal_threads_dist_in_threads (size_t numactions, size_t
          numthreads, size_t minmapsize, int quietmmap, size_t **indexs,
          size_t *icols)
     This is a low-level function in case you do not want to use
     ‘gal_threads_spin_off’.  The job of this function is to distribute
     ‘numactions’ jobs/actions in ‘numthreads’ threads.  To do this, it
     will assign each job an ID, ranging from 0 to ‘numactions’-1.  The
     output is the allocated ‘*indexs’ array and the ‘*icols’ number.
     In memory, it is just a simple 1D array that has ‘numthreads’
     $\times$ ‘*icols’ elements.  But you can visualize it as a 2D array
     with ‘numthreads’ rows and ‘*icols’ columns.  For more on the logic
     of the distribution, see below.

     When you have millions/billions of jobs to distribute, ‘indexs’
     will become very large.  For memory management (when to use a
     memory-mapped file, and when to use RAM), you need to specify the
     ‘minmapsize’ and ‘quietmmap’ arguments.  For more on memory
     management, see *note Memory management::.  In general, if your
     distributed jobs will not be on the scale of billions (and you want
     everything to always be written in RAM), just set ‘minmapsize=-1’
     and ‘quietmmap=1’.

     When ‘indexs’ is actually in a memory-mapped file, this function
     will return a string containing the name of the file (that you can
     later give to ‘gal_pointer_mmap_free’ to free/delete).  When
     ‘indexs’ is in RAM, this function will return a ‘NULL’ pointer.  So
     after you are finished with ‘indexs’, you can free it like this:

          char *mmapname;
          int quietmmap=1;
          size_t *indexs, thrdcols;
          size_t numactions=5000, minmapsize=-1;
          size_t numthreads=gal_threads_number();

          /* Distribute the jobs. */
          mmapname=gal_threads_dist_in_threads(numactions, numthreads,
                                               minmapsize, quietmmap,
                                               &indexs, &thrdcols);

          /* Do any processing you want... */

          /* Free the 'indexs' array. */
          if(mmapname) gal_pointer_mmap_free(&mmapname, quietmmap);
          else         free(indexs);

     Here is a brief description of the reasoning behind the ‘indexs’
     array and how the jobs are distributed.  Let's assume you have $A$
     actions (where there is only one function and the input values
     differ for each action) and $T$ threads available to the system
     with $A>T$ (common values for these two would be $A>1000$ and
     $T<10$).  Spinning off a thread is not a cheap job and requires a
     significant number of CPU cycles.  Therefore, creating $A$ threads
     is not the best way to address such a problem.  The most efficient
     way to manage the actions is such that only $T$ threads are
     created, and each thread works on a list of actions identified for
     it in series (one after the other).  This way your CPU will get all
     the actions done with minimal overhead.

     The purpose of this function is to do what we explained above: each
     row in the ‘indexs’ array contains the indices of actions which
     must be done by one thread (so it has ‘numthreads’ rows with
     ‘*icols’ columns).  However, when using ‘indexs’, you do not have
     to know the number of columns.  It is guaranteed that all the rows
     finish with ‘GAL_BLANK_SIZE_T’ (see *note Library blank values::).
     The ‘GAL_BLANK_SIZE_T’ macro plays a role very similar to a
     string's ‘\0’: every row finishes with this macro, so you can
     easily stop parsing the indexes in the row as soon as you confront
     ‘GAL_BLANK_SIZE_T’.  For a real example, please see the program in
     ‘tests/lib/multithread.c’.


File: gnuastro.info,  Node: Library data types,  Next: Pointers,  Prev: Multithreaded programming,  Up: Gnuastro library

12.3.3 Library data types (‘type.h’)
------------------------------------

Data in astronomy can have many types, numeric (numbers) and strings
(names, identifiers).  The former can also be divided into integers and
floats, see *note Numeric data types:: for a thorough discussion of the
different numeric data types and which one is useful for different
contexts.

   To deal with the very large diversity of types that are available
(and used in different contexts), in Gnuastro each type is identified
with a global integer variable with a fixed name.  This variable is
then passed onto functions that can work on any type, or is stored in
Gnuastro's *note Generic data container:: as one piece of meta-data.

   The actual values within these integer constants are irrelevant and
you should never rely on them.  When you need to check a type,
explicitly use the named variables below.  If you want to check against
more than one type, you can use C's ‘switch’ statement, as in the
sketch below.
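
   For example, a check over more than one type might look like the
following sketch (assuming ‘stdio.h’ is included, and that ‘type’ is a
‘uint8_t’ variable holding one of the constants below):

     switch(type)
       {
       case GAL_TYPE_FLOAT32: puts("single precision floating point");
                              break;
       case GAL_TYPE_FLOAT64: puts("double precision floating point");
                              break;
       default:
         fprintf(stderr, "type code %d is not recognized\n", (int)type);
       }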

   Since Gnuastro heavily deals with file input-output, the types it
defines are fixed-width types.  These types are portable to all systems
and are defined in the standard C header ‘stdint.h’.  You do not need
to include this header; it is included by any Gnuastro header that
deals with the different types.  By contrast, the most commonly used
types in a C (or C++) program (for example, ‘int’ or ‘long’) are not
defined by their exact width (storage size), but by their minimum
storage.  So for example, on some systems, ‘int’ may be 2 bytes
(16-bit, the minimum required by the standard) and on others it may be
4 bytes (32-bit, common in modern systems).

   With every type, a unique "blank" value (or place-holder showing the
absence of data) can be defined.  Please see *note Library blank
values:: for constants that Gnuastro recognizes as a blank value for
each type.  See *note Numeric data types:: for more explanation on the
limits and particular aspects of each type.

 -- Global integer: GAL_TYPE_INVALID
     This is just a place-holder to specifically mark that no type has
     been set.

 -- Global integer: GAL_TYPE_BIT
     Identifier for a bit-stream.  Currently no program in Gnuastro
     works directly on bits, but features will be added in the future.

 -- Global integer: GAL_TYPE_UINT8
     Identifier for an unsigned, 8-bit integer type: ‘uint8_t’ (from
     ‘stdint.h’), or an ‘unsigned char’ in most modern systems.

 -- Global integer: GAL_TYPE_INT8
     Identifier for a signed, 8-bit integer type: ‘int8_t’ (from
     ‘stdint.h’), or a ‘signed char’ in most modern systems.

 -- Global integer: GAL_TYPE_UINT16
     Identifier for an unsigned, 16-bit integer type: ‘uint16_t’ (from
     ‘stdint.h’), or an ‘unsigned short’ in most modern systems.

 -- Global integer: GAL_TYPE_INT16
     Identifier for a signed, 16-bit integer type: ‘int16_t’ (from
     ‘stdint.h’), or a ‘short’ in most modern systems.

 -- Global integer: GAL_TYPE_UINT32
     Identifier for an unsigned, 32-bit integer type: ‘uint32_t’ (from
     ‘stdint.h’), or an ‘unsigned int’ in most modern systems.

 -- Global integer: GAL_TYPE_INT32
     Identifier for a signed, 32-bit integer type: ‘int32_t’ (from
     ‘stdint.h’), or an ‘int’ in most modern systems.

 -- Global integer: GAL_TYPE_UINT64
     Identifier for an unsigned, 64-bit integer type: ‘uint64_t’ (from
     ‘stdint.h’), or an ‘unsigned long’ in most modern 64-bit systems.

 -- Global integer: GAL_TYPE_INT64
     Identifier for a signed, 64-bit integer type: ‘int64_t’ (from
     ‘stdint.h’), or a ‘long’ in most modern 64-bit systems.

 -- Global integer: GAL_TYPE_INT
     Identifier for an ‘int’ type.  This is just an alias to ‘int16’, or
     ‘int32’ types, depending on the system.

 -- Global integer: GAL_TYPE_UINT
     Identifier for an ‘unsigned int’ type.  This is just an alias to
     ‘uint16’, or ‘uint32’ types, depending on the system.

 -- Global integer: GAL_TYPE_ULONG
     Identifier for an ‘unsigned long’ type.  This is just an alias to
     ‘uint32’, or ‘uint64’ types for 32-bit, or 64-bit systems
     respectively.

 -- Global integer: GAL_TYPE_LONG
     Identifier for a ‘long’ type.  This is just an alias to ‘int32’, or
     ‘int64’ types for 32-bit, or 64-bit systems respectively.

 -- Global integer: GAL_TYPE_SIZE_T
     Identifier for a ‘size_t’ type.  This is just an alias to ‘uint32’,
     or ‘uint64’ types for 32-bit, or 64-bit systems respectively.

 -- Global integer: GAL_TYPE_FLOAT32
     Identifier for a 32-bit single precision floating point type or
     ‘float’ in C.

 -- Global integer: GAL_TYPE_FLOAT64
     Identifier for a 64-bit double precision floating point type or
     ‘double’ in C.

 -- Global integer: GAL_TYPE_COMPLEX32
     Identifier for a complex number composed of two ‘float’ types.
     Note that the complex type is not yet fully implemented in all
     Gnuastro's programs.

 -- Global integer: GAL_TYPE_COMPLEX64
     Identifier for a complex number composed of two ‘double’ types.
     Note that the complex type is not yet fully implemented in all
     Gnuastro's programs.

 -- Global integer: GAL_TYPE_STRING
     Identifier for a string of characters (‘char *’).

 -- Global integer: GAL_TYPE_STRLL
     Identifier for a linked list of string of characters
     (‘gal_list_str_t’, see *note List of strings::).

The functions below are defined to make working with the integer
constants above easier; in all of them, the constants above can be used
for the ‘type’ input argument.

 -- Function:
          size_t
          gal_type_sizeof (uint8_t type)
     Return the number of bytes occupied by ‘type’.  Internally, this
     function uses C's ‘sizeof’ operator to measure the size of each
     type.  For strings, this function will return the size of ‘char *’.

 -- Function:
          char *
          gal_type_name (uint8_t type, int long_name)
     Return a string literal that contains the name of ‘type’.  It can
     return both short and long formats of the type names (for example,
     ‘f32’ and ‘float32’).  If ‘long_name’ is non-zero, the long format
     will be returned, otherwise the short name will be returned.  The
     output string is statically allocated, so it should not be freed.
     This function is the inverse of the ‘gal_type_from_name’ function.
     For the full list of names/strings that this function will return,
     see *note Numeric data types::.

 -- Function:
          uint8_t
          gal_type_from_name (char *str)
     Return the Gnuastro integer constant that corresponds to the string
     ‘str’.  This function is the inverse of the ‘gal_type_name’
     function and accepts both the short and long formats of each type.
     For the full list of names/strings that this function will return,
     see *note Numeric data types::.
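
     For example, the short sketch below round-trips the short name of
     a type through both functions (the printed line would be ‘short:
     f32, long: float32’):

          uint8_t t=gal_type_from_name("f32");
          printf("short: %s, long: %s\n",
                 gal_type_name(t, 0), gal_type_name(t, 1));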

 -- Function:
          void
          gal_type_min (uint8_t type, void *in)
     Put the minimum possible value of ‘type’ in the space pointed to by
     ‘in’.  Since the value can have any type, this function does not
     return anything, it assumes the space for the given type is
     available to ‘in’ and writes the value there.  Here is one example:
          int32_t min;
          gal_type_min(GAL_TYPE_INT32, &min);

     Note: Do not use the minimum value for a blank value of a general
     (initially unknown) type, please use the constants/functions
     provided in *note Library blank values:: for the definition and
     usage of blank values.

 -- Function:
          void
          gal_type_max (uint8_t type, void *in)
     Put the maximum possible value of ‘type’ in the space pointed to by
     ‘in’.  Since the value can have any type, this function does not
     return anything, it assumes the space for the given type is
     available to ‘in’ and writes the value there.  Here is one example:
          uint16_t max;
          gal_type_max(GAL_TYPE_UINT16, &max);

     Note: Do not use the maximum value for a blank value of a general
     (initially unknown) type, please use the constants/functions
     provided in *note Library blank values:: for the definition and
     usage of blank values.

 -- Function:
          int
          gal_type_is_int (uint8_t type)
     Return 1 if the type is an integer (any width and any sign), and 0
     otherwise.

 -- Function:
          int
          gal_type_is_list (uint8_t type)
     Return 1 if the type is a linked list and zero otherwise.

 -- Function:
          int
          gal_type_out (int first_type, int second_type)
     Return the larger of the two given types which can be used for the
     type of the output of an operation involving the two input types.
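
     For example, the sketch below finds a type that can safely hold
     the output of an operation between a 32-bit integer and a 32-bit
     floating point dataset (the choice of input types is arbitrary):

          int otype=gal_type_out(GAL_TYPE_INT32, GAL_TYPE_FLOAT32);
          printf("Output type: %s\n", gal_type_name(otype, 1));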

 -- Function:
          char *
          gal_type_bit_string (void *in, size_t size)
     Return the bit-string in the ‘size’ bytes that ‘in’ points to.  The
     string is dynamically allocated and must be freed afterwards.  You
     can use it to inspect the bits within one region of memory.  Here
     is one short example:

          int32_t a=2017;
          char *bitstr=gal_type_bit_string(&a, 4);
          printf("%d: %s (%X)\n", a, bitstr, a);
          free(bitstr);

     which will produce:
          2017: 11100001000001110000000000000000  (7E1)

     As the example above shows, the bit-string is not the most
     efficient way to inspect bits.  If you are familiar with
     hexadecimal notation, it is much more compact, see
     <https://en.wikipedia.org/wiki/Hexadecimal>.  You can use
     ‘printf’'s ‘%x’ or ‘%X’ to print integers in hexadecimal format.

 -- Function:
          char *
          gal_type_to_string (void *ptr, uint8_t type, int
          quote_if_str_has_space);
     Read the contents of the memory that ‘ptr’ points to (assuming it
     has type ‘type’) and print it into an allocated string which is
     returned.

     If the memory is a string of characters and
     ‘quote_if_str_has_space’ is non-zero, the output string will have
     double-quotes around it if it contains space characters.  Also,
     note that in this case, ‘ptr’ must be a pointer to an array of
     characters (or ‘char **’), as in the example below (which will put
     ‘"sample string"’ into ‘out’):

          char *out, *string="sample string";
          out = gal_type_to_string(&string, GAL_TYPE_STRING, 1);

 -- Function:
          int
          gal_type_from_string (void **out, char *string, uint8_t type)
     Read a string as a given data type and put a pointer to it in
     ‘*out’.  When ‘*out!=NULL’, then it is assumed to be already
     allocated and the value will be simply put there.  If ‘*out==NULL’,
     then space will be allocated for the given type and the string will
     be read into that type.

     Note that when we are dealing with a string type, ‘*out’ should be
     interpreted as ‘char **’ (one element in an array of pointers to
     different strings).  In other words, ‘out’ should be ‘char ***’.

     This function can be used to fill in arrays of numbers from strings
     (in an already allocated data structure), or add nodes to a linked
     list (if the type is a list type).  For an array, you have to pass
     the pointer to the ‘i’th element where you want the value to be
     stored, for example, ‘&(array[i])’.

     If the string was successfully parsed to the requested type, this
     function will return a ‘0’ (zero), otherwise it will return ‘1’
     (one).  This output format will help you check the status of the
     conversion in a code like the example below where we will try
     reading a string as a single precision floating point number.

          float out;
          void *outptr=&out;
          if( gal_type_from_string(&outptr, string, GAL_TYPE_FLOAT32) )
            {
              fprintf(stderr, "%s could not be read as float32\n", string);
              exit(EXIT_FAILURE);
            }

     When you need to read many numbers into an array, ‘out’ would be an
     array, and you can simply increment ‘outptr=out+i’ (where you
     increment ‘i’).

 -- Function:
          void *
          gal_type_string_to_number (char *string, uint8_t *type)
     Read ‘string’ into the smallest type that can host the number.
     The allocated space for the number will be returned and the type
     of the number will be put into the memory that ‘type’ points to.
     If ‘string’ could not be read as a number, this function will
     return ‘NULL’.

     This function first calls the C library's ‘strtod’ function to read
     ‘string’ as a double-precision floating point number.  When
     successful, it will check the value to put it in the smallest
     numerical data type that can handle it; for example, ‘120’ and
     ‘50000’ will be read as a signed 8-bit integer and unsigned 16-bit
     integer types.  When reading as an integer, the C library's
     ‘strtol’ function is used (in base-10) to parse the string again.
     This re-parsing as an integer is necessary because integers with
     many digits (for example, the Unix epoch seconds) will not be
     accurately stored as a floating point and we cannot use the result
     of ‘strtod’.

     When ‘string’ is successfully parsed as a number _and_ there is ‘.’
     in ‘string’, it will force the number into floating point types.
     For example, ‘"5"’ is read as an integer, while ‘"5."’ or ‘"5.0"’,
     or ‘"5.00"’ will be read as a floating point (single-precision).

     For floating point types, this function will count the number of
     significant digits and determine if the given string is single or
     double precision as described in *note Numeric data types::.

     For integers, negative numbers will always be placed in signed
     types (as expected).  If a positive integer falls below the maximum
     of a signed type of a certain width, it will be signed (for
     example, ‘10’ and ‘150’ will be defined as a signed and unsigned
     8-bit integer respectively).  In other words, even though ‘10’ can
     be unsigned, it will be read as a signed 8-bit integer.  This is
     done to respect the C implicit type conversion in binary operators,
     where signed integers will be interpreted as unsigned, when the
     other operand is an unsigned integer of the same width.

     For example, see the short program below.  It will print ‘-50 is
     larger than 100000’ (which is wrong!).  This happens because when
     the negative number is interpreted as an unsigned 32-bit integer,
     its value wraps around: $-50$ becomes $4294967296-50=4294967246$,
     which is indeed larger than 100000 (recall that $4294967295$ is
     the largest unsigned 32-bit integer, see *note Numeric data
     types::).

          #include <stdio.h>
          #include <stdlib.h>
          #include <stdint.h>

          int
          main(void)
          {
            int32_t  a=-50;
            uint32_t b=100000;
            printf("%d is %s than %d\n", a,
                   a>b ? "larger" : "less or equal", b);
            return 0;
          }

     However, if we read 100000 as a signed 32-bit integer, there will
     not be any problem and the printed sentence will be logically
     correct (for someone who does not know anything about numeric data
     types: users of your programs).  For the advantages of integers,
     see *note Integer benefits and pitfalls::.


File: gnuastro.info,  Node: Pointers,  Next: Library blank values,  Prev: Library data types,  Up: Gnuastro library

12.3.4 Pointers (‘pointer.h’)
-----------------------------

Pointers play an important role in the C programming language.  As the
name suggests, they _point_ to a byte in memory (like an address in a
city).  The C programming language gives you complete freedom in how to
use the byte (and the bytes that follow it).  Pointers are thus a very
powerful feature of C. However, as the saying goes: "With great power
comes great responsibility", so they must be approached with care.  The
functions in this header are not very complex, they are just wrappers
over some basic pointer functionality regarding pointer arithmetic and
allocation (in memory or HDD/SSD).

 -- Function:
          void *
          gal_pointer_increment (void *pointer, size_t increment,
          uint8_t type)
     Return a pointer to an element that is ‘increment’ elements ahead
     of ‘pointer’, assuming each element has type of ‘type’.  For the
     type codes, see *note Library data types::.

     When working with the ‘array’ elements of ‘gal_data_t’, we are
     actually dealing with ‘void *’ pointers.  However, pointer
     arithmetic does not apply to ‘void *’, because the system does not
     know how many bytes there are in each element to increment the
     pointer respectively.  This function will use the given ‘type’ to
     calculate where the incremented element is located in memory.
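
     For example, assuming a hypothetical ‘gal_data_t *input’ whose
     ‘array’ contains 32-bit floating point values, the sketch below
     prints its fifth element:

          float *fifth=gal_pointer_increment(input->array, 4,
                                             GAL_TYPE_FLOAT32);
          printf("Fifth element: %g\n", *fifth);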

 -- Function:
          size_t
          gal_pointer_num_between (void *earlier, void *later, uint8_t
          type)
     Return the number of elements (in the given ‘type’) between
     ‘earlier’ and ‘later’.  For the type codes, see *note Library data
     types::.

 -- Function:
          void *
          gal_pointer_allocate (uint8_t type, size_t size, int clear,
          const char *funcname, const char *varname)
     Allocate an array of type ‘type’ with ‘size’ elements in RAM (for
     the type codes, see *note Library data types::).  If ‘clear!=0’,
     then the allocated space is set to zero (cleared).

     This is effectively just a wrapper around C's ‘malloc’ or ‘calloc’
     functions, but it takes Gnuastro's integer type codes and will
     abort with a clear error if the allocation was not successful.
     The number of allocated bytes is the value given to
     ‘size’ that is multiplied by the returned value of
     ‘gal_type_sizeof’ for the given type.  So if you want to allocate
     space for an array of strings you should pass the type
     ‘GAL_TYPE_STRING’.  Otherwise, if you just want space for one
     string (for example, 6 bytes for ‘hello’, including the
     string-termination character), you should set the type
     ‘GAL_TYPE_UINT8’.

     When space cannot be allocated, this function will abort the
     program with a message containing the reason for the failure.
     ‘funcname’ (name of the function calling this function) and
     ‘varname’ (name of variable that needs this space) will be used in
     this error message if they are not ‘NULL’.  In most modern
     compilers, you can use the generic ‘__func__’ variable for
     ‘funcname’.  In this way, you do not have to manually copy and
     paste the function name or worry about it changing later
     (‘__func__’ was standardized in C99).  To avoid memory leaks, make
     sure to free the allocated array once you are done with it (the
     same applies inside any of your own functions that use this
     function to allocate arrays).
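
     For example, the sketch below allocates (and clears) space for 100
     single-precision floating point elements, then frees it (the
     variable name ‘arr’ is arbitrary):

          float *arr=gal_pointer_allocate(GAL_TYPE_FLOAT32, 100, 1,
                                          __func__, "arr");

          /* ... use the 100 cleared elements of 'arr' ... */

          free(arr);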

 -- Function:
          void *
          gal_pointer_allocate_ram_or_mmap (uint8_t type, size_t size,
          int clear, size_t minmapsize, char **mmapname, int quietmmap,
          const char *funcname, const char *varname)
     Allocate the given space either in RAM or in a memory-mapped file.
     This function is just a high-level wrapper to
     ‘gal_pointer_allocate’ (to allocate in RAM) or
     ‘gal_pointer_mmap_allocate’ (to use a memory-mapped file).  For
     more on memory management in Gnuastro, please see *note Memory
     management::.  The various arguments are more fully explained in
     the two functions above.

 -- Function:
          void *
          gal_pointer_mmap_allocate (size_t size, uint8_t type, int
          clear, char **mmapname)
     Allocate the necessary space to keep ‘size’ elements of type ‘type’
     in HDD/SSD (a file, not in RAM). For the type codes, see *note
     Library data types::.  If ‘clear!=0’, then the allocated space will
     also be cleared.  The allocation is done using C's ‘mmap’ function.
     The name of the file containing the allocated space is an allocated
     string that will be put in ‘*mmapname’.

     Note that the kernel does not allow an infinite number of memory
     mappings to files.  So it is not recommended to use this function
     with every allocation.  The best-case scenario to use this function
     is for arrays that are very large and can fill up the RAM. Keep the
     smaller arrays in RAM, which is faster and can have a
     (theoretically) unlimited number of allocations.

     When you are done with the dataset and do not need it anymore, do
     not use ‘free’ (the dataset is not in RAM). Just delete the file
     (and the allocated space for the filename) with the commands below,
     or simply use ‘gal_pointer_mmap_free’.

          remove(mmapname);
          free(mmapname);

 -- Function:
          void
          gal_pointer_mmap_free (char **mmapname, int quietmmap)
     "Free" (actually delete) the memory-mapped file that is named
     ‘*mmapname’, then free the string.  If ‘quietmmap’ is non-zero,
     then a warning will be printed for the user to know that the given
     file has been deleted.


File: gnuastro.info,  Node: Library blank values,  Next: Library data container,  Prev: Pointers,  Up: Gnuastro library

12.3.5 Library blank values (‘blank.h’)
---------------------------------------

When the position of an element in a dataset is important (for example,
a pixel in an image), a place-holder is necessary for the element if we
do not have a value to fill it with (for example, the CCD cannot read
those pixels).  We cannot simply shift all the other pixels to fill in
the one we have no value for.  In other cases, it often occurs that the
field of sky that you are studying is not a clean rectangle to nicely
fit into the boundaries of an image.  You need a way to separate the
pixels outside your scientific field from those inside it.  Blank values
act as these place holders in a dataset.  They have no usable value but
they have a position.

   Every type needs a corresponding blank value (see *note Numeric data
types:: and *note Library data types::).  Floating point types have a
unique value, defined by the IEEE standard and known as Not-a-Number
(or NaN), that is recognized by the compiler.  However, integer and
string types do not have any standard blank value.  For integers,
Gnuastro takes an extremum of the given type: for signed types (that
allow negatives), the minimum possible value is used as blank, and for
unsigned types (that only accept positives), the maximum possible value
is used.  To be generic and easy to read/write, we define a macro for
each of these blank values; we strongly encourage you to only use these
macros and to never make any assumption about the value of a type's
blank.

   The IEEE NaN value is defined to fail on any comparison, so if you
are dealing with floating point types, you cannot use equality (a NaN
will _not_ be equal to a NaN).  If you know your dataset is floating
point, you can use the ‘isnan’ function in C's ‘math.h’ header.
For a description of numeric data types see *note Numeric data types::.
For the constants identifying integers, please see *note Library data
types::.
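
   For example, assuming ‘math.h’ (for ‘isnan’) and ‘gnuastro/blank.h’
have been included, a floating point blank can be checked like this:

     float f=GAL_BLANK_FLOAT32;             /* An IEEE NaN value.    */
     int isblank = isnan(f);                /* Non-zero: is blank.   */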

 -- Global integer: GAL_BLANK_UINT8
     Blank value for an unsigned, 8-bit integer.

 -- Global integer: GAL_BLANK_INT8
     Blank value for a signed, 8-bit integer.

 -- Global integer: GAL_BLANK_UINT16
     Blank value for an unsigned, 16-bit integer.

 -- Global integer: GAL_BLANK_INT16
     Blank value for a signed, 16-bit integer.

 -- Global integer: GAL_BLANK_UINT32
     Blank value for an unsigned, 32-bit integer.

 -- Global integer: GAL_BLANK_INT32
     Blank value for a signed, 32-bit integer.

 -- Global integer: GAL_BLANK_UINT64
     Blank value for an unsigned, 64-bit integer.

 -- Global integer: GAL_BLANK_INT64
     Blank value for a signed, 64-bit integer.

 -- Global integer: GAL_BLANK_INT
     Blank value for ‘int’ type (‘int16_t’ or ‘int32_t’ depending on
     the system).

 -- Global integer: GAL_BLANK_UINT
     Blank value for ‘unsigned int’ type (‘uint16_t’ or ‘uint32_t’
     depending on the system).

 -- Global integer: GAL_BLANK_LONG
     Blank value for ‘long’ type (‘int32_t’ or ‘int64_t’ in 32-bit or
     64-bit systems).

 -- Global integer: GAL_BLANK_ULONG
     Blank value for ‘unsigned long’ type (‘uint32_t’ or ‘uint64_t’ in
     32-bit or 64-bit systems).

 -- Global integer: GAL_BLANK_SIZE_T
     Blank value for ‘size_t’ type (‘uint32_t’ or ‘uint64_t’ in 32-bit
     or 64-bit systems).

 -- Global integer: GAL_BLANK_FLOAT32
     Blank value for a single precision, 32-bit floating point type
     (IEEE NaN value).

 -- Global integer: GAL_BLANK_FLOAT64
     Blank value for a double precision, 64-bit floating point type
     (IEEE NaN value).

 -- Global integer: GAL_BLANK_STRING
     Blank value for string types (this is itself a string, it is not
     the ‘NULL’ pointer).

The functions below can be used to work with blank pixels.

 -- Function:
          void
          gal_blank_write (void *pointer, uint8_t type)
     Write the blank value for the given ‘type’ into the space that
     ‘pointer’ points to.  This can be used when the space is already
     allocated (for example, one element in an array or a statically
     allocated variable).

 -- Function:
          void *
          gal_blank_alloc_write (uint8_t type)
     Allocate the space required to keep the blank for the given data
     type ‘type’, write the blank value into it and return the pointer
     to it.

 -- Function:
          void
          gal_blank_initialize (gal_data_t *input)
     Initialize all the elements in the ‘input’ dataset to the blank
     value that corresponds to its type.  If ‘input’ is not a string,
     and is a tile over a larger dataset, only the region that the tile
     covers will be set to blank.  For strings, the full dataset will be
     initialized.

 -- Function:
          void
          gal_blank_initialize_array (void *array, size_t size, uint8_t
          type)
     Initialize all the elements in the ‘array’ to the blank value that
     corresponds to its type (identified with ‘type’), assuming the
     array has ‘size’ elements.
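
     For example, a minimal sketch (the array size, type and values are
     arbitrary choices for illustration; ‘GAL_TYPE_FLOAT32’ is from
     *note Library data types::):

          #include <stdio.h>
          #include <gnuastro/type.h>    /* GAL_TYPE_FLOAT32        */
          #include <gnuastro/blank.h>

          int
          main(void)
          {
            size_t i, size=5;
            float arr[5];

            /* Set every element to the type's blank value (NaN).   */
            gal_blank_initialize_array(arr, size, GAL_TYPE_FLOAT32);

            /* 'gal_blank_write' works on a single, already-allocated
               space (re-writing the first element here is redundant,
               it only demonstrates the call).                       */
            gal_blank_write(&arr[0], GAL_TYPE_FLOAT32);

            arr[2]=1.23f;               /* Give one element a value. */
            for(i=0;i<size;++i) printf("%zu: %f\n", i, arr[i]);
            return 0;
          }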

 -- Function:
          char *
          gal_blank_as_string (uint8_t type, int width)
     Write the blank value for the given data type ‘type’ into a string
     and return it.  The space for the string is dynamically allocated
     so it must be freed after you are done with it.  If ‘width!=0’,
     then the final string will be padded with white space characters to
     have the requested width if it is smaller.

 -- Function:
          int
          gal_blank_is (void *pointer, uint8_t type)
     Return 1 if the contents of ‘pointer’ (assuming a type of ‘type’)
     is blank.  Otherwise, return 0.  Note that this function only works
     on one element of the given type.  So if ‘pointer’ is an array,
     only its first element will be checked.  Therefore for strings, the
     type of ‘pointer’ is assumed to be ‘char *’.  To check if an
     array/dataset has blank elements or to find which elements in an
     array are blank, you can use ‘gal_blank_present’ or
     ‘gal_blank_flag’ respectively (described below).
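
     For example, a minimal sketch of checking a single element and
     printing its type's blank value as a string (the variable and its
     type are arbitrary choices for illustration; ‘GAL_TYPE_INT32’ is
     from *note Library data types::):

          #include <stdio.h>
          #include <stdlib.h>           /* free                    */
          #include <stdint.h>           /* int32_t                 */
          #include <gnuastro/type.h>    /* GAL_TYPE_INT32          */
          #include <gnuastro/blank.h>

          int
          main(void)
          {
            int32_t value=GAL_BLANK_INT32;
            char *bstr=gal_blank_as_string(GAL_TYPE_INT32, 0);

            if( gal_blank_is(&value, GAL_TYPE_INT32) )
              printf("'value' is blank (%s)\n", bstr);

            free(bstr);    /* The string was dynamically allocated. */
            return 0;
          }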

 -- Function:
          int
          gal_blank_present (gal_data_t *input, int updateflag)
     Return 1 if the dataset has a blank value and zero if it does not.
     Before checking the dataset, this function will look at ‘input’'s
     flags.  If the ‘GAL_DATA_FLAG_BLANK_CH’ bit of ‘input->flag’ is on,
     this function will not do any check and will just use the
     information in the flags.  This can greatly speed up processing
     when a dataset needs to be checked multiple times.

     When the dataset's flags were not used and ‘updateflag’ is
     non-zero, this function will set the flags appropriately to avoid
     having to re-check the dataset in future calls.  When
     ‘updateflag==0’, this function has no side-effects on the dataset:
     it will not toggle the flags.

     If you want to re-check a dataset with the blank-value-check flag
     already set (for example, if you have made changes to it), then
     explicitly set the ‘GAL_DATA_FLAG_BLANK_CH’ bit to zero before
     calling this function.  When there are no other flags, you can just
     set the flags to zero (‘input->flag=0’), otherwise you can use this
     expression:

          input->flag &= ~GAL_DATA_FLAG_BLANK_CH;
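
     For example, the hypothetical helper below forces a fresh check
     after the dataset has been modified (‘gal_data_t’ and the flag
     macros are declared in ‘gnuastro/data.h’, see *note Generic data
     container::):

          #include <stdio.h>
          #include <gnuastro/data.h>
          #include <gnuastro/blank.h>

          void
          recheck_blank(gal_data_t *input)
          {
            /* Clear the check-bit so the dataset is actually parsed. */
            input->flag &= ~GAL_DATA_FLAG_BLANK_CH;

            /* '1': update the flags so later calls can reuse them.   */
            if( gal_blank_present(input, 1) )
              printf("dataset contains blank elements\n");
          }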

 -- Function:
          size_t
          gal_blank_number (gal_data_t *input, int updateflag)
     Return the number of blank elements in ‘input’.  If
     ‘updateflag!=0’, then the dataset's blank-related bit-flags will
     be updated.  See the description of ‘gal_blank_present’ (above)
     for more on these flags.  If ‘input==NULL’, then this function
     will return ‘GAL_BLANK_SIZE_T’.

 -- Function:
          gal_data_t *
          gal_blank_flag (gal_data_t *input)
     Return a "flag" dataset with the same size as the input, but with
     an ‘uint8_t’ type that has a value of 1 for data elements that are
     blank and 0 for those that are not.

 -- Function:
          gal_data_t *
          gal_blank_flag_not (gal_data_t *input)
     Return a "flag" dataset with the same size as the input, but with
     an ‘uint8_t’ type that has a value of 1 for data elements that are
     _not_ blank and 0 for those that are blank.

 -- Function:
          size_t *
          gal_blank_not_minmax_coords (gal_data_t *input)
     Find the minimum and maximum coordinates of the non-blank regions
     within the input dataset.  The coordinates are in C order: starting
     from 0, and with the slowest dimension being first.  The output is
     an allocated array (that should be freed later) with $2\times N$
     elements; where $N$ is the number of dimensions.  The first two
     elements contain the minimum and maximum of regions containing
     non-blank elements along the 0-th dimension (the slowest), the
     second two elements contain the next dimension's extrema; and so
     on.
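
     For example, the returned array can be parsed with a hypothetical
     helper like this (assuming ‘input’ is an existing dataset; see
     *note Generic data container:: for the ‘ndim’ element):

          #include <stdio.h>
          #include <stdlib.h>           /* free                    */
          #include <gnuastro/data.h>
          #include <gnuastro/blank.h>

          void
          print_nonblank_extent(gal_data_t *input)
          {
            size_t d, *mm=gal_blank_not_minmax_coords(input);

            /* Two elements (minimum, maximum) per dimension, slowest
               dimension first.                                      */
            for(d=0; d<input->ndim; ++d)
              printf("dim %zu: min=%zu, max=%zu\n",
                     d, mm[2*d], mm[2*d+1]);

            free(mm);     /* The array was allocated by the call.   */
          }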

 -- Function:
          gal_data_t *
          gal_blank_trim (gal_data_t *input, int inplace)
     Trim all the outer layers of blank values from the input dataset.
     For example, in a 2D image, "layers" would correspond to columns
     or rows that are fully blank and touching the edge of the image.
     For a more complete description, see the description of the ‘trim’
     operator in *note Dimensionality changing operators::.

 -- Function:
          void
          gal_blank_flag_apply (gal_data_t *input, gal_data_t *flag)
     For every element of ‘flag’ that is non-zero and non-blank, set
     the corresponding element of ‘input’ to blank.  ‘flag’ has to
     have an unsigned 8-bit type and be the same size as ‘input’.
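
     For example, a hypothetical helper to propagate the blank pattern
     of one dataset onto another of the same size (‘gal_blank_flag’ is
     described above; ‘gal_data_free’ is from *note Dataset
     allocation::):

          #include <gnuastro/data.h>
          #include <gnuastro/blank.h>

          void
          propagate_blanks(gal_data_t *reference, gal_data_t *target)
          {
            /* 1 where 'reference' is blank, 0 elsewhere ('uint8_t'). */
            gal_data_t *flag=gal_blank_flag(reference);

            /* Blank the same elements in 'target'.                   */
            gal_blank_flag_apply(target, flag);

            gal_data_free(flag);
          }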

 -- Function:
          void
          gal_blank_flag_remove (gal_data_t *input, gal_data_t *flag)
     Remove all elements within ‘input’ that are flagged, convert it
     to a 1D dataset and adjust the size properly (the number of
     non-flagged elements).  In practice this function does not
     ‘realloc’ the input array (see ‘gal_blank_remove_realloc’ for
     shrinking/re-allocating also), it just shifts the flagged
     elements to the end and adjusts the size elements of the
     ‘gal_data_t’, see *note Generic data container::.

     Note that elements that are blank, but not flagged will not be
     removed.  This function will only remove flagged elements.

     If all the elements were flagged, then ‘input->size’ will be zero.
     This is thus a good parameter to check after calling this function
     to see if there actually were any non-flagged elements in the input
     or not and take the appropriate measure.  This check is highly
     recommended because it will avoid strange bugs in later steps.

 -- Function:
          void
          gal_blank_remove (gal_data_t *input)
     Remove blank elements from a dataset, convert it to a 1D dataset,
     adjust the size properly (the number of non-blank elements), and
     toggle the blank-value-related bit-flags.  In practice this
     function does not ‘realloc’ the input array (see
     ‘gal_blank_remove_realloc’ for shrinking/re-allocating also), it
     just shifts the blank elements to the end and adjusts the size
     elements of the ‘gal_data_t’, see *note Generic data container::.

     If all the elements were blank, then ‘input->size’ will be zero.
     This is thus a good parameter to check after calling this function
     to see if there actually were any non-blank elements in the input
     or not and take the appropriate measure.  This check is highly
     recommended because it will avoid strange bugs in later steps.
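
     For example, a hypothetical helper implementing the recommended
     check:

          #include <stdio.h>
          #include <stdlib.h>           /* exit, EXIT_FAILURE      */
          #include <gnuastro/data.h>
          #include <gnuastro/blank.h>

          void
          remove_blanks_or_abort(gal_data_t *input)
          {
            gal_blank_remove(input);
            if(input->size==0)
              {
                fprintf(stderr, "all elements were blank\n");
                exit(EXIT_FAILURE);
              }
          }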

 -- Function:
          void
          gal_blank_remove_realloc (gal_data_t *input)
     Similar to ‘gal_blank_remove’, but also shrinks/re-allocates the
     dataset's allocated memory.

 -- Function:
          gal_data_t *
          gal_blank_remove_rows (gal_data_t *columns, gal_list_sizet_t
          *column_indexs, int onlydim0)
     Remove (in place) any row that has at least one blank value in any
     of the input columns and return a "flag" dataset (that should be
     freed later).  The input ‘columns’ is a list of ‘gal_data_t’s (see
     *note List of gal_data_t::).  When ‘onlydim0!=0’ the vector columns
     (with 2 dimensions) will not be checked for the presence of blank
     values.

     After this function, all the elements in ‘columns’ will still have
     the same size as each other, but if any of the searched columns has
     blank elements, all their sizes will decrease together.

     The returned flag dataset has the same size as the original input
     dataset, with a type of ‘uint8_t’.  Every row that has been removed
     from the original dataset has a value of 1, and the rest have a
     value of 0.

     When ‘column_indexs!=NULL’, only the columns whose index (counting
     from zero) is in ‘column_indexs’ will be used to check for blank
     values (see *note List of size_t::).  Therefore, if you want to
     check all columns, just set this to ‘NULL’.  In any case (no matter
     which columns are checked for blanks), the selected rows from all
     columns will be removed.
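
     For example, a hypothetical helper that removes rows with blanks
     in only the first and third columns of an existing column list
     (‘gal_list_sizet_add’ and ‘gal_list_sizet_free’ are from *note
     List of size_t::; ‘gal_data_free’ is from *note Dataset
     allocation::):

          #include <gnuastro/data.h>
          #include <gnuastro/list.h>
          #include <gnuastro/blank.h>

          void
          clean_rows(gal_data_t *columns)
          {
            gal_data_t *flag;
            gal_list_sizet_t *check=NULL;

            /* Only check columns 0 and 2 for blank values. */
            gal_list_sizet_add(&check, 0);
            gal_list_sizet_add(&check, 2);

            /* Rows with blanks in the checked columns are removed
               from ALL columns; the returned flag marks the removed
               rows.                                                */
            flag=gal_blank_remove_rows(columns, check, 0);

            gal_list_sizet_free(check);
            gal_data_free(flag);
          }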