From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 5005d2ce 2/2: Book: unified format for citing arXiv papers
Date: Tue, 16 Jan 2024 16:36:49 -0500 (EST)

branch: master
commit 5005d2ce6ee2d851bf3f3c4b83f76cf6927f1ad8
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Book: unified format for citing arXiv papers
    
    Until now, within the book we used multiple ways to cite arXiv papers
    (sometimes printing the arXiv ID, sometimes the year and sometimes the full
    author+year).
    
    With this commit, a single format is used for all such links: only the
    year is a link (as in published astronomical papers).
---
 doc/gnuastro.texi | 149 ++++++++++++++++++++++++++----------------------------
 1 file changed, 73 insertions(+), 76 deletions(-)

diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index da16f7d8..85fa7e31 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -1138,7 +1138,7 @@ It can also build the profiles on an over-sampled image.
 
 @item NoiseChisel
 (@file{astnoisechisel}, see @ref{NoiseChisel}) Detect signal in noise.
-It uses a technique to detect very faint and diffuse, irregularly shaped 
signal in noise (galaxies in the sky), using thresholds that are below the Sky 
value, see @url{http://arxiv.org/abs/1505.01664, arXiv:1505.01664}.
+It uses a technique to detect very faint and diffuse, irregularly shaped 
signal in noise (galaxies in the sky), using thresholds that are below the Sky 
value, see Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 
 @item Query
 (@file{astquery}, see @ref{Query}) High-level interface to query pre-defined 
remote, or external databases, and directly download the required sub-tables on 
the command-line.
@@ -1258,7 +1258,7 @@ This kind of subjective experience is prone to serious 
misunderstandings about t
 This attitude is further encouraged through non-free 
software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly 
written (or non-existent) scientific software manuals, and non-reproducible 
papers@footnote{Where the authors omit many of the analysis/processing 
``details'' from the paper by arguing that they would make the paper too 
long/unreadable.
 However, software engineers have been dealing with such issues for a long time.
 There are thus software management solutions that allow us to supplement 
papers with all the details necessary to exactly reproduce the result.
-For example, see Akhlaghi et al. (2021, 
@url{https://arxiv.org/abs/2006.03018,arXiv:2006.03018}).}.
+For example, see Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018,2021}.}.
 This approach to scientific software and methods only helps in producing 
dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in 
his personal knowledge and authority}''@footnote{Karl Popper. The logic of 
scientific discovery. 1959.
 Larger quote is given at the start of the PDF (for print) version of this 
book.}.
 
@@ -1324,7 +1324,7 @@ The Actual Scholarship is the complete software 
development environment and the
 Today, the quality of the source code that goes into a scientific result (and 
the distribution of that code) is as critical to scientific vitality and 
integrity as the quality of the written language/English used in 
publishing/distributing its paper.
 A scientific paper will not even be reviewed by any respectable journal if it 
is written in a poor language/English.
 A similar level of quality assessment is thus increasingly becoming necessary 
regarding the codes/methods used to derive the results of a scientific paper.
-For more on this, please see Akhlaghi et al. (2021) at 
@url{https://arxiv.org/abs/2006.03018,arXiv:2006.03018}).
+For more on this, please see Akhlaghi et al. 
@url{https://arxiv.org/abs/2006.03018,2021}.
 
 @cindex Ken Thomson
 @cindex Stroustrup, Bjarne
@@ -1557,7 +1557,7 @@ In other words,``the whole system is basically GNU with 
Linux loaded''.
 
 You can replace the Linux kernel and still have the GNU shell and higher-level 
utilities.
 For example, using the ``Windows Subsystem for Linux'', you can use almost all 
GNU tools without the original Linux kernel, but using the host Windows 
operating system, as in @url{https://ubuntu.com/wsl}.
-Alternatively, you can build a fully functional GNU-based working environment 
on a macOS or BSD-based operating system (using the host's kernel and C 
compiler), for example, through projects like Maneage (see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. 2021}, and its Appendix 
C with all the GNU software tools that is exactly reproducible on a macOS also).
+Alternatively, you can build a fully functional GNU-based working environment 
on a macOS or BSD-based operating system (using the host's kernel and C 
compiler), for example, through projects like Maneage; see Akhlaghi et al. 
@url{https://arxiv.org/abs/2006.03018,2021}, in particular its Appendix C 
(with all the GNU software tools), which is exactly reproducible on macOS also.
 
 Therefore to acknowledge GNU's instrumental role in the creation and usage of 
the Linux kernel and the operating systems that use it, we should call these 
operating systems ``GNU/Linux''.
 
@@ -2763,7 +2763,7 @@ The format of their expected value is also shown as 
@code{FLT}, @code{INT} or @c
 
 You can see the values of all options that need one with the 
@option{--printparams} option (or its short format: @code{-P}).
 @option{--printparams} is common to all programs (see @ref{Common options}).
-You can see the default cosmological parameters (from the 
@url{https://arxiv.org/abs/1807.06209, Plank 2018 results}) under the @code{# 
Input:} title:
+You can see the default cosmological parameters, from the Planck 
collaboration @url{https://arxiv.org/abs/1807.06209,2020}, under the @code{# 
Input:} title:
 
 @example
 $ astcosmiccal -P
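
## Any of these defaults can be overridden on the command-line; a sketch
## (the value given to --H0 below is arbitrary):
$ astcosmiccal -P --H0=70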
@@ -3102,7 +3102,7 @@ In terms of the number of exposures (and thus correlated 
noise), the XDF dataset
 Therefore the default parameters need to be slightly customized.
 It is the result of warping and adding roughly 80 separate exposures which can 
create strong correlated noise/smoothing.
 In common surveys the number of exposures is usually 10 or less.
-See Figure 2 of @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]} and 
the discussion on @option{--detgrowquant} there for more on how NoiseChisel 
``grow''s the detected objects and the patterns caused by correlated noise.
+See Figure 2 of Akhlaghi @url{https://arxiv.org/abs/1909.11230, 2019} and the 
discussion on @option{--detgrowquant} there for more on how NoiseChisel 
``grow''s the detected objects and the patterns caused by correlated noise.
 
 Let's tweak NoiseChisel's configuration a little to get a better result on 
this dataset.
 Do not forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
@@ -3119,7 +3119,7 @@ $ astnoisechisel --help | grep check
 @end example
 
 Let's check the overall detection process to get a better feeling of what 
NoiseChisel is doing with the following command.
-To learn the details of NoiseChisel in more detail, please see 
@ref{NoiseChisel}, @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
+To learn NoiseChisel's operation in more detail, please see 
@ref{NoiseChisel}, Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} and Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
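
## Flip through every extension of the check file (a sketch, assuming
## NoiseChisel's default "_detcheck.fits" output suffix):
$ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit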
@@ -3192,7 +3192,7 @@ In this particular case, try @option{--dthresh=0.2} and 
you will see that the to
 So for this type of very deep HST images, we should set @option{--dthresh=0.1}.
 @end cartouche
 
-As discussed in Section 3.1.5 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}, the signal-to-noise ratio of pseudo-detections 
are critical to identifying/removing false detections.
+As discussed in Section 3.1.5 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}, the signal-to-noise ratios of 
pseudo-detections are critical to identifying/removing false detections.
 For an optimal detection they are very important to get right (where you want 
to detect the faintest and smallest objects in the image successfully).
 Let's have a look at their signal-to-noise distribution with 
@option{--checksn}.
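
For example, a sketch combining the two options discussed above (tune the 
@option{--dthresh} value to your own dataset):

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --dthresh=0.1 --checksn
@end example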
 
@@ -4355,7 +4355,7 @@ $ make -j12
 @noindent
 Did you notice how much faster this one was?
 When possible, it is always very helpful to do your analysis in parallel.
-You can build very complex workflows with Make, for example, see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. (2021)} so it is worth 
spending some time to master.
+You can build very complex workflows with Make, for example, see Akhlaghi et 
al. @url{https://arxiv.org/abs/2006.03018,2021}, so it is worth spending some 
time to master.
 
 @node FITS images in a publication, Marking objects for publication, Reddest 
clumps cutouts and parallelization, General program usage tutorial
 @subsection FITS images in a publication
@@ -5270,7 +5270,7 @@ Therefore, when there are large objects in the dataset, 
@strong{the best place}
 When dominated by the background, noise has a symmetric distribution.
 However, signal is not symmetric (we do not have negative signal).
 Therefore when non-constant@footnote{by constant, we mean that it has a single 
value in the region we are measuring.} signal is present in a noisy dataset, 
the distribution will be positively skewed.
-For a demonstration, see Figure 1 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
+For a demonstration, see Figure 1 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 This skewness is a good measure of how much faint signal we have in the 
distribution.
 The skewness can be accurately measured by the difference in the mean and 
median (assuming no strong outliers): the more distant they are, the more 
skewed the dataset is.
 This important concept will be discussed more extensively in the next section 
(@ref{Skewness caused by signal and its measurement}).
@@ -5278,7 +5278,7 @@ This important concept will be discussed more extensively 
in the next section (@
 However, skewness is only a proxy for signal when the signal has structure 
(varies per pixel).
 Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the constant signal's effect is just to shift the symmetric center 
of the noise distribution to the positive and there will not be any skewness 
(major difference between the mean and median).
 This positive@footnote{In processed images, where the Sky value can be 
over-estimated, this constant shift can be negative.} shift that preserves the 
symmetric distribution is the Sky value.
-When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example, see Figure 11 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example, see Figure 11 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 
 To make this very large diffuse/flat signal detectable, you will therefore 
need a larger tile to contain a larger change in the values within it (and 
improve number statistics, for less scatter when measuring the mean and median).
 So let's play with the tessellation a little to see how it affects the result.
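
For example, a sketch with a larger tile (this size is only a starting point 
to experiment with; depending on your file you may also need @option{--hdu}):

@example
$ astnoisechisel r.fits --tilesize=100,100 --checkqthresh
@end example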
@@ -5309,7 +5309,7 @@ $ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to 
fit
 @end example
 
 The first extension (called @code{CONVOLVED}) of @file{r_qthresh.fits} is the 
convolved input image where the threshold(s) is(are) defined (and later applied 
to).
-For more on the effect of convolution and thresholding, see Sections 3.1.1 and 
3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+For more on the effect of convolution and thresholding, see Sections 3.1.1 and 
3.1.2 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 The second extension (@code{QTHRESH_ERODE}) has a blank/white value for all 
the pixels of any tile that was identified as having significant signal.
 The other tiles have the measured threshold over them.
 The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are 
the other two quantile thresholds that are necessary in NoiseChisel's later 
steps.
@@ -5407,7 +5407,7 @@ Looking at the @code{OPENED-AND-LABELED} extension again, 
we see that the main/l
 While the immediately-outer connected regions are still present, they have 
decreased dramatically, so we can pass this step.
 
 After the @code{OPENED-AND-LABELED} extension, NoiseChisel goes onto finding 
false detections using the undetected pixels.
-The process is fully described in Section 3.1.5. (Defining and Removing False 
Detections) of arXiv:@url{https://arxiv.org/pdf/1505.01664.pdf,1505.01664}.
+The process is fully described in Section 3.1.5 (Defining and Removing False 
Detections) of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 Please compare the extensions to what you read there and things will be very 
clear.
 In the last HDU (@code{DETECTION-FINAL}), we have the final detected pixels 
that will be used to estimate the Sky and its Standard deviation.
 We see that the main detection has indeed been detected very far out, so let's 
see how the full NoiseChisel will estimate the Sky and its standard deviation 
(by removing @code{--checkdetection}):
@@ -5427,7 +5427,7 @@ This suggests that there is still undetected signal and 
that we can still dig de
 With the @option{--detgrowquant} option, NoiseChisel will ``grow'' the 
detections into the noise.
 Its value is the ultimate limit of the growth in units of quantile (between 0 
and 1).
 Therefore @option{--detgrowquant=1} means no growth and 
@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is 
usually too much and will cover the whole image!).
-See Figure 2 of arXiv:@url{https://arxiv.org/pdf/1909.11230.pdf,1909.11230} 
for more on this option.
+See Figure 2 of Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019} for 
more on this option.
 Try running the previous command with various values (from 0.6 to higher 
values) to see this option's effect on this dataset.
 For this particularly huge galaxy (with signal that extends very gradually 
into the noise), we will set it to @option{0.75}:
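
@example
## A sketch of such a call (keeping the larger tile size from above):
$ astnoisechisel r.fits --tilesize=100,100 --detgrowquant=0.75
@end example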
 
@@ -5452,13 +5452,13 @@ $ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom 
to fit
 
 When looking at the raw input image (which is very ``shallow'': less than a 
minute exposure!), you do not see anything so far out of the galaxy.
 You might just think to yourself that ``this is all noise, I have just dug too 
deep and I'm following systematics''!
-If you feel like this, have a look at the deep images of this system in 
@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour 
deep image of this system (with a 12-inch telescope):
+If you feel like this, have a look at the deep images of this system in 
Watkins et al. @url{https://arxiv.org/abs/1501.04599,2015}, or a 12 hour deep 
image of this system (with a 12-inch telescope):
 @url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from 
this Reddit discussion: 
@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
 In these deeper images you clearly see how the outer edges of the M51 group 
follow this exact structure; below, in @ref{Achieved surface brightness level}, 
we will measure the exact level.
 
 As the gradient in the @code{SKY} extension shows, and the deep images cited 
above confirm, the galaxy's signal extends even beyond this.
 But this is already far deeper than what most (if not all) other tools can 
detect.
-Therefore, we will stop configuring NoiseChisel at this point in the tutorial 
and let you play with the other options a little more, while reading more about 
it in the papers (@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}) and 
@ref{NoiseChisel}.
+Therefore, we will stop configuring NoiseChisel at this point in the tutorial 
and let you play with the other options a little more, while reading more about 
it in the papers: Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} and Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}, and @ref{NoiseChisel}.
 When you do find a better configuration feel free to contact us for feedback.
 Do not forget that good data analysis is an art, so like a sculptor, master 
your chisel for a good result.
 
@@ -6167,7 +6167,7 @@ See @ref{Segmentation and making a catalog} and 
@ref{Segment} for more on using
 @section Building the extended PSF
 
 Deriving the extended PSF of an image is very important in many aspects of the 
analysis of the objects within it.
-Gnuastro has a set of installed scripts, designed to simplify the process 
following the recipe of Infante-Sainz et al. (2020, 
@url{https://arxiv.org/abs/1911.01430}); for more, see @ref{PSF construction 
and subtraction}.
+Gnuastro has a set of installed scripts, designed to simplify the process 
following the recipe of Infante-Sainz et al. 
@url{https://arxiv.org/abs/1911.01430,2020}; for more, see @ref{PSF 
construction and subtraction}.
 An overview of the process is given in @ref{Overview of the PSF scripts}.
 
 @menu
@@ -6249,7 +6249,7 @@ To see this saturation noise run the last command again 
and in SAO DS9, set the
 You will see the noisy saturation pixels at the center of the star in red.
 
 This noise-at-the-peak disrupts Segment's assumption to expand clumps from a 
local maximum: each noisy peak is being treated as a separate local maximum and 
thus a separate clump.
-For more on how Segment defines clumps, see Section 3.2.1 and Figure 8 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi @& Ichikawa 2015}.
+For more on how Segment defines clumps, see Section 3.2.1 and Figure 8 of 
Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 To have the center identified as a single clump, we should mask these 
saturated pixels in a way that suits Segment's non-parametric methodology.
 
 First we need to find the saturation level!
@@ -7883,7 +7883,7 @@ This type of data is generically (also outside of 
astronomy) known as Hyperspect
 For example high-level data products of Integral Field Units (IFUs) like 
MUSE@footnote{@url{https://en.wikipedia.org/wiki/Multi-unit_spectroscopic_explorer}}
 in the optical, 
ACIS@footnote{@url{https://en.wikipedia.org/wiki/Advanced_CCD_Imaging_Spectrometer}}
 in the X-ray, or in the radio where most data are 3D cubes.
 
 @cindex Abell 370 galaxy cluster
-In this tutorial, we'll use a small crop of a reduced deep MUSE cube centered 
on the @url{https://en.wikipedia.org/wiki/Abell_370, Abell 370} galaxy cluster 
from the Pilot-WINGS survey; see @url{https://arxiv.org/abs/2202.04663, 
Lagattuta et al. 2022}.
+In this tutorial, we'll use a small crop of a reduced deep MUSE cube centered 
on the @url{https://en.wikipedia.org/wiki/Abell_370, Abell 370} galaxy cluster 
from the Pilot-WINGS survey; see Lagattuta et al. 
@url{https://arxiv.org/abs/2202.04663,2022}.
 Abell 370 has a spiral galaxy in its background that is stretched due to the 
cluster's gravitational potential to create a beautiful arch.
 If you haven't seen it yet, have a look at some of its images in the Wikipedia 
link above before continuing.
 
@@ -8144,7 +8144,7 @@ Looking at the filtered cube above, and sliding through 
the different wavelength
 This is expected because each pixel's value is now calculated from 100 others 
(along the third dimension)!
 Using the same steps as @ref{Viewing spectra and redshifted lines}, plot the 
spectra of the brightest galaxy.
 Then, have a look at its spectra.
-You see that the emission lines have been significantly smoothed out to become 
almost@footnote{For more on why Sigma-clipping is only a crude solution to 
background removal, see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa 2015}.} invisible.
+You see that the emission lines have been significantly smoothed out to become 
almost@footnote{For more on why Sigma-clipping is only a crude solution to 
background removal, see Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.} invisible.
 
 You can now subtract this ``continuum'' cube from the input cube to create the 
emission-line cube.
 In fact, as you see below, we can do it in a single Arithmetic command 
(blending the filtering and subtraction in one command).
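
If you have already written the filtered (continuum) cube to its own file, the 
subtraction step alone is a single call like this sketch (the file names here 
are hypothetical; @option{-g1} tells Arithmetic to use HDU 1 of both inputs):

@example
$ astarithmetic cube.fits continuum.fits - -g1 \
                --output=emission-lines.fits
@end example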
@@ -8972,7 +8972,7 @@ But we saw that showing the brighter and fainter parts of 
the galaxy in a single
 In this section, we will use Gnuastro's @command{astscript-color-faint-gray} 
installed script to address this problem and create images which visualize a 
major fraction of the contents of our astronomical data.
 
 This script aims to solve the problems mentioned in the previous section.
-See Infante-Sainz et al. (2024, @url{https://arxiv.org/abs/2401.03814}), which 
first introduced this script, for examples of the final images we will be 
producing in this tutorial.
+See Infante-Sainz et al. @url{https://arxiv.org/abs/2401.03814,2024}, which 
first introduced this script, for examples of the final images we will be 
producing in this tutorial.
 This script uses a non-linear transformation to modify the bright input values 
before combining them to produce the color image.
 Furthermore, for the faint regions of the image, it will use grayscale and 
avoid color altogether (as we saw, colored noise is not too nice to look at!).
 The faint regions are also inverted: so the brightest pixel in the faint 
(black-and-white or grayscale) region is black and the faintest pixels will be 
white.
@@ -9303,7 +9303,7 @@ In certain situations, the combination of channels may 
not have a traditional co
 For instance, combining an X-ray channel with an optical filter and a 
far-infrared image can complicate the interpretation in terms of human 
understanding of color.
 But the physical interpretation remains valid as the different channels 
(colors in the output) represent different physical phenomena of astronomical 
sources.
 Another easier example is the use of narrow-band filters such as the H-alpha 
of J-PLUS survey.
-This is shown in the Bottom-right panel of Figure 1 by Infante-Sainz et al. 
(2024, @url{https://arxiv.org/abs/2401.03814}), in this case the G channel has 
been substituted by the image corresponding to the H-alpha filter to show the 
star formation regions.
+This is shown in the bottom-right panel of Figure 1 by Infante-Sainz et al. 
@url{https://arxiv.org/abs/2401.03814,2024}; in this case the G channel has 
been substituted by the image corresponding to the H-alpha filter to show the 
star formation regions.
 Therefore, please use the weights with caution, as they can significantly 
affect the output and misinform your readers/viewers.
 
 If you do apply weights, be sure to report the weights in the caption of the 
image (beside the filters that were used for each channel).
@@ -9344,7 +9344,7 @@ Two additional options are available to smooth different 
regions by convolving w
 The value specified for these options represents the full width at half 
maximum of the Gaussian kernel.
 
 Congratulations!
-By following the tutorial up to this point, we have been able to reproduce 
three images of Infante-Sainz et al. (2024, 
@url{https://arxiv.org/abs/2401.03814}).
+By following the tutorial up to this point, we have been able to reproduce 
three images of Infante-Sainz et al. 
@url{https://arxiv.org/abs/2401.03814,2024}.
 You can see the commands that were used to generate them within the 
reproducible source of that paper at 
@url{https://codeberg.org/gnuastro/papers/src/branch/color-faint-gray}.
 Remember that this paper is reproducible through Maneage, so you can explore 
and build the entire paper by yourself.
 For more on Maneage, see Akhlaghi et al. 
@url{https://ui.adsabs.harvard.edu/abs/2021CSE....23c..82A, 2021}.
@@ -9352,7 +9352,7 @@ For more on Maneage, see Akhlaghi et al. 
@url{https://ui.adsabs.harvard.edu/abs/
 This tutorial provided a general overview of the various options to construct 
a color image from three different FITS images using the 
@command{astscript-color-faint-gray} script.
 Keep in mind that the optimal parameters for generating the best color image 
depend on your specific goals and the quality of your input images.
 We encourage you to follow this tutorial with the provided J-PLUS images and 
later with your own dataset.
-See @ref{Color images with gray faint regions} for more information, and 
please consider citing Infante-Sainz et al. (2024, 
@url{https://arxiv.org/abs/2401.03814}) if you use this script in your work 
(the full Bib@TeX{} entry of this paper will be given to you with the 
@option{--cite} option).
+See @ref{Color images with gray faint regions} for more information, and 
please consider citing Infante-Sainz et al. 
@url{https://arxiv.org/abs/2401.03814,2024} if you use this script in your work 
(the full Bib@TeX{} entry of this paper will be given to you with the 
@option{--cite} option).
 
 @node Zero point of an image, Pointing pattern design, Color images with full 
dynamic range, Tutorials
 @section Zero point of an image
@@ -14989,7 +14989,7 @@ In the process, you will confront many issues that have 
to be corrected (bugs in
 Make is a wonderful tool to manage this style of development.
 
 Besides the raw data analysis pipeline, Make has been used for producing 
reproducible papers, for example, see 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} 
of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
-In fact the NoiseChisel paper's Make-based workflow was the foundation of a 
parallel project called @url{http://maneage.org,Maneage} (@emph{Man}aging data 
lin@emph{eage}): @url{http://maneage.org} that is described more fully in 
Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018, 2021}.
+In fact, the NoiseChisel paper's Make-based workflow was the foundation of a 
parallel project called @url{http://maneage.org,Maneage} (@emph{Man}aging data 
lin@emph{eage}), which is described more fully in Akhlaghi et al. 
@url{https://arxiv.org/abs/2006.03018,2021}.
 Therefore, it is a very useful tool for complex scientific workflows.
 
 @cindex GNU Make
@@ -15721,7 +15721,7 @@ With version control, experimentation in the project's 
analysis is greatly facil
 
 One important feature of version control is that the research result (FITS 
image, table, report or paper) can be stamped with the unique commit 
information that produced it.
 This information will enable you to exactly reproduce that same result later, 
even if you have made changes/progress.
-For one example of a research paper's reproduction pipeline, please see the 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of 
the @url{https://arxiv.org/abs/1505.01664, paper} describing @ref{NoiseChisel}.
+For one example of a research paper's reproduction pipeline, please see the 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of 
Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} describing 
@ref{NoiseChisel}.
 
 In case you don't want the @code{COMMIT} keyword in the first extension of 
your output FITS file, you can use the @option{--outfitsnocommit} option (see 
@ref{Input output options}).
 
@@ -17125,7 +17125,7 @@ Therefore the full script (except for the data download 
step) is available in @r
 PGFPlots uses the same @LaTeX{} graphic engine that typesets your paper/slide.
 Therefore when you build your plots and figures using PGFPlots (and its 
underlying package 
PGF/TiKZ@footnote{@url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}})
 your plots will blend beautifully within your text: same fonts, same colors, 
same line properties, etc.
 Since most papers (and presentation slides@footnote{To build slides, @LaTeX{} 
has packages like Beamer, see 
@url{http://mirrors.ctan.org/macros/latex/contrib/beamer/doc/beameruserguide.pdf}})
 are made with @LaTeX{}, PGFPlots is therefore the best tool for those who use 
@LaTeX{} to create documents.
-PGFPlots also does not need any extra dependencies beyond a basic/minimal 
@TeX{}-live installation, so it is much more reliable than tools like 
Matplotlib in Python that have hundreds of fast-evolving 
dependencies@footnote{See Figure 1 of Alliez et al. 2020 at 
@url{https://arxiv.org/pdf/1905.11123.pdf}}.
+PGFPlots also does not need any extra dependencies beyond a basic/minimal 
@TeX{}-live installation, so it is much more reliable than tools like 
Matplotlib in Python that have hundreds of fast-evolving 
dependencies@footnote{See Figure 1 of Alliez et al. 
@url{https://arxiv.org/abs/1905.11123,2019}.}.
 
 To demonstrate this, we will create a surface brightness image of a galaxy in 
the F160W filter of the ABYSS 
survey@footnote{@url{http://research.iac.es/proyecto/abyss}}.
 In the code-block below, let's make a ``build'' directory to keep intermediate 
files and avoid populating the source.
@@ -21009,7 +21009,7 @@ Its physical origin can be the brightness of the 
atmosphere (for ground-based in
 The Sky value is thus an ideal definition, because in real datasets, what lies 
deep in the noise (far lower than the detection limit) is never 
known@footnote{In a real image, a relatively large number of very faint objects 
can be fully buried in the noise and never detected.
 These undetected objects will bias the background measurement to slightly 
larger values.
 Our best approximation is thus to simply assume they are uniform, and consider 
their average effect.
-See Figure 1 (a.1 and a.2) and Section 2.2 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+See Figure 1 (a.1 and a.2) and Section 2.2 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.}.
 To account for all of these, the sky value is defined to be the average 
count/value of the undetected regions in the image.
 In a mock image/dataset, we have the luxury of setting the background (Sky) 
value.
 
@@ -26450,7 +26450,7 @@ The covariance matrix is also calculated, it is 
necessary to calculate error bar
 Finally, the reduced @mymath{\chi^2} (or @mymath{\chi_{red}^2}) of the fit is 
also printed (which was the measure to minimize).
 A @mymath{\chi_{red}^2\approx1} shows a good fit.
 This is good for real-world scenarios when you don't know the original values 
a priori.
-For more on interpreting @mymath{\chi_{red}^2\approx1}, see 
@url{https://arxiv.org/abs/1012.3754, Andrae et al (2010)}.
+For more on interpreting @mymath{\chi_{red}^2\approx1}, see Andrae et al. 
@url{https://arxiv.org/abs/1012.3754,2010}.
 
 The comparison of fitted and input values looks pretty good, but nothing 
beats visual inspection!
 To see how this looks compared to the data, let's open the table again:
@@ -26633,7 +26633,7 @@ Without knowing, and thus removing the effect of, this 
value it is impossible to
 
 In astronomy, this reference value is known as the ``Sky'' value: the value 
that noise fluctuates around: where there is no signal from detectable objects 
or artifacts (for example, galaxies, stars, planets or comets, star spikes or 
internal optical ghost).
 Depending on the dataset, the Sky value may be a fixed value over the whole 
dataset, or it may vary based on location.
-For an example of the latter case, see Figure 11 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+For an example of the latter case, see Figure 11 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 
 Because of the significance of the Sky value in astronomical data analysis, we 
have devoted this subsection to it for a thorough review.
 We start with a thorough discussion on its definition (@ref{Sky value 
definition}).
@@ -26652,7 +26652,7 @@ A more crude (but simpler method) that is usable in 
such situations is discussed
 @subsubsection Sky value definition
 
 @cindex Sky value
-This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa (2015)}.
+This analysis is taken from Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 Let's assume that all instrument defects -- bias, dark and flat -- have been 
corrected and the magnitude (see @ref{Brightness flux magnitude}) of a detected 
object, @mymath{O}, is desired.
 The sources of flux on pixel@footnote{For this analysis the dimension of the 
data (image) is irrelevant.
 So if the data is an image (2D) with width of @mymath{w} pixels, then a pixel 
located on column @mymath{x} and row @mymath{y} (where all counting starts from 
zero and (0, 0) is located on the bottom left corner of the image), would have 
an index: @mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as 
follows:
@@ -26699,7 +26699,7 @@ With this definition of Sky, the object flux in the 
data can be calculated, per
            O_{i}=T_{i}-S_{i}.}
 
 @cindex photo-electrons
-In the fainter outskirts of an object, a very small fraction of the 
photo-electrons in a pixel actually belongs to objects, the rest is caused by 
random factors (noise), see Figure 1b in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa (2015)}.
+In the fainter outskirts of an object, a very small fraction of the 
photo-electrons in a pixel actually belongs to the object; the rest is caused 
by random factors (noise), see Figure 1b in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 Therefore, even a small over-estimation of the Sky value will result in the 
loss of a very large portion of most galaxies.
 Besides the lost area/brightness, this will also cause an over-estimation of 
the Sky value and thus even more under-estimation of the object's magnitude.
 It is thus very important to detect the diffuse flux of a target, even if it 
is not your primary target.
@@ -26724,7 +26724,7 @@ In particular its detection threshold.
 However, most signal-based detection tools@footnote{According to Akhlaghi and 
Ichikawa (2015), signal-based detection is a detection process that relies 
heavily on assumptions about the to-be-detected objects.
 This method was the most heavily used technique prior to the introduction of 
NoiseChisel in that paper.} use the sky value as a reference to define the 
detection threshold.
 These older techniques therefore had to rely on approximations based on other 
assumptions about the data.
-A review of those other techniques can be seen in Appendix A of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+A review of those other techniques can be seen in Appendix A of Akhlaghi and 
Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 
 These methods were extensively used in astronomical data analysis for several 
decades; therefore they have given rise to a lot of misconceptions, ambiguities 
and disagreements about the sky value and how to measure it.
 As a summary, the major methods used until now were an approximation of the 
mode of the image pixel distribution and @mymath{\sigma}-clipping.
@@ -26746,7 +26746,7 @@ Therefore, even if it is accurately measured, the mode 
is a biased measure for t
 Another approach was to iteratively clip the brightest pixels in the image 
(which is known as @mymath{\sigma}-clipping).
 See @ref{Sigma clipping} for a complete explanation.
 @mymath{\sigma}-clipping is useful when there are clear outliers (an object 
with a sharp edge in an image for example).
-However, real astronomical objects have diffuse and faint wings that penetrate 
deeply into the noise, see Figure 1 in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa (2015)}.
+However, real astronomical objects have diffuse and faint wings that penetrate 
deeply into the noise, see Figure 1 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 @end itemize
 
 As discussed in @ref{Sky value}, the sky value can only be correctly defined 
as the average of undetected pixels.
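
As a rough sketch of the @mymath{\sigma}-clipped measurements discussed above 
(@file{image.fits} is a placeholder, and the clipping parameters are only 
illustrative, see @ref{Sigma clipping}):

@example
$ aststatistics image.fits --sclipparams=3,0.1 \
                --sigclip-mean --sigclip-std
@end example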
@@ -26774,7 +26774,7 @@ Noise is mainly due to counting errors in the detector 
electronics upon data col
 @emph{Data} is defined as the combination of signal and noise (so a noisy 
image of a galaxy is one @emph{data}set).
 
 When a dataset does not have any signal (for example, you take an image with a 
closed shutter, producing an image that only contains noise), the mean, median 
and mode of the distribution are equal within statistical errors.
-Signal from emitting objects, like astronomical targets, always has a positive 
value and will never become negative, see Figure 1 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Signal from emitting objects, like astronomical targets, always has a positive 
value and will never become negative, see Figure 1 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 Therefore, when signal is added to the data (you take an image with an open 
shutter pointing to a galaxy for example), the mean, median and mode of the 
dataset shift to the positive, creating a positively skewed distribution.
 The shift of the mean is the largest.
 The median shifts less, since it is defined after ordering all the 
elements/pixels (the median is the value at a quantile of 0.5), thus it is not 
affected by outliers.
@@ -26802,7 +26802,7 @@ You can read this option as 
``mean-median-quantile-difference''.
 @cindex Skewness
 @cindex Convolution
 The raw dataset's pixel distribution (in each tile) is noisy; to decrease the 
noise/error in estimating @mymath{q_{mean}}, we convolve the image before 
tessellation (see @ref{Convolution process}).
-Convolution decreases the range of the dataset and enhances its skewness, See 
Section 3.1.1 and Figure 4 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa (2015)}.
+Convolution decreases the range of the dataset and enhances its skewness; see 
Section 3.1.1 and Figure 4 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 This enhanced skewness can be interpreted as an increase in the Signal to 
noise ratio of the objects buried in the noise.
 Therefore, to obtain an even better measure of the presence of signal in a 
tile, the mean and median discussed above are measured on the convolved image.
 
@@ -26810,7 +26810,7 @@ Therefore, to obtain an even better measure of the 
presence of signal in a tile,
 There is one final hurdle: raw astronomical datasets are commonly peppered 
with Cosmic rays.
 Images of Cosmic rays are not smoothed by the atmosphere or telescope 
aperture, so they have sharp boundaries.
 Also, since they do not occupy too many pixels, they do not affect the mode 
and median calculation.
-But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example, see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa (2015)}.
+But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example, see Figure 15 in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 The effect of outliers like cosmic rays on the mean and standard deviation can 
be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a 
complete explanation.
 
 Therefore, after asserting that the mean and median are approximately equal in 
a tile (see @ref{Tessellation}), the Sky and its STD are measured on each tile 
after @mymath{\sigma}-clipping with the @option{--sigmaclip} option (see 
@ref{Sigma clipping}).
@@ -27102,7 +27102,7 @@ As the number of elements increases, the mean itself is 
less affected by a small
 @item -O
 @itemx --mode
 Print the mode of all used elements.
-The mode is found through the mirror distribution which is fully described in 
Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015}.
+The mode is found through the mirror distribution which is fully described in 
Appendix C of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 See that section for a full description.
 
 This mode calculation algorithm is non-parametric, so when the dataset is not 
large enough (larger than about 1000 elements usually), or does not have a 
clear mode, it can fail.
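
For example, a sketch requesting the mode together with its symmetricity 
(@file{image.fits} is a placeholder for your own dataset, and the 
@option{--modesym} option is described below):

@example
$ aststatistics image.fits --mode --modesym
@end example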
@@ -27122,14 +27122,11 @@ In such cases, this option will become handy.
 Print the symmetricity of the calculated mode.
 See the description of @option{--mode} for more.
 This mode algorithm finds the mode based on how symmetric it is, so if the 
symmetricity returned by this option is too low, the mode is not too accurate.
-See Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015} for a full description.
+See Appendix C of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} for a full description.
 In practice, symmetricity values larger than 0.2 are mostly good.
 
 @item --modesymvalue
-Print the value in the distribution where the mirror and input
-distributions are no longer symmetric, see @option{--mode} and Appendix C
-of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} for
-more.
+Print the value in the distribution where the mirror and input distributions 
are no longer symmetric, see @option{--mode} and Appendix C of Akhlaghi and 
Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} for more.
 
 @item  --sigclip-std
 @itemx --sigclip-mad
@@ -27293,7 +27290,7 @@ The @mymath{\sigma}-clipping parameters can be set 
through the @option{--sclippa
 
 @item --mirror=FLT
 Make a histogram and cumulative frequency plot of the mirror distribution for 
the given dataset when the mirror is located at the value to this option.
-The mirror distribution is fully described in Appendix C of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and 
currently it is only used to calculate the mode (see @option{--mode}).
+The mirror distribution is fully described in Appendix C of Akhlaghi and 
Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and currently it is only 
used to calculate the mode (see @option{--mode}).
 
 Just note that the mirror distribution is a discrete distribution like the 
input, so while you may give any number as the value to this option, the actual 
mirror value is the closest number in the input dataset to this value.
 If the two numbers are different, Statistics will warn you of the actual 
mirror value used.
@@ -27302,7 +27299,7 @@ This option will make a table as output.
 Depending on your selected name for the output, it will be either a FITS table 
or a plain text table (which is the default).
 It contains three columns: the first is the center of the bins, the second is 
the histogram (with the largest value set to 1) and the third is the normalized 
cumulative frequency plot of the mirror distribution.
 The bins will be positioned such that the mode is on the starting interval of 
one of the bins to make it symmetric around the mirror.
-With this output file and the input histogram (that you can generate in 
another run of Statistics, using the @option{--onebinvalue}), it is possible to 
make plots like Figure 21 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa 2015}.
+With this output file and the input histogram (that you can generate in 
another run of Statistics, using the @option{--onebinvalue}), it is possible to 
make plots like Figure 21 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 
 @end table
 
@@ -27698,7 +27695,7 @@ $ astarithmetic in.fits 100 gt 2 connected-components
 
 Since almost no astronomical target has such sharp edges, we need a more 
advanced detection methodology.
 NoiseChisel uses a new noise-based paradigm for detection of very extended and 
diffuse targets that are drowned deeply in the ocean of noise.
-It was initially introduced in @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]} and improvements after the first four were published in 
@url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
+It was initially introduced in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} and improvements after the first 
four years were published in Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}.
 Please take the time to go through these papers to most effectively understand 
the need of NoiseChisel and how best to use it.
 
 The name of NoiseChisel is derived from the first thing it does after 
thresholding the dataset: to erode it.
@@ -27718,7 +27715,7 @@ After segmentation, the connected foreground pixels 
will get separate labels, en
 NoiseChisel is only focused on detection (separating signal from noise); to 
@emph{segment} the signal (into separate galaxies for example), Gnuastro has a 
separate specialized program @ref{Segment}.
 NoiseChisel's output can be directly/readily fed into Segment.
 
-For more on NoiseChisel's output format and its benefits (especially in 
conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see 
@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
+For more on NoiseChisel's output format and its benefits (especially in 
conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see 
Akhlaghi @url{https://arxiv.org/abs/1611.06387,2016}.
 Just note that when that paper was published, Segment was not yet spun-off 
into a separate program, and NoiseChisel did both detection and segmentation.
 
 NoiseChisel's output is designed to be generic enough to be easily used in any 
higher-level analysis.
@@ -27755,7 +27752,7 @@ If you have used NoiseChisel within your research, 
please run it with @option{--
 @node NoiseChisel changes after publication, Invoking astnoisechisel, 
NoiseChisel, NoiseChisel
 @subsection NoiseChisel changes after publication
 
-NoiseChisel was initially introduced in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]} and updates after the first four years were 
published in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
+NoiseChisel was initially introduced in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} and updates after the first four 
years were published in Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
 To help in understanding how it works, those papers have many figures showing 
every step on multiple mock and real examples.
 We recommend reading these papers for a good understanding of what it does 
and how each parameter influences the output.
 
@@ -27828,7 +27825,7 @@ A convolution kernel can also be optionally given.
 If a value (file name) is given to @option{--kernel} on the command-line or in 
a configuration file (see @ref{Configuration files}), then that file will be 
used to convolve the image prior to thresholding.
 Otherwise a default kernel will be used.
 For a 2D image, the default kernel is a 2D Gaussian with a FWHM of 2 pixels 
truncated at 5 times the FWHM.
-This choice of the default kernel is discussed in Section 3.1.1 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi 
and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}.
 For a 3D cube, it is a Gaussian with FWHM of 1.5 pixels in the first two 
dimensions and 0.75 pixels in the third dimension.
 See @ref{Convolution kernel} for kernel related options.
 Passing @code{none} to @option{--kernel} will disable convolution.
@@ -27999,7 +27996,7 @@ The HDU/extension containing the convolved image in the 
file given to @option{--
 @item -w FITS
 @itemx --widekernel=FITS
 File name of a wider kernel to use in estimating the difference of the mode 
and median in a tile (this difference is used to identify the significance of 
signal in that tile, see @ref{Quantifying signal in a tile}).
-As displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}, a wider kernel will help in identifying the skewness 
caused by data in noise.
+As displayed in Figure 4 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}, a wider kernel will help in 
identifying the skewness caused by data in noise.
 The image that is convolved with this kernel is @emph{only} used for this 
purpose.
 Once the mode is found to be sufficiently close to the median, the quantile 
threshold is found on the image convolved with the sharper kernel 
(@option{--kernel}, see @option{--qthresh}).
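
A sketch of such a call (both kernel file names here are hypothetical; kernels 
of different widths can be built with MakeProfiles):

@example
$ astnoisechisel image.fits --kernel=sharp-kernel.fits \
                 --widekernel=wide-kernel.fits
@end example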
 
@@ -28521,7 +28518,7 @@ It is therefore necessary to do a more sophisticated 
segmentation and break up t
 
 Segment will use a detection map and its corresponding dataset to find 
sub-structure over the detected areas and use them for its segmentation.
 Until Gnuastro version 0.6 (released in 2018), Segment was part of 
@ref{NoiseChisel}.
-Therefore, similar to NoiseChisel, the best place to start reading about 
Segment and understanding what it does (with many illustrative figures) is 
Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}, and continue with @url{https://arxiv.org/abs/1909.11230, Akhlaghi 
[2019]}.
+Therefore, similar to NoiseChisel, the best place to start reading about 
Segment and understanding what it does (with many illustrative figures) is 
Section 3.2 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}, and continue with Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}.
 
 @cindex river
 @cindex Watershed algorithm
@@ -28532,11 +28529,11 @@ Using the undetected regions can be disabled by 
directly giving a signal-to-nois
 
 The true clumps are then grown to a certain threshold over the detections.
 Based on the strength of the connections (rivers/watersheds) between the grown 
clumps, they are considered parts of one @emph{object} or as separate 
@emph{object}s.
-See Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]} for more.
+See Section 3.2 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} for more.
 Segment's main outputs are thus two labeled datasets: 1) clumps, and 2) objects.
 See @ref{Segment output} for more.
 
-To start learning about Segment, especially in relation to detection 
(@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended 
references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}, @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]} and 
@url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
+To start learning about Segment, especially in relation to detection 
(@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended 
references are Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}, Akhlaghi 
@url{https://arxiv.org/abs/1611.06387,2016} and Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}.
 If you have used Segment within your research, please run it with 
@option{--cite} to list the papers you should cite and how to acknowledge its 
funding sources.
 
 Those papers cannot be updated any more but the software will evolve.
@@ -28553,7 +28550,7 @@ Finally, in @ref{Invoking astsegment}, we will discuss 
Segment's inputs, outputs
 @c @node Segment changes after publication, Invoking astsegment, Segment, 
Segment
 @c @subsection Segment changes after publication
 
-@c Segment's main algorithm and working strategy were initially defined and 
introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi 
[2019]}.
+@c Segment's main algorithm and working strategy were initially defined and 
introduced in Section 3.2 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} and 
@url{https://arxiv.org/abs/1909.11230,2019}.
 @c It is strongly recommended to read those papers for a good understanding of 
what Segment does, how it relates to detection, and how each parameter 
influences the output.
 @c They have many figures showing every step on multiple mock and real 
examples.
 
@@ -28761,7 +28758,7 @@ It is important for them to be much larger than the 
clumps.
 @subsubsection Segmentation options
 
 The options below can be used to configure every step of the segmentation 
process in the Segment program.
-For a more complete explanation (with figures to demonstrate each step), 
please see Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]}, and also @ref{Segment}.
+For a more complete explanation (with figures to demonstrate each step), 
please see Section 3.2 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}, and also @ref{Segment}.
 By default, Segment will follow the procedure described in the paper to find 
the S/N threshold based on the noise properties.
 This can be disabled by directly giving a trustable signal-to-noise ratio to 
the @option{--clumpsnthresh} option.
 
@@ -28780,7 +28777,7 @@ Please see the descriptions there for more.
 
 @item --minima
 Build the clumps based on the local minima, not maxima.
-By default, clumps are built starting from local maxima (see Figure 8 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}).
+By default, clumps are built starting from local maxima (see Figure 8 of 
Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}).
 Therefore, this option can be useful when you are searching for true local 
minima (for example, absorption features).
 
 @item -m INT
@@ -28813,13 +28810,13 @@ Please see the descriptions there for more.
 The quantile of the signal-to-noise ratio distribution of clumps in undetected 
regions, used to define true clumps.
 After identifying all the usable clumps in the undetected regions of the 
dataset, the given quantile of their signal-to-noise ratios is used to define 
the signal-to-noise ratio of a ``true'' clump.
 Effectively, this can be seen as an inverse p-value measure.
-See Figure 9 and Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]} for a complete explanation.
+See Figure 9 and Section 3.2.1 of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015} for a complete explanation.
 The full distribution of clump signal-to-noise ratios over the undetected 
areas can be saved into a table with @option{--checksn} option and visually 
inspected with @option{--checksegmentation}.
 
 @item -v
 @itemx --keepmaxnearriver
 Keep a clump whose maximum (minimum if @option{--minima} is called) flux is 
8-connected to a river pixel.
-By default such clumps over detections are considered to be noise and are 
removed irrespective of their significance measure (see 
@url{https://arxiv.org/abs/1909.11230,Akhlaghi 2019}).
+By default such clumps over detections are considered to be noise and are 
removed irrespective of their significance measure; see Akhlaghi 
@url{https://arxiv.org/abs/1909.11230,2019}.
 Over large profiles, that sink into the noise very slowly, noise can cause 
part of the profile (which was flat without noise) to become a very large 
clump with a very high signal-to-noise ratio.
 In such cases, the pixel with the maximum flux in the clump will be 
immediately touching a river pixel.
 
@@ -28898,7 +28895,7 @@ To remove these redundant extensions from the output 
(for example, when designin
 The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into 
@ref{MakeCatalog} to generate a catalog for higher-level analysis.
 If you want to treat each clump separately, you can give a very large value 
(or even a NaN, which will always fail) to the @option{--gthresh} option (for 
example, @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation 
options}.
 
-For a complete definition of clumps and objects, please see Section 3.2 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and 
@ref{Segmentation options}.
+For a complete definition of clumps and objects, please see Section 3.2 of 
Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015} and 
@ref{Segmentation options}.
 The clumps are ``true'' local maxima (minima if @option{--minima} is called) and their surrounding pixels, out to a local minimum/maximum (caused by noise fluctuations, or another ``true'' clump).
 Therefore, it may happen that some of the input detections are not covered by clumps at all (very diffuse objects without any strong peak), while some objects may contain many clumps.
 Even in detections that do have clumps, there will be regions that are too diffuse to be covered by any clump.
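
For example, the minimal sketch below (with hypothetical file names) disables growth so that each clump is effectively treated separately, then builds a catalog from the result:

@example
$ astsegment det.fits --gthresh=nan --output=seg.fits
$ astmkcatalog seg.fits --ids --magnitude
@end example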
@@ -28981,7 +28978,7 @@ For example, the magnitudes, positions and elliptical 
properties of the galaxies
 
 MakeCatalog is Gnuastro's program for localized measurements over a dataset.
 In other words, MakeCatalog is Gnuastro's program to convert low-level datasets (like images) to high-level catalogs.
-The role of MakeCatalog in a scientific analysis and the benefits of its model 
(where detection/segmentation is separated from measurement) is discussed in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A published 
paper cannot undergo any more change, so this manual is the definitive guide.} 
and summarized in @ref{Detection and catalog production}.
+The role of MakeCatalog in a scientific analysis and the benefits of its model 
(where detection/segmentation is separated from measurement) is discussed in 
Akhlaghi @url{https://arxiv.org/abs/1611.06387v1,2016}@footnote{A published 
paper cannot undergo any more change, so this manual is the definitive guide.} 
and summarized in @ref{Detection and catalog production}.
 We strongly recommend reading this short paper for a better understanding of 
this methodology.
 Understanding the effective usage of MakeCatalog will thus also help in the effective use of other (lower-level) Gnuastro programs like @ref{NoiseChisel} or @ref{Segment}.
 
@@ -29037,7 +29034,7 @@ For those who feel MakeCatalog's existing 
measurements/columns are not enough an
 Most existing common tools in low-level astronomical data-analysis (for 
example, 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
 However, in light of Gnuastro's modularized approach (modeled on the Unix system), detection is separated from measurements and catalog production.
 This modularity is therefore new to many experienced astronomers and deserves 
a short review here.
-Further discussion on the benefits of this methodology can be seen in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
+Further discussion on the benefits of this methodology can be seen in Akhlaghi 
@url{https://arxiv.org/abs/1611.06387v1,2016}.
 
 As discussed in the introduction of @ref{MakeCatalog}, detection (identifying 
which pixels to do measurements on) can be done with different programs.
 Their outputs (a labeled dataset) can be directly fed into MakeCatalog to do 
the measurements and write the result as a catalog/table.
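
For example, the commands below are a minimal sketch of this modularity in practice (file names are hypothetical; all options besides the requested columns are left at their defaults): detection, segmentation and catalog production are three separate programs:

@example
$ astnoisechisel image.fits --output=nc.fits
$ astsegment nc.fits --output=seg.fits
$ astmkcatalog seg.fits --ids --ra --dec --magnitude
@end example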
@@ -29483,7 +29480,7 @@ But, @mymath{\Delta{B}/B} is just the inverse of the 
Signal-to-noise ratio (@mym
 @dispmath{ \Delta{M} = {2.5\over{S/N\times ln(10)}} }
 
 MakeCatalog uses this relation to estimate the magnitude errors.
-The signal-to-noise ratio is calculated in different ways for clumps and 
objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}), but this single equation can be used to estimate the measured 
magnitude error afterwards for any type of target.
+The signal-to-noise ratio is calculated in different ways for clumps and objects (see Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664,2015}), but this single equation can be used to estimate the measured magnitude error afterwards for any type of target.
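
For example, for a detection with a signal-to-noise ratio of 10, the relation above gives (a rough illustrative evaluation):

@dispmath{ \Delta{M} = {2.5\over{10\times ln(10)}} \approx 0.11 }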
 
 @node Surface brightness error of each detection, Completeness limit of each 
detection, Magnitude measurement error of each detection, Quantifying 
measurement limits
 @subsubsection Surface brightness error of each detection
@@ -29795,7 +29792,7 @@ When the object cannot be represented as an ellipse, 
this interpretation breaks
 @node Adding new columns to MakeCatalog, MakeCatalog measurements, Measuring 
elliptical parameters, MakeCatalog
 @subsection Adding new columns to MakeCatalog
 
-MakeCatalog is designed to allow easy addition of different measurements over 
a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}).
+MakeCatalog is designed to allow easy addition of different measurements over 
a labeled image; see Akhlaghi @url{https://arxiv.org/abs/1611.06387v1,2016}.
 A check-list of the necessary steps to do that is given in this section.
 The common development characteristics of MakeCatalog and other Gnuastro programs are explained in @ref{Developing}.
 We strongly encourage you to have a look at that chapter to greatly simplify 
your navigation in the code.
@@ -30213,7 +30210,7 @@ This can be a good measure to see how much you can 
trust the random measurements
 
 @item --river-mean
 [Clumps] The average of the river pixel values around this clump.
-River pixels were defined in @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa 2015}.
+River pixels were defined in Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 In short, they are the pixels immediately outside of the clumps.
 This value is used internally to find the sum (or magnitude) and signal-to-noise ratio of the clumps.
 It can generally also be used as a scale to gauge the base (ambient) flux 
surrounding the clump.
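
For example, a minimal sketch (with a hypothetical Segment output) that includes this column in the clumps catalog (requested here with @option{--clumpscat}, on the assumption that a clumps catalog is desired):

@example
$ astmkcatalog seg.fits --clumpscat --ids --river-mean
@end example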
@@ -32769,7 +32766,7 @@ Measuring the change of distance in a time-dependent 
(expanding) universe only m
 In this geometry, different observers at a given point (event) in space-time 
split up space-time into `space' and `time' in different ways, just like people 
at the same spatial position can make different choices of splitting up a map 
into `left--right' and `up--down'.
 This model is well supported by twentieth and twenty-first century 
observations.}.
 But we can only add bits of space and time together if we measure them in the 
same units: with a conversion constant (similar to how 1000 is used to convert 
a kilometer into meters).
-Experimentally, we find strong support for the hypothesis that this conversion 
constant is the speed of light (or gravitational waves@footnote{The speed of 
gravitational waves was recently found to be very similar to that of light in 
vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a 
vacuum.
+Experimentally, we find strong support for the hypothesis that this conversion 
constant is the speed of light (or gravitational waves@footnote{The speed of 
gravitational waves was recently found to be very similar to that of light in 
vacuum, see LIGO Collaboration @url{https://arxiv.org/abs/1710.05834,2017}.}) 
in a vacuum.
 This speed is postulated to be constant@footnote{In @emph{natural units}, 
speed is measured in units of the speed of light in vacuum.} and is almost 
always written as @mymath{c}.
 We can thus parameterize the change in distance on an expanding 2D surface as
 
@@ -34375,8 +34372,8 @@ When the spacing between pointings is large, they are 
known as an ``offset''.
 A ``pointing'' is used to refer to either a dither or an offset.
 
 @cindex Exposure map
-For example see Figures 3 and 4 of @url{https://arxiv.org/pdf/1305.1931.pdf, 
Illingworth et al. 2013} which show the exposures that went into the XDF survey.
-The pointing pattern can also be large compared to the field of view, for 
example see Figure 1 of @url{https://arxiv.org/pdf/2109.07478.pdf, Trujillo et 
al. 2021}, which show the pointing strategy for the LIGHTS survey.
+For example, see Figures 3 and 4 of Illingworth et al. @url{https://arxiv.org/pdf/1305.1931.pdf,2013}, which show the exposures that went into the XDF survey.
+The pointing pattern can also be large compared to the field of view; for example, see Figure 1 of Trujillo et al. @url{https://arxiv.org/pdf/2109.07478.pdf,2021}, which shows the pointing strategy for the LIGHTS survey.
 These types of images (where each pixel contains the number of exposures, or 
time, that were used in it) are known as exposure maps.
 
 The pointing pattern therefore is strongly defined by the science case 
(high-level purpose of the observation) and your telescope's field of view.
@@ -34591,7 +34588,7 @@ To solve this issue, it is possible to perform some 
transformations of the image
 This is actually what the current script does: it makes some non-linear 
transformations and then uses Gnuastro's ConvertType to generate the color 
image.
 There are several parameters and options in order to change the final output 
that are described in @ref{Invoking astscript-color-faint-gray}.
 A full tutorial describing this script with actual data is available in 
@ref{Color images with full dynamic range}.
-A general overview of this script is published in Infante-Sainz et al. 
@url{https://arxiv.org/abs/2401.03814, 2024}; please cite it if this script 
proves useful in your research.
+A general overview of this script is published in Infante-Sainz et al. 
@url{https://arxiv.org/abs/2401.03814,2024}; please cite it if this script 
proves useful in your research.
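
As a minimal sketch of calling the script (the three input images for the red, green and blue channels and the output name are hypothetical; see @ref{Invoking astscript-color-faint-gray} for the full set of options):

@example
$ astscript-color-faint-gray R.fits G.fits B.fits \
           --output=color.pdf
@end example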
 
 @menu
 * Invoking astscript-color-faint-gray::  Details of options and arguments.
@@ -34842,7 +34839,7 @@ Therefore, it is necessary to obtain an empirical 
(non-parametric) and extended
 In this section we describe a set of installed scripts in Gnuastro that will let you construct the non-parametric PSF using point-like sources.
 They allow you to derive the PSF from the same astronomical images that the 
science is derived from (without assuming any analytical function).
 
-The scripts are based on the concepts described in Infante-Sainz et al. (2020, 
@url{https://arxiv.org/abs/1911.01430}).
+The scripts are based on the concepts described in Infante-Sainz et al. 
@url{https://arxiv.org/abs/1911.01430,2020}.
 But to be complete, we first give a summary of the logic and overview of their 
combined usage in @ref{Overview of the PSF scripts}.
 Furthermore, before going into the technical details of each script, we 
encourage you to go through the tutorial that is devoted to this at 
@ref{Building the extended PSF}.
 The tutorial uses a real dataset and includes all the logic and reasoning 
behind every step of the usage in every installed script.
@@ -34860,7 +34857,7 @@ The tutorial uses a real dataset and includes all the 
logic and reasoning behind
 @subsection Overview of the PSF scripts
 
 To obtain an extended and non-parametric PSF, several steps are necessary and 
we will go through them here.
-The fundamental ideas of the following methodology are thoroughly described in 
Infante-Sainz et al. (2020, @url{https://arxiv.org/abs/1911.01430}).
+The fundamental ideas of the following methodology are thoroughly described in 
Infante-Sainz et al. @url{https://arxiv.org/abs/1911.01430,2020}.
 A full tutorial is also available in @ref{Building the extended PSF}.
 The tutorial will go through the full process on a pre-selected dataset, but will describe the logic behind every step in a way that can easily be modified/generalized to other datasets.
 
@@ -34898,7 +34895,7 @@ However, these stars will be saturated in the most 
inner part, and immediately o
 Consequently, fainter stars are necessary for the inner regions.
 
 Therefore, you need to repeat the steps above for stars in different magnitude ranges, to obtain the PSF over the corresponding radial ranges.
-For example, in Infante-Sainz et al. (2020, 
@url{https://arxiv.org/abs/1911.01430}), the final PSF was constructed from 
three regions (and thus, using stars from three ranges in magnitude).
+For example, in Infante-Sainz et al. 
@url{https://arxiv.org/abs/1911.01430,2020}, the final PSF was constructed from 
three regions (and thus, using stars from three ranges in magnitude).
 In other cases, we even needed four groups of stars!
 But in the example dataset from the tutorial, only two groups are necessary 
(see @ref{Building the extended PSF}).
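
For example, the first of these scripts selects the suitable stars; a minimal sketch of calling it is below (the image name and magnitude range are only illustrative; the following sections document every script in detail):

@example
$ astscript-psf-select-stars image.fits \
           --magnituderange=6,10
@end example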
 
@@ -42524,7 +42521,7 @@ If the dataset doesn't have a numeric type (as in a 
string), this function will
 
 @deftypefun {gal_data_t *} gal_statistics_mode (gal_data_t @code{*input}, 
float @code{mirrordist}, int @code{inplace})
 Return a four-element (@code{double} or @code{float64}) dataset that contains 
the mode of the @code{input} distribution.
-This function implements the non-parametric algorithm to find the mode that is 
described in Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]}.
+This function implements the non-parametric algorithm to find the mode that is 
described in Appendix C of Akhlaghi and Ichikawa 
@url{https://arxiv.org/abs/1505.01664,2015}.
 
 In short, it compares the actual distribution and its ``mirror distribution'' to find the mode.
 In order to be efficient, you can determine how far the comparison goes away 
from the mirror through the @code{mirrordist} parameter (think of it as a 
multiple of sigma/error).
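
As a very rough C sketch of calling this function (the file name, HDU and @code{mirrordist} value are only illustrative assumptions; link with @code{-lgnuastro}):

@example
#include <stdio.h>
#include <gnuastro/fits.h>
#include <gnuastro/statistics.h>

int
main(void)
@{
  double *d;
  gal_data_t *input, *mode;

  /* Read the image (hypothetical file name and HDU). */
  input=gal_fits_img_read("image.fits", "1", -1, 1);

  /* Find the mode; 'mirrordist=1.5' is only an illustrative value. */
  mode=gal_statistics_mode(input, 1.5, 0);

  /* The output is a four-element 'float64' dataset; its first
     element is the mode itself (see the description above). */
  d=mode->array;
  printf("mode: %f\n", d[0]);

  /* Clean up. */
  gal_data_free(mode);
  gal_data_free(input);
  return 0;
@}
@end example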
@@ -43215,7 +43212,7 @@ Use the watershed algorithm@footnote{The watershed 
algorithm was initially intro
 It starts from the minima and puts the pixels in, one by one, to grow them until they touch (creating a watershed).
 For more, also see the Wikipedia article: 
@url{https://en.wikipedia.org/wiki/Watershed_%28image_processing%29}.} to 
``over-segment'' the pixels in the @code{indexs} dataset based on values in the 
@code{values} dataset.
 Internally, each local extremum (maximum or minimum, based on @code{min0_max1}) and its surrounding pixels will be given a unique label.
-For demonstration, see Figures 8 and 9 of 
@url{http://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+For demonstration, see Figures 8 and 9 of Akhlaghi and Ichikawa 
@url{http://arxiv.org/abs/1505.01664,2015}.
 If @code{topinds!=NULL}, it is assumed to point to an already allocated space to write the index of each clump's local extremum; otherwise, it is ignored.
 
 The @code{values} dataset must have a 32-bit floating point type 
(@code{GAL_TYPE_FLOAT32}, see @ref{Library data types}) and will only be read 
by this function.
@@ -43283,14 +43280,14 @@ You can simply allocate (and initialize), such an 
array with the @code{gal_data_
 For example, if the @code{gal_data_t} array is called @code{array}, you can 
pass @code{&array[i]} as @code{sig}.
 
 Along with some other functions in @code{label.h}, this function was initially 
written for @ref{Segment}.
-The description of the parameter used to measure a clump's significance is 
fully given in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
+The description of the parameter used to measure a clump's significance is 
fully given in Akhlaghi @url{https://arxiv.org/abs/1909.11230,2019}.
 
 @end deftypefun
 
 @deftypefun void gal_label_grow_indexs (gal_data_t @code{*labels}, gal_data_t 
@code{*indexs}, int @code{withrivers}, int @code{connectivity})
 Grow the (positive) labels of @code{labels} over the pixels in @code{indexs} 
(see description of @code{gal_label_indexs}).
 The pixels (position in @code{indexs}, values in @code{labels}) that must be 
``grown'' must have a value of @code{GAL_LABEL_INIT} in @code{labels} before 
calling this function.
-For a demonstration see Columns 2 and 3 of Figure 10 in 
@url{http://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+For a demonstration see Columns 2 and 3 of Figure 10 in Akhlaghi and Ichikawa 
@url{http://arxiv.org/abs/1505.01664,2015}.
 
 In many aspects, this function is very similar to over-segmentation (watershed 
algorithm, @code{gal_label_watershed}).
 The big difference is that in over-segmentation, local maxima (that are not touching any already labeled pixel) get a separate label.
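
As a minimal C sketch of one call (the @code{withrivers} and @code{connectivity} values below are only illustrative; @code{labels} and @code{indexs} are assumed to have been prepared as described above):

@example
#include <gnuastro/data.h>
#include <gnuastro/label.h>

/* 'labels' must already contain the positive labels and the
   GAL_LABEL_INIT pixels; 'indexs' holds the indexes of the pixels
   that should be grown (both prepared as described above). */
void
grow_once(gal_data_t *labels, gal_data_t *indexs)
@{
  /* Grow the labels, keeping river pixels between adjacent labels
     ('withrivers=1') and with a connectivity of 1 (both values are
     only illustrative choices). */
  gal_label_grow_indexs(labels, indexs, 1, 1);
@}
@end example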
@@ -45294,7 +45291,7 @@ Like C++, Python is object oriented, so as explained 
above, it needs a high leve
 To make things worse, since it is mainly for on-the-go 
programming@footnote{Note that Python is good for fast programming, not fast 
programs.}, it can undergo significant changes.
 One recent example is how Python 2.x and Python 3.x are not compatible.
 Lots of research teams that invested heavily in Python 2.x cannot benefit from 
Python 3.x or future versions any more.
-Some converters are available, but since they are automatic, lots of 
complications might arise in the conversion@footnote{For example see 
@url{https://arxiv.org/abs/1712.00461, Jenness (2017)} which describes how LSST 
is managing the transition.}.
+Some converters are available, but since they are automatic, lots of complications might arise in the conversion@footnote{For example, see Jenness @url{https://arxiv.org/abs/1712.00461,2017}, which describes how LSST is managing the transition.}.
 If a research project begins using Python 3.x today, there is no telling how compatible its investments will be when Python 4.x or 5.x comes out.
 
 @cindex JVM: Java virtual machine


