gnuastro-commits


From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 0a348a9: Book: extended implementation of one-sentence-per-line
Date: Mon, 16 Sep 2019 20:46:18 -0400 (EDT)

branch: master
commit 0a348a9aeccde92f400856652fcbb224deac44dc
Author: Mohammad Akhlaghi <address@hidden>
Commit: Mohammad Akhlaghi <address@hidden>

    Book: extended implementation of one-sentence-per-line
    
    More of the book source is now converted to this format. All the programs
    are covered and I have reached the library.
---
 doc/gnuastro.texi | 8088 +++++++++++++++++++----------------------------------
 1 file changed, 2821 insertions(+), 5267 deletions(-)

diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 8a56c57..4ae0067 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -11750,12 +11750,8 @@ Alternatively, a value of @code{0} will keep output pixels that are even infinit
 @node Data analysis, Modeling and fittings, Data manipulation, Top
 @chapter Data analysis
 
-Astronomical datasets (images or tables) contain very valuable information,
-the tools in this section can help in analyzing, extracting, and
-quantifying that information. For example getting general or specific
-statistics of the dataset (with @ref{Statistics}), detecting signal within
-a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an
-input dataset (with @ref{MakeCatalog}).
+Astronomical datasets (images or tables) contain very valuable information; the tools in this section can help in analyzing, extracting, and quantifying that information.
+For example, getting general or specific statistics of the dataset (with @ref{Statistics}), detecting signal within a noisy dataset (with @ref{NoiseChisel}), or creating a catalog from an input dataset (with @ref{MakeCatalog}).
 
 @menu
 * Statistics::                  Calculate dataset statistics.
@@ -11768,24 +11764,15 @@ input dataset (with @ref{MakeCatalog}).
 @node Statistics, NoiseChisel, Data analysis, Data analysis
 @section Statistics
 
-The distribution of values in a dataset can provide valuable information
-about it. For example, in an image, if it is a positively skewed
-distribution, we can see that there is significant data in the image. If
-the distribution is roughly symmetric, we can tell that there is no
-significant data in the image. In a table, when we need to select a sample
-of objects, it is important to first get a general view of the whole
-sample.
-
-On the other hand, you might need to know certain statistical parameters of
-the dataset. For example, if we have run a detection algorithm on an image,
-and we want to see how accurate it was, one method is to calculate the
-average of the undetected pixels and see how reasonable it is (if detection
-is done correctly, the average of undetected pixels should be approximately
-equal to the background value, see @ref{Sky value}). In a table, you might
-have calculated the magnitudes of a certain class of objects and want to get
-some general characteristics of the distribution immediately on the
-command-line (very fast!), to possibly change some parameters. The
-Statistics program is designed for such situations.
+The distribution of values in a dataset can provide valuable information about it.
+For example, in an image, a positively skewed distribution shows that there is significant data in the image.
+If the distribution is roughly symmetric, we can tell that there is no significant data in the image.
+In a table, when we need to select a sample of objects, it is important to first get a general view of the whole sample.
+
+On the other hand, you might need to know certain statistical parameters of the dataset.
+For example, if we have run a detection algorithm on an image, and we want to see how accurate it was, one method is to calculate the average of the undetected pixels and see how reasonable it is (if detection is done correctly, the average of undetected pixels should be approximately equal to the background value, see @ref{Sky value}).
+In a table, you might have calculated the magnitudes of a certain class of objects and want to get some general characteristics of the distribution immediately on the command-line (very fast!), to possibly change some parameters.
+The Statistics program is designed for such situations.
 
 @menu
 * Histogram and Cumulative Frequency Plot::  Basic definitions.
@@ -11799,112 +11786,67 @@ Statistics program is designed for such situations.
 @subsection Histogram and Cumulative Frequency Plot
 
 @cindex Histogram
-Histograms and the cumulative frequency plots are both used to visually
-study the distribution of a dataset. A histogram shows the number of data
-points which lie within pre-defined intervals (bins). So on the horizontal
-axis we have the bin centers and on the vertical, the number of points that
-are in that bin. You can use it to get a general view of the distribution:
-which values have been repeated the most? how close/far are the most
-significant bins?  Are there more values in the larger part of the range of
-the dataset, or in the lower part?  Similarly, many very important
-properties about the dataset can be deduced from a visual inspection of the
-histogram. In the Statistics program, the histogram can be either output to
-a table to plot with your favorite plotting program@footnote{We recommend
-@url{http://pgfplots.sourceforge.net/,PGFPlots} which generates your plots
-directly within @TeX{} (the same tool that generates your document).}, or
-it can be shown with ASCII characters on the command-line, which is very
-crude, but good enough for a fast and on-the-go analysis, see the example
-in @ref{Invoking aststatistics}.
+Histograms and cumulative frequency plots are both used to visually study the distribution of a dataset.
+A histogram shows the number of data points which lie within pre-defined intervals (bins).
+So on the horizontal axis we have the bin centers and on the vertical axis, the number of points that are in that bin.
+You can use it to get a general view of the distribution: which values have been repeated the most? How close/far are the most significant bins? Are there more values in the larger part of the range of the dataset, or in the lower part? Similarly, many very important properties of the dataset can be deduced from a visual inspection of the histogram.
+In the Statistics program, the histogram can either be output to a table to plot with your favorite plotting program@footnote{
+We recommend @url{http://pgfplots.sourceforge.net/,PGFPlots}, which generates your plots directly within @TeX{} (the same tool that generates your document).},
+or be shown with ASCII characters on the command-line, which is very crude, but good enough for a fast and on-the-go analysis; see the example in @ref{Invoking aststatistics}.
 
 @cindex Intervals, histogram
 @cindex Bin width, histogram
 @cindex Normalizing histogram
 @cindex Probability density function
-The width of the bins is only necessary parameter for a histogram. In the
-limiting case that the bin-widths tend to zero (while assuming the number
-of points in the dataset tend to infinity), then the histogram will tend to
-the @url{https://en.wikipedia.org/wiki/Probability_density_function,
-probability density function} of the distribution. When the absolute number
-of points in each bin is not relevant to the study (only the shape of the
-histogram is important), you can @emph{normalize} a histogram so like the
-probability density function, the sum of all its bins will be one.
+The width of the bins is the only necessary parameter for a histogram.
+In the limiting case that the bin-widths tend to zero (while the number of points in the dataset tends to infinity), the histogram will tend to the @url{https://en.wikipedia.org/wiki/Probability_density_function, probability density function} of the distribution.
+When the absolute number of points in each bin is not relevant to the study (only the shape of the histogram is important), you can @emph{normalize} a histogram so that, like the probability density function, the sum of all its bins is one.
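+
+As a rough illustration of this normalization (a minimal Python sketch using NumPy; not Gnuastro's C implementation), the sum of the bins can be made equal to one by dividing by the total count:
+
+@example
+import numpy as np
+
+values = np.random.default_rng(1).normal(size=1000)
+
+# Histogram: counts of points falling within each bin.
+counts, edges = np.histogram(values, bins=20)
+
+# Normalized so the sum of all bins is one (as described above).
+normalized = counts / counts.sum()
+print(normalized.sum())      # -> 1.0
+@end example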
 
 @cindex Cumulative Frequency Plot
-In the cumulative frequency plot of a distribution, the horizontal axis is
-the sorted data values and the y axis is the index of each data in the
-sorted distribution. Unlike a histogram, a cumulative frequency plot does
-not involve intervals or bins. This makes it less prone to any sort of bias
-or error that a given bin-width would have on the analysis. When a larger
-number of the data points have roughly the same value, then the cumulative
-frequency plot will become steep in that vicinity. This occurs because on
-the horizontal axis, there is little change while on the vertical axis, the
-indexes constantly increase. Normalizing a cumulative frequency plot means
-to divide each index (y axis) by the total number of data points (or the
-last value).
-
-Unlike the histogram which has a limited number of bins, ideally the
-cumulative frequency plot should have one point for every data
-element. Even in small datasets (for example a @mymath{200\times200} image)
-this will result in an unreasonably large number of points to plot (40000)!
-As a result, for practical reasons, it is common to only store its value on
-a certain number of points (intervals) in the input range rather than the
-whole dataset, so you should determine the number of bins you want when
-asking for a cumulative frequency plot. In Gnuastro (and thus the
-Statistics program), the number reported for each bin is the total number
-of data points until the larger interval value for that bin. You can see an
-example histogram and cumulative frequency plot of a single dataset under
-the @option{--asciihist} and @option{--asciicfp} options of @ref{Invoking
-aststatistics}.
-
-So as a summary, both the histogram and cumulative frequency plot in
-Statistics will work with bins. Within each bin/interval, the lower value
-is considered to be within then bin (it is inclusive), but its larger value
-is not (it is exclusive). Formally, an interval/bin between a and b is
-represented by [a, b). When the over-all range of the dataset is specified
-(with the @option{--greaterequal}, @option{--lessthan}, or
-@option{--qrange} options), the acceptable values of the dataset are also
-defined with a similar inclusive-exclusive manner. But when the range is
-determined from the actual dataset (none of these options is called), the
-last element in the dataset is included in the last bin's count.
+In the cumulative frequency plot of a distribution, the horizontal axis is the sorted data values and the y axis is the index of each data point in the sorted distribution.
+Unlike a histogram, a cumulative frequency plot does not involve intervals or bins.
+This makes it less prone to any sort of bias or error that a given bin-width would introduce into the analysis.
+When a large number of the data points have roughly the same value, the cumulative frequency plot will become steep in that vicinity.
+This occurs because on the horizontal axis there is little change, while on the vertical axis the indexes constantly increase.
+Normalizing a cumulative frequency plot means to divide each index (y axis) by the total number of data points (or the last value).
+
+Unlike the histogram, which has a limited number of bins, ideally the cumulative frequency plot should have one point for every data element.
+Even in small datasets (for example a @mymath{200\times200} image) this will result in an unreasonably large number of points to plot (40000)!
+As a result, for practical reasons, it is common to only store its value on a certain number of points (intervals) in the input range rather than the whole dataset, so you should determine the number of bins you want when asking for a cumulative frequency plot.
+In Gnuastro (and thus the Statistics program), the number reported for each bin is the total number of data points up to the larger interval value of that bin.
+You can see an example histogram and cumulative frequency plot of a single dataset under the @option{--asciihist} and @option{--asciicfp} options of @ref{Invoking aststatistics}.
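+
+As a minimal Python sketch (with NumPy; not Gnuastro's implementation), the axes of a normalized cumulative frequency plot can be built like this:
+
+@example
+import numpy as np
+
+# Horizontal axis: the sorted data values.
+values = np.sort(np.random.default_rng(1).normal(size=400))
+
+# Vertical axis: the index of each element in the sorted data.
+index = np.arange(1, values.size + 1)
+
+# Normalizing: divide each index by the total number of points.
+cfp = index / values.size
+@end example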
+
+In summary, both the histogram and cumulative frequency plot in Statistics will work with bins.
+Within each bin/interval, the lower value is considered to be within the bin (it is inclusive), but its larger value is not (it is exclusive).
+Formally, an interval/bin between @mymath{a} and @mymath{b} is represented by @mymath{[a, b)}.
+When the overall range of the dataset is specified (with the @option{--greaterequal}, @option{--lessthan}, or @option{--qrange} options), the acceptable values of the dataset are also defined in a similar inclusive-exclusive manner.
+But when the range is determined from the actual dataset (none of these options is called), the last element in the dataset is included in the last bin's count.
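+
+For illustration, when the range is determined from the data itself, NumPy's @code{histogram} happens to follow the same convention (all bins are half-open except the last, which also counts the maximum), as this hypothetical Python snippet shows:
+
+@example
+import numpy as np
+
+values = np.array([1.0, 2.0, 2.5, 3.0])
+
+# Two bins over the data range: [1, 2) and the last bin [2, 3].
+# 2.0 falls in the second bin (lower edge is inclusive) and the
+# maximum, 3.0, is counted in the last bin.
+counts, edges = np.histogram(values, bins=2)
+print(counts)      # -> [1 3]
+@end example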
 
 
@node  Sigma clipping, Sky value, Histogram and Cumulative Frequency Plot, Statistics
 @subsection Sigma clipping
 
-Let's assume that you have pure noise (centered on zero) with a clear
-@url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian
-distribution}, or see @ref{Photon counting noise}. Now let's assume you add
-very bright objects (signal) on the image which have a very sharp
-boundary. By a sharp boundary, we mean that there is a clear cutoff (from
-the noise) at the pixels the objects finish. In other words, at their
-boundaries, the objects do not fade away into the noise. In such a case,
-when you plot the histogram (see @ref{Histogram and Cumulative Frequency
-Plot}) of the distribution, the pixels relating to those objects will be
-clearly separate from pixels that belong to parts of the image that did not
-have any signal (were just noise). In the cumulative frequency plot, after
-a steady rise (due to the noise), you would observe a long flat region were
-for a certain range of data (horizontal axis), there is no increase in the
-index (vertical axis).
+Let's assume that you have pure noise (centered on zero) with a clear @url{https://en.wikipedia.org/wiki/Normal_distribution,Gaussian distribution} (see @ref{Photon counting noise}).
+Now let's assume you add very bright objects (signal) to the image which have a very sharp boundary.
+By a sharp boundary, we mean that there is a clear cutoff (from the noise) at the pixels where the objects end.
+In other words, at their boundaries, the objects do not fade away into the noise.
+In such a case, when you plot the histogram (see @ref{Histogram and Cumulative Frequency Plot}) of the distribution, the pixels relating to those objects will be clearly separate from pixels that belong to parts of the image that did not have any signal (were just noise).
+In the cumulative frequency plot, after a steady rise (due to the noise), you would observe a long flat region where, for a certain range of data (horizontal axis), there is no increase in the index (vertical axis).
 
 @cindex Blurring
 @cindex Cosmic rays
 @cindex Aperture blurring
 @cindex Atmosphere blurring
-Outliers like the example above can significantly bias the measurement of
-noise statistics. @mymath{\sigma}-clipping is defined as a way to avoid the
-effect of such outliers. In astronomical applications, cosmic rays (when
-they collide at a near normal incidence angle) are a very good example of
-such outliers. The tracks they leave behind in the image are perfectly
-immune to the blurring caused by the atmosphere and the aperture. They are
-also very energetic and so their borders are usually clearly separated from
-the surrounding noise. So @mymath{\sigma}-clipping is very useful in
-removing their effect on the data. See Figure 15 in Akhlaghi and Ichikawa,
-@url{https://arxiv.org/abs/1505.01664,2015}.
-
-@mymath{\sigma}-clipping is defined as the very simple iteration below. In
-each iteration, the range of input data might decrease and so when the
-outliers have the conditions above, the outliers will be removed through
-this iteration. The exit criteria will be discussed below.
+Outliers like the example above can significantly bias the measurement of noise statistics.
+@mymath{\sigma}-clipping is defined as a way to avoid the effect of such outliers.
+In astronomical applications, cosmic rays (when they collide at a near normal incidence angle) are a very good example of such outliers.
+The tracks they leave behind in the image are perfectly immune to the blurring caused by the atmosphere and the aperture.
+They are also very energetic and so their borders are usually clearly separated from the surrounding noise.
+So @mymath{\sigma}-clipping is very useful in removing their effect on the data.
+See Figure 15 in Akhlaghi and Ichikawa, @url{https://arxiv.org/abs/1505.01664,2015}.
+
+@mymath{\sigma}-clipping is defined as the very simple iteration below.
+In each iteration, the range of the input data might decrease, and so when the outliers satisfy the conditions above, they will be removed through this iteration.
+The exit criteria are discussed below.
 
 @enumerate
 @item
@@ -11918,49 +11860,36 @@ Go back to step 1, unless the selected exit criteria is reached.
 @end enumerate
 
 @noindent
-The reason the median is used as a reference and not the mean is that the
-mean is too significantly affected by the presence of outliers, while the
-median is less affected, see @ref{Quantifying signal in a tile}. As you can
-tell from this algorithm, besides the condition above (that the signal have
-clear high signal to noise boundaries) @mymath{\sigma}-clipping is only
-useful when the signal does not cover more than half of the full data
-set. If they do, then the median will lie over the outliers and
-@mymath{\sigma}-clipping might remove the pixels with no signal.
+The reason the median is used as a reference and not the mean is that the mean is too significantly affected by the presence of outliers, while the median is less affected, see @ref{Quantifying signal in a tile}.
+As you can tell from this algorithm, besides the condition above (that the signal has clear, high signal-to-noise boundaries), @mymath{\sigma}-clipping is only useful when the signal does not cover more than half of the full data set.
+If it does, then the median will lie over the outliers and @mymath{\sigma}-clipping might remove the pixels with no signal.
 
 There are commonly two exit criteria to stop the @mymath{\sigma}-clipping
 iteration:
 
 @itemize
 @item
-When a certain number of iterations has taken place (second value to the
-@option{--sclipparams} option is larger than 1).
+When a certain number of iterations has taken place (second value to the @option{--sclipparams} option is larger than 1).
 @item
-When the new measured standard deviation is within a certain tolerance
-level of the old one (second value to the @option{--sclipparams} option is
-less than 1). The tolerance level is defined by:
+When the new measured standard deviation is within a certain tolerance level of the old one (second value to the @option{--sclipparams} option is less than 1).
+The tolerance level is defined by:
 
 @dispmath{\sigma_{old}-\sigma_{new} \over \sigma_{new}}
 
-The standard deviation is used because it is heavily influenced by the
-presence of outliers. Therefore the fact that it stops changing between two
-iterations is a sign that we have successfully removed outliers. Note that
-in each clipping, the dispersion in the distribution is either less or
-equal. So @mymath{\sigma_{old}\geq\sigma_{new}}.
+The standard deviation is used because it is heavily influenced by the presence of outliers.
+Therefore the fact that it stops changing between two iterations is a sign that we have successfully removed outliers.
+Note that in each clipping, the dispersion in the distribution either decreases or stays the same.
+So @mymath{\sigma_{old}\geq\sigma_{new}}.
 @end itemize
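+
+As a rough illustration (a minimal Python sketch with NumPy, not Gnuastro's C implementation), the iteration and both exit criteria can be written as:
+
+@example
+import numpy as np
+
+def sigma_clip(values, multiple=3.0, param=0.2):
+    # param > 1: fixed number of iterations; param < 1: tolerance
+    # on the standard deviation (the two exit criteria above).
+    values = values[np.isfinite(values)]
+    std_old = np.inf
+    iteration = 0
+    while True:
+        median = np.median(values)
+        std = np.std(values)
+        iteration += 1
+        if param >= 1 and iteration > param:
+            break
+        if param < 1 and (std_old - std) / std < param:
+            break
+        std_old = std
+        # Clip: keep only points within 'multiple' sigma of median.
+        values = values[np.abs(values - median) < multiple * std]
+    return values, median, std
+@end example
+
+For example, @code{sigma_clip(pixels, 3, 0.2)} would clip at @mymath{3\sigma} until the relative change in @mymath{\sigma} drops below 0.2.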
 
 @cartouche
 @noindent
-When working on astronomical images, objects like galaxies and stars are
-blurred by the atmosphere and the telescope aperture, therefore their
-signal sinks into the noise very gradually. Galaxies in particular do not
-appear to have a clear high signal to noise cutoff at all. Therefore
-@mymath{\sigma}-clipping will not be useful in removing their effect on the
-data.
-
-To gauge if @mymath{\sigma}-clipping will be useful for your dataset, look
-at the histogram (see @ref{Histogram and Cumulative Frequency Plot}). The
-ASCII histogram that is printed on the command-line with
-@option{--asciihist} is good enough in most cases.
+When working on astronomical images, objects like galaxies and stars are blurred by the atmosphere and the telescope aperture; therefore, their signal sinks into the noise very gradually.
+Galaxies in particular do not appear to have a clear high signal-to-noise cutoff at all.
+Therefore @mymath{\sigma}-clipping will not be useful in removing their effect on the data.
+
+To gauge if @mymath{\sigma}-clipping will be useful for your dataset, look at the histogram (see @ref{Histogram and Cumulative Frequency Plot}).
+The ASCII histogram that is printed on the command-line with @option{--asciihist} is good enough in most cases.
 @end cartouche
 
 
@@ -11970,33 +11899,19 @@ ASCII histogram that is printed on the command-line with
 @subsection Sky value
 
 @cindex Sky
-One of the most important aspects of a dataset is its reference value: the
-value of the dataset where there is no signal. Without knowing, and thus
-removing the effect of, this value it is impossible to compare the derived
-results of many high-level analyses over the dataset with other datasets
-(in the attempt to associate our results with the ``real'' world).
-
-In astronomy, this reference value is known as the ``Sky'' value: the value
-that noise fluctuates around: where there is no signal from detectable
-objects or artifacts (for example galaxies, stars, planets or comets, star
-spikes or internal optical ghost). Depending on the dataset, the Sky value
-maybe a fixed value over the whole dataset, or it may vary based on
-location. For an example of the latter case, see Figure 11 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-Because of the significance of the Sky value in astronomical data analysis,
-we have devoted this subsection to it for a thorough review. We start with
-a thorough discussion on its definition (@ref{Sky value definition}). In
-the astronomical literature, researchers use a variety of methods to
-estimate the Sky value, so in @ref{Sky value misconceptions}) we review
-those and discuss their biases. From the definition of the Sky value, the
-most accurate way to estimate the Sky value is to run a detection algorithm
-(for example @ref{NoiseChisel}) over the dataset and use the undetected
-pixels. However, there is also a more crude method that maybe useful when
-good direct detection is not initially possible (for example due to too
-many cosmic rays in a shallow image). A more crude (but simpler method)
-that is usable in such situations is discussed in @ref{Quantifying signal
-in a tile}.
+One of the most important aspects of a dataset is its reference value: the value of the dataset where there is no signal.
+Without knowing, and thus removing the effect of, this value it is impossible to compare the derived results of many high-level analyses over the dataset with other datasets (in the attempt to associate our results with the ``real'' world).
+
+In astronomy, this reference value is known as the ``Sky'' value: the value that noise fluctuates around: where there is no signal from detectable objects or artifacts (for example galaxies, stars, planets or comets, star spikes or internal optical ghosts).
+Depending on the dataset, the Sky value may be a fixed value over the whole dataset, or it may vary based on location.
+For an example of the latter case, see Figure 11 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+
+Because of the significance of the Sky value in astronomical data analysis, we have devoted this subsection to it for a thorough review.
+We start with a thorough discussion on its definition (@ref{Sky value definition}).
+In the astronomical literature, researchers use a variety of methods to estimate the Sky value, so in @ref{Sky value misconceptions} we review those and discuss their biases.
+From the definition of the Sky value, the most accurate way to estimate the Sky value is to run a detection algorithm (for example @ref{NoiseChisel}) over the dataset and use the undetected pixels.
+However, there is also a cruder method that may be useful when good direct detection is not initially possible (for example due to too many cosmic rays in a shallow image).
+This cruder (but simpler) method, usable in such situations, is discussed in @ref{Quantifying signal in a tile}.
 
 @menu
 * Sky value definition::        Definition of the Sky/reference value.
@@ -12008,17 +11923,10 @@ in a tile}.
 @subsubsection Sky value definition
 
 @cindex Sky value
-This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa (2015)}. Let's assume that all instrument defects -- bias,
-dark and flat -- have been corrected and the brightness (see @ref{Flux
-Brightness and magnitude}) of a detected object, @mymath{O}, is
-desired. The sources of flux on pixel@footnote{For this analysis the
-dimension of the data (image) is irrelevant. So if the data is an image
-(2D) with width of @mymath{w} pixels, then a pixel located on column
-@mymath{x} and row @mymath{y} (where all counting starts from zero and (0,
-0) is located on the bottom left corner of the image), would have an index:
-@mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as
-follows:
+This analysis is taken from @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Let's assume that all instrument defects -- bias, dark and flat -- have been corrected and the brightness (see @ref{Flux Brightness and magnitude}) of a detected object, @mymath{O}, is desired.
+The sources of flux on pixel@footnote{For this analysis the dimension of the data (image) is irrelevant.
+So if the data is an image (2D) with width of @mymath{w} pixels, then a pixel located on column @mymath{x} and row @mymath{y} (where all counting starts from zero and (0, 0) is located on the bottom left corner of the image) would have the index @mymath{i=x+y\times{}w}.} @mymath{i} of the image can be written as follows:
 
 @itemize
 @item
@@ -12026,15 +11934,13 @@ Contribution from the target object (@mymath{O_i}).
 @item
 Contribution from other detected objects (@mymath{D_i}).
 @item
-Undetected objects or the fainter undetected regions of bright
-objects (@mymath{U_i}).
+Undetected objects or the fainter undetected regions of bright objects (@mymath{U_i}).
 @item
 @cindex Cosmic rays
 A cosmic ray (@mymath{C_i}).
 @item
 @cindex Background flux
-The background flux, which is defined to be the count if none of the
-others exists on that pixel (@mymath{B_i}).
+The background flux, which is defined to be the count if none of the others exists on that pixel (@mymath{B_i}).
 @end itemize
 @noindent
 The total flux in this pixel (@mymath{T_i}) can thus be written as:
@@ -12043,51 +11949,34 @@ The total flux in this pixel (@mymath{T_i}) can thus be written as:
 
 @cindex Cosmic ray removal
 @noindent
-By definition, @mymath{D_i} is detected and it can be assumed that it is
-correctly estimated (deblended) and subtracted, we can thus set
-@mymath{D_i=0}. There are also methods to detect and remove cosmic rays,
-for example the method described in van Dokkum (2001)@footnote{van Dokkum,
-P. G. (2001). Publications of the Astronomical Society of the Pacific.
-113, 1420.}, or by comparing multiple exposures. This allows us to set
-@mymath{C_i=0}. Note that in practice, @mymath{D_i} and @mymath{U_i} are
-correlated, because they both directly depend on the detection algorithm
-and its input parameters. Also note that no detection or cosmic ray removal
-algorithm is perfect. With these limitations in mind, the observed Sky
-value for this pixel (@mymath{S_i}) can be defined as
+By definition, @mymath{D_i} is detected, and it can be assumed that it is correctly estimated (deblended) and subtracted; we can thus set @mymath{D_i=0}.
+There are also methods to detect and remove cosmic rays, for example the method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001). Publications of the Astronomical Society of the Pacific, 113, 1420.}, or by comparing multiple exposures.
+This allows us to set @mymath{C_i=0}.
+Note that in practice, @mymath{D_i} and @mymath{U_i} are correlated, because they both directly depend on the detection algorithm and its input parameters.
+Also note that no detection or cosmic ray removal algorithm is perfect.
+With these limitations in mind, the observed Sky value for this pixel (@mymath{S_i}) can be defined as
 
 @cindex Sky value
 @dispmath{S_i\equiv{}B_i+U_i.}
 
 @noindent
-Therefore, as the detection process (algorithm and input parameters)
-becomes more accurate, or @mymath{U_i\to0}, the Sky value will tend to the
-background value or @mymath{S_i\to B_i}. Hence, we see that while
-@mymath{B_i} is an inherent property of the data (pixel in an image),
-@mymath{S_i} depends on the detection process. Over a group of pixels, for
-example in an image or part of an image, this equation translates to the
-average of undetected pixels (Sky@mymath{=\sum{S_i}}).  With this
-definition of Sky, the object flux in the data can be calculated, per
-pixel, with
+Therefore, as the detection process (algorithm and input parameters) becomes more accurate, or @mymath{U_i\to0}, the Sky value will tend to the background value or @mymath{S_i\to B_i}.
+Hence, we see that while @mymath{B_i} is an inherent property of the data (pixel in an image), @mymath{S_i} depends on the detection process.
+Over a group of pixels, for example in an image or part of an image, this equation translates to the average of undetected pixels (Sky@mymath{=\sum{S_i}}).
+With this definition of Sky, the object flux in the data can be calculated, per pixel, with
 
 @dispmath{ T_{i}=S_{i}+O_{i} \quad\rightarrow\quad
            O_{i}=T_{i}-S_{i}.}
 
 @cindex photo-electrons
+In the fainter outskirts of an object, a very small fraction of the photo-electrons in a pixel actually belongs to the object; the rest is caused by random factors (noise), see Figure 1b in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Therefore even a small over-estimation of the Sky value will result in the loss of a very large portion of most galaxies.
+Besides the lost area/brightness, this will also cause an over-estimation of the Sky value and thus even more under-estimation of the object's brightness.
+It is thus very important to detect the diffuse flux of a target, even if it is not your primary target.
+
+In summary, the more accurately the Sky is measured, the more accurately the brightness (sum of pixel values) of the target object can be measured (photometry).
+Any under/over-estimation in the Sky will directly translate to an over/under-estimation of the measured object's brightness.
 
 @cartouche
 @noindent
@@ -12101,24 +11990,15 @@ objects (@mymath{D_i} and @mymath{C_i}) have been removed from the data.
@node Sky value misconceptions, Quantifying signal in a tile, Sky value definition, Sky value
 @subsubsection Sky value misconceptions
 
-As defined in @ref{Sky value}, the sky value is only accurately defined
-when the detection algorithm is not significantly reliant on the sky
-value. In particular its detection threshold. However, most signal-based
-detection tools@footnote{According to Akhlaghi and Ichikawa (2015),
-signal-based detection is a detection process that relies heavily on
-assumptions about the to-be-detected objects. This method was the most
-heavily used technique prior to the introduction of NoiseChisel in that
-paper.} use the sky value as a reference to define the detection
-threshold. These older techniques therefore had to rely on approximations
-based on other assumptions about the data. A review of those other
-techniques can be seen in Appendix A of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-These methods were extensively used in astronomical data analysis for
-several decades, therefore they have given rise to a lot of misconceptions,
-ambiguities and disagreements about the sky value and how to measure it. As
-a summary, the major methods used until now were an approximation of the
-mode of the image pixel distribution and @mymath{\sigma}-clipping.
+As defined in @ref{Sky value}, the sky value is only accurately defined when the detection algorithm is not significantly reliant on the sky value (in particular, its detection threshold).
+However, most signal-based detection tools@footnote{According to Akhlaghi and Ichikawa (2015), signal-based detection is a detection process that relies heavily on assumptions about the to-be-detected objects.
+This method was the most heavily used technique prior to the introduction of NoiseChisel in that paper.} use the sky value as a reference to define the detection threshold.
+These older techniques therefore had to rely on approximations based on other assumptions about the data.
+A review of those other techniques can be seen in Appendix A of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+
+These methods were extensively used in astronomical data analysis for several decades; therefore, they have given rise to a lot of misconceptions, ambiguities and disagreements about the sky value and how to measure it.
+As a summary, the major methods used until now were an approximation of the mode of the image pixel distribution and @mymath{\sigma}-clipping.
 
 @itemize
 @cindex Histogram
@@ -12126,30 +12006,22 @@ mode of the image pixel distribution and @mymath{\sigma}-clipping.
 @cindex Mode of a distribution
 @cindex Probability density function
 @item
-To find the mode of a distribution those methods would either have to
-assume (or find) a certain probability density function (PDF) or use the
-histogram. But astronomical datasets can have any distribution, making it
-almost impossible to define a generic function. Also, histogram-based
-results are very inaccurate (there is a large dispersion) and it depends on
-the histogram bin-widths. Generally, the mode of a distribution also shifts
-as signal is added. Therefore, even if it is accurately measured, the mode
-is a biased measure for the Sky value.
+To find the mode of a distribution, those methods would either have to assume (or find) a certain probability density function (PDF) or use the histogram.
+But astronomical datasets can have any distribution, making it almost impossible to define a generic function.
+Also, histogram-based results are very inaccurate (there is a large dispersion) and they depend on the histogram bin-width.
+Generally, the mode of a distribution also shifts as signal is added.
+Therefore, even if it is accurately measured, the mode is a biased measure of the Sky value.
 
 @cindex Sigma-clipping
 @item
-Another approach was to iteratively clip the brightest pixels in the image
-(which is known as @mymath{\sigma}-clipping). See @ref{Sigma clipping} for
-a complete explanation. @mymath{\sigma}-clipping is useful when there are
-clear outliers (an object with a sharp edge in an image for
-example). However, real astronomical objects have diffuse and faint wings
-that penetrate deeply into the noise, see Figure 1 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Another approach was to iteratively clip the brightest pixels in the image (which is known as @mymath{\sigma}-clipping).
+See @ref{Sigma clipping} for a complete explanation.
+@mymath{\sigma}-clipping is useful when there are clear outliers (an object with a sharp edge in an image, for example).
+However, real astronomical objects have diffuse and faint wings that penetrate deeply into the noise; see Figure 1 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
 @end itemize
 
-As discussed in @ref{Sky value}, the sky value can only be correctly
-defined as the average of undetected pixels. Therefore all such
-approaches that try to approximate the sky value prior to detection
-are ultimately poor approximations.
+As discussed in @ref{Sky value}, the sky value can only be correctly defined as the average of undetected pixels.
+Therefore all such approaches that try to approximate the sky value prior to detection are ultimately poor approximations.
 
 
 
@@ -12160,136 +12032,81 @@ are ultimately poor approximations.
 @cindex Noise
 @cindex Signal
 @cindex Gaussian distribution
-Put simply, noise can be characterized with a certain spread about the
-measured value. In the Gaussian distribution (most commonly used to model
-noise) the spread is defined by the standard deviation about the
-characteristic mean.
-
-Let's start by clarifying some definitions first: @emph{Data} is defined as
-the combination of signal and noise (so a noisy image is one
-@emph{data}set). @emph{Signal} is defined as the mean of the noise on each
-element. We'll also assume that the @emph{background} (see @ref{Sky value
-definition}) is subtracted and is zero.
-
-When a data set doesn't have any signal (only noise), the mean, median and
-mode of the distribution are equal within statistical errors and
-approximately equal to the background value.  Signal always has a positive
-value and will never become negative, see Figure 1 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-(2015)}. Therefore, as more signal is added, the mean, median and mode of
-the dataset shift to the positive. The mean's shift is the largest. The
-median shifts less, since it is defined based on an ordered distribution
-and so is not affected by a small number of outliers. The distribution's
-mode shifts the least to the positive.
+Put simply, noise can be characterized with a certain spread about the measured value.
+In the Gaussian distribution (most commonly used to model noise), the spread is defined by the standard deviation about the characteristic mean.
+
+Let's start by clarifying some definitions first: @emph{Data} is defined as the combination of signal and noise (so a noisy image is one @emph{data}set).
+@emph{Signal} is defined as the mean of the noise on each element.
+We'll also assume that the @emph{background} (see @ref{Sky value definition}) is subtracted and is zero.
+
+When a dataset doesn't have any signal (only noise), the mean, median and mode of the distribution are equal within statistical errors and approximately equal to the background value.
+Signal always has a positive value and will never become negative, see Figure 1 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+Therefore, as more signal is added, the mean, median and mode of the dataset shift to the positive.
+The mean's shift is the largest.
+The median shifts less, since it is defined based on an ordered distribution and so is not affected by a small number of outliers.
+The distribution's mode shifts the least to the positive.
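+
+This shift can be demonstrated numerically with a short, hypothetical Python/NumPy sketch (not part of Gnuastro):
+
+@example
+import numpy as np
+
+rng = np.random.default_rng(1)
+noise = rng.normal(size=100000)        # pure, zero-centered noise
+
+data = noise.copy()
+data[:5000] += rng.exponential(5.0, size=5000)  # positive signal
+
+print(np.mean(noise), np.median(noise))  # both approximately 0
+print(np.mean(data), np.median(data))    # the mean shifts much more
+@end example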
 
 @cindex Mode
 @cindex Median
-Inverting the argument above gives us a robust method to quantify the
-significance of signal in a dataset. Namely, when the mean and median of a
-distribution are approximately equal, or the mean's quantile is around 0.5,
-we can argue that there is no significant signal.
-
-To allow for gradients (which are commonly present in ground-based images),
-we can consider the image to be made of a grid of tiles (see
-@ref{Tessellation}@footnote{The options to customize the tessellation are
-discussed in @ref{Processing options}.}). Hence, from the difference of the
-mean and median on each tile, we can estimate the significance of signal in
-it. The median of a distribution is defined to be the value of the
-distribution's middle point after sorting (or 0.5 quantile). Thus, to
-estimate the presence of signal, we'll compare with the quantile of the
-mean with 0.5. If the absolute difference in a tile is larger than the
-value given to the @option{--meanmedqdiff} option, that tile will be
-ignored. You can read this option as ``mean-median-quantile-difference''.
-
-The raw dataset's distribution is noisy, so using the argument above on the
-raw input will give a noisy result. To decrease the noise/error in
-estimating the mode, we will use convolution (see @ref{Convolution
-process}). Convolution decreases the range of the dataset and enhances its
-skewness, See Section 3.1.1 and Figure 4 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}. This
-enhanced skewness can be interpreted as an increase in the Signal to noise
-ratio of the objects buried in the noise. Therefore, to obtain an even
-better measure of the presence of signal in a tile, the mean and median
-discussed above are measured on the convolved image.
-
-Through the difference of the mean and median we have actually `detected'
-data in the distribution. However this ``detection'' was only based on the
-total distribution of the data in each tile (a much lower resolution). This
-is the main limitation of this technique. The best approach is thus to do
-detection over the dataset, mask all the detected pixels and use the
-undetected regions to estimate the sky and its standard deviation (possibly
-over a tessellation). This is how NoiseChisel works: it uses the argument
-above to find tiles that are used to find its thresholds. Several
-higher-level steps are done on the thresholded pixels to define the
-higher-level detections (see @ref{NoiseChisel}).
+Inverting the argument above gives us a robust method to quantify the significance of signal in a dataset.
+Namely, when the mean and median of a distribution are approximately equal, or the mean's quantile is around 0.5, we can argue that there is no significant signal.
+
+To allow for gradients (which are commonly present in ground-based images), we can consider the image to be made of a grid of tiles (see @ref{Tessellation}@footnote{The options to customize the tessellation are discussed in @ref{Processing options}.}).
+Hence, from the difference of the mean and median on each tile, we can estimate the significance of signal in it.
+The median of a distribution is defined to be the value of the distribution's middle point after sorting (or 0.5 quantile).
+Thus, to estimate the presence of signal, we'll compare the quantile of the mean with 0.5.
+If the absolute difference in a tile is larger than the value given to the @option{--meanmedqdiff} option, that tile will be ignored.
+You can read this option as ``mean-median-quantile-difference''.
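+
+As a minimal Python sketch of this test (with NumPy; the threshold value below is only a placeholder, not Gnuastro's default), the mean's quantile in a tile can be computed as:
+
+@example
+import numpy as np
+
+def mean_quantile(tile):
+    # Quantile of the mean: fraction of elements below the mean.
+    tile = tile[np.isfinite(tile)]
+    return np.sum(tile < np.mean(tile)) / tile.size
+
+def tile_is_signal_free(tile, meanmedqdiff=0.01):
+    # Keep the tile only when the mean's quantile is close to
+    # 0.5 (the median's quantile).
+    return abs(mean_quantile(tile) - 0.5) < meanmedqdiff
+@end example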
+
+The raw dataset's distribution is noisy, so using the argument above on the raw input will give a noisy result.
+To decrease the noise/error in estimating the mode, we will use convolution (see @ref{Convolution process}).
+Convolution decreases the range of the dataset and enhances its skewness; see Section 3.1.1 and Figure 4 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+This enhanced skewness can be interpreted as an increase in the signal-to-noise ratio of the objects buried in the noise.
+Therefore, to obtain an even better measure of the presence of signal in a tile, the mean and median discussed above are measured on the convolved image.
+
+Through the difference of the mean and median we have actually `detected' data in the distribution.
+However, this ``detection'' was only based on the total distribution of the data in each tile (a much lower resolution).
+This is the main limitation of this technique.
+The best approach is thus to do detection over the dataset, mask all the detected pixels and use the undetected regions to estimate the sky and its standard deviation (possibly over a tessellation).
+This is how NoiseChisel works: it uses the argument above to find tiles that are used to find its thresholds.
+Several higher-level steps are done on the thresholded pixels to define the higher-level detections (see @ref{NoiseChisel}).
 
 @cindex Cosmic rays
-There is one final hurdle: raw astronomical datasets are commonly peppered
-with Cosmic rays. Images of Cosmic rays aren't smoothed by the atmosphere
-or telescope aperture, so they have sharp boundaries. Also, since they
-don't occupy too many pixels, they don't affect the mode and median
-calculation. But their very high values can greatly bias the calculation of
-the mean (recall how the mean shifts the fastest in the presence of
-outliers), for example see Figure 15 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
-
-The effect of outliers like cosmic rays on the mean and standard deviation
-can be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping}
-for a complete explanation. Therefore, after asserting that the mode and
-median are approximately equal in a tile (see @ref{Tessellation}), the
-final Sky value and its standard deviation are determined after
-@mymath{\sigma}-clipping with the @option{--sigmaclip} option.
-
-In the end, some of the tiles will pass the mean and median quantile
-difference test. However, prior to interpolating over the failed tiles,
-another point should be considered: large and extended galaxies, or bright
-stars, have wings which sink into the noise very gradually. In some cases,
-the gradient over these wings can be on scales that is larger than the
-tiles. The mean-median distance test will pass on such tiles and will cause
-a strong peak in the interpolated tile grid, see @ref{Detecting large
-extended targets}.
-
-The tiles that exist over the wings of large galaxies or bright stars are
-outliers in the distribution of tiles that passed the mean-median quantile
-distance test. Therefore, the final step of ``quantifying signal in a
-tile'' is to look at this distribution and remove the
-outliers. @mymath{\sigma}-clipping is a good solution for removing a few
-outliers, but the problem with outliers of this kind is that there may be
-many such tiles (depending on the large/bright stars/galaxies in the
-image). Therefore a novel outlier rejection algorithm will be used.
+There is one final hurdle: raw astronomical datasets are commonly peppered with cosmic rays.
+Images of cosmic rays aren't smoothed by the atmosphere or telescope aperture, so they have sharp boundaries.
+Also, since they don't occupy too many pixels, they don't affect the mode and median calculation.
+But their very high values can greatly bias the calculation of the mean (recall how the mean shifts the fastest in the presence of outliers); for example, see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
+
+The effect of outliers like cosmic rays on the mean and standard deviation can be removed through @mymath{\sigma}-clipping; see @ref{Sigma clipping} for a complete explanation.
+Therefore, after asserting that the mode and median are approximately equal in a tile (see @ref{Tessellation}), the final Sky value and its standard deviation are determined after @mymath{\sigma}-clipping with the @option{--sigmaclip} option.
+
+In the end, some of the tiles will pass the mean and median quantile difference test.
+However, prior to interpolating over the failed tiles, another point should be considered: large and extended galaxies, or bright stars, have wings which sink into the noise very gradually.
+In some cases, the gradient over these wings can be on scales that are larger than the tiles.
+The mean-median distance test will pass on such tiles and will cause a strong peak in the interpolated tile grid, see @ref{Detecting large extended targets}.
+
+The tiles that exist over the wings of large galaxies or bright stars are outliers in the distribution of tiles that passed the mean-median quantile distance test.
+Therefore, the final step of ``quantifying signal in a tile'' is to look at this distribution and remove the outliers.
+@mymath{\sigma}-clipping is a good solution for removing a few outliers, but the problem with outliers of this kind is that there may be many such tiles (depending on the large/bright stars/galaxies in the image).
+Therefore a novel outlier rejection algorithm will be used.
 
 @cindex Outliers
 @cindex Identifying outliers
-To identify the first outlier, we'll use the distribution of distances
-between sorted elements. If there are @mymath{N} successful tiles, for
-every tile, the distance between the adjacent @mymath{N/2} previous
-elements is found, giving a distribution containing @mymath{N/2-1}
-points. The @mymath{\sigma}-clipped median and standard deviation of this
-distribution is then found (@mymath{\sigma}-clipping is configured with
-@option{--outliersclip}). Finally, if the distance between the element and
-its previous element is more than @option{--outliersigma} multiples of the
-@mymath{\sigma}-clipped standard deviation added with the
-@mymath{\sigma}-clipped median, that element is considered an outlier and
-all tiles larger than that value are ignored.
-
-Formally, if we assume there are @mymath{N} elements. They are first
-sorted. Searching for the outlier starts on element @mymath{N/2} (integer
-division). Let's take @mymath{v_i} to be the @mymath{i}-th element of the
-sorted input (with no blank values) and @mymath{m} and @mymath{\sigma} as
-the @mymath{\sigma}-clipped median and standard deviation from the
-distances of the previous @mymath{N/2-1} elements (not including
-@mymath{v_i}). If the value given to @option{--outliersigma} is displayed
-with @mymath{s}, the @mymath{i}-th element is considered as an outlier when
-the condition below is true.
+To identify the first outlier, we'll use the distribution of distances between sorted elements.
+If there are @mymath{N} successful tiles, then for every tile, the distances between the adjacent @mymath{N/2} previous elements are found, giving a distribution containing @mymath{N/2-1} points.
+The @mymath{\sigma}-clipped median and standard deviation of this distribution are then found (@mymath{\sigma}-clipping is configured with @option{--outliersclip}).
+Finally, if the distance between the element and its previous element is more than @option{--outliersigma} multiples of the @mymath{\sigma}-clipped standard deviation added to the @mymath{\sigma}-clipped median, that element is considered an outlier and all tiles larger than that value are ignored.
+
+Formally, assume there are @mymath{N} elements.
+They are first sorted.
+Searching for the outlier starts on element @mymath{N/2} (integer division).
+Let's take @mymath{v_i} to be the @mymath{i}-th element of the sorted input (with no blank values) and @mymath{m} and @mymath{\sigma} as the @mymath{\sigma}-clipped median and standard deviation from the distances of the previous @mymath{N/2-1} elements (not including @mymath{v_i}).
+If the value given to @option{--outliersigma} is denoted by @mymath{s}, the @mymath{i}-th element is considered an outlier when the condition below is true.
 
 @dispmath{{(v_i-v_{i-1})-m\over \sigma}>s}
 
-Since @mymath{i} begins from the median, the outlier has to be larger than
-the median. You can use the check images (for example @option{--checksky}
-in the Statistics program or @option{--checkqthresh},
-@option{--checkdetsky} and @option{--checksky} options in NoiseChisel for
-any of its steps that uses this outlier rejection) to inspect the steps and
-see which tiles have been discarded as outliers prior to interpolation.
+Since @mymath{i} begins from the median, the outlier has to be larger than the median.
+You can use the check images (for example @option{--checksky} in the Statistics program, or the @option{--checkqthresh}, @option{--checkdetsky} and @option{--checksky} options in NoiseChisel for any of its steps that use this outlier rejection) to inspect the steps and see which tiles have been discarded as outliers prior to interpolation.
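+
+Re-using the @code{sigma_clip} sketch from @ref{Sigma clipping}, the full search can be sketched in Python as follows (again, an illustrative sketch with a placeholder threshold, not Gnuastro's implementation):
+
+@example
+import numpy as np
+
+def first_outlier(tilevalues, s=10.0):
+    # 's' plays the role of --outliersigma; sigma_clip is the
+    # earlier sketch, configured as with --outliersclip.
+    v = np.sort(tilevalues[np.isfinite(tilevalues)])
+    n = v.size
+    for i in range(n // 2, n):
+        # Distances between the N/2 elements before v[i].
+        dist = np.diff(v[i - n // 2:i])
+        clipped, m, sig = sigma_clip(dist)
+        if sig > 0 and ((v[i] - v[i - 1]) - m) / sig > s:
+            return i   # v[i] and all larger values are outliers
+    return None
+@end example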
 
 
 
@@ -12304,9 +12121,8 @@ see which tiles have been discarded as outliers prior to interpolation.
 @node Invoking aststatistics,  , Sky value, Statistics
 @subsection Invoking Statistics
 
-Statistics will print statistical measures of an input dataset (table
-column or image). The executable name is @file{aststatistics} with the
-following general template
+Statistics will print statistical measures of an input dataset (table column or image).
+The executable name is @file{aststatistics} with the following general template
 
 @example
 $ aststatistics [OPTION ...] InputImage.fits
@@ -12343,94 +12159,66 @@ $ awk '($1+$2)/2 > 5 @{print $3@}' table.txt | aststatistics --median
 
 @noindent
 @cindex Standard input
-Statistics can take its input dataset either from a file (image or table)
-or the Standard input (see @ref{Standard input}). If any output file is to
-be created, the value to the @option{--output} option, is used as the base
-name for the generated files. Without @option{--output}, the input name
-will be used to generate an output name, see @ref{Automatic output}. The
-options described below are particular to Statistics, but for general
-operations, it shares a large collection of options with the other Gnuastro
-programs, see @ref{Common options} for the full list. For more on reading
-from standard input, please see the description of @code{--stdintimeout}
-option in @ref{Input output options}. Options can also be given in
-configuration files, for more, please see @ref{Configuration files}.
-
-The input dataset may have blank values (see @ref{Blank pixels}), in this
-case, all blank pixels are ignored during the calculation. Initially, the
-full dataset will be read, but it is possible to select a specific range of
-data elements to use in the analysis of each run. You can either directly
-specify a minimum and maximum value for the range of data elements to use
-(with @option{--greaterequal} or @option{--lessthan}), or specify the range
-using quantiles (with @option{--qrange}). If a range is specified, all
-pixels outside of it are ignored before any processing.
-
-The following set of options are for specifying the input/outputs of
-Statistics. There are many other input/output options that are common to
-all Gnuastro programs including Statistics, see @ref{Input output options}
-for those.
+Statistics can take its input dataset either from a file (image or table) or 
the Standard input (see @ref{Standard input}).
+If any output file is to be created, the value to the @option{--output} 
option, is used as the base name for the generated files.
+Without @option{--output}, the input name will be used to generate an output 
name, see @ref{Automatic output}.
+The options described below are particular to Statistics, but for general 
operations, it shares a large collection of options with the other Gnuastro 
programs, see @ref{Common options} for the full list.
+For more on reading from standard input, please see the description of 
@code{--stdintimeout} option in @ref{Input output options}.
+Options can also be given in configuration files, for more, please see 
@ref{Configuration files}.
+
+The input dataset may have blank values (see @ref{Blank pixels}), in this 
case, all blank pixels are ignored during the calculation.
+Initially, the full dataset will be read, but it is possible to select a 
specific range of data elements to use in the analysis of each run.
+You can either directly specify a minimum and maximum value for the range of 
data elements to use (with @option{--greaterequal} or @option{--lessthan}), or 
specify the range using quantiles (with @option{--qrange}).
+If a range is specified, all pixels outside of it are ignored before any 
processing.
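+
+For example, with a hypothetical image called @file{image.fits} (the file name 
here is only for illustration), the command below would only use pixels with 
values in the interval @mymath{[0, 100)} for its measurements:
+
+@example
+$ aststatistics image.fits --greaterequal=0 --lessthan=100
+@end example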
+
+The following set of options is for specifying the input/outputs of 
Statistics.
+There are many other input/output options that are common to all Gnuastro 
programs including Statistics, see @ref{Input output options} for those.
 
 @table @option
 
 @item -c STR/INT
 @itemx --column=STR/INT
-The column to use when the input file is a table with more than one
-column. See @ref{Selecting table columns} for a full description of how to
-use this option. For more on how tables are read in Gnuastro, please see
-@ref{Tables}.
+The column to use when the input file is a table with more than one column.
+See @ref{Selecting table columns} for a full description of how to use this 
option.
+For more on how tables are read in Gnuastro, please see @ref{Tables}.
 
 @item -r STR/INT
 @itemx --refcol=STR/INT
-The reference column selector when the input file is a table. When a
-reference column is given, the range options below will be applied to this
-column and only elements in the input column that have a reference value in
-the correct range will be used. In practice this option allows you to
-select a subset of the input column based on values in another (the
-reference) column. All the statistical calculations will be done on the
-selected input column, not the reference column.
+The reference column selector when the input file is a table.
+When a reference column is given, the range options below will be applied to 
this column and only elements in the input column that have a reference value 
in the correct range will be used.
+In practice this option allows you to select a subset of the input column 
based on values in another (the reference) column.
+All the statistical calculations will be done on the selected input column, 
not the reference column.
 
 @item -g FLT
 @itemx --greaterequal=FLT
-Limit the range of inputs into those with values greater and equal to what
-is given to this option. None of the values below this value will be used
-in any of the processing steps below.
+Limit the range of inputs to those with values greater than or equal to what 
is given to this option.
+None of the values below this value will be used in any of the processing 
steps below.
 
 @item -l FLT
 @itemx --lessthan=FLT
-Limit the range of inputs into those with values less-than what is given to
-this option. None of the values greater or equal to this value will be used
-in any of the processing steps below.
+Limit the range of inputs to those with values less than what is given to 
this option.
+None of the values greater than or equal to this value will be used in any of 
the processing steps below.
 
 @item -Q FLT[,FLT]
 @itemx --qrange=FLT[,FLT]
-Specify the range of usable inputs using the quantile. This option can take
-one or two quantiles to specify the range. When only one number is input
-(let's call it @mymath{Q}), the range will be those values in the quantile
-range @mymath{Q} to @mymath{1-Q}. So when only one value is given, it must
-be less than 0.5. When two values are given, the first is used as the lower
-quantile range and the second is used as the larger quantile range.
+Specify the range of usable inputs using the quantile.
+This option can take one or two quantiles to specify the range.
+When only one number is given (let's call it @mymath{Q}), the range will be 
those values in the quantile range @mymath{Q} to @mymath{1-Q}.
+So when only one value is given, it must be less than 0.5.
+When two values are given, the first is used as the lower quantile and the 
second as the upper quantile of the range.
 
 @cindex Quantile
-The quantile of a given element in a dataset is defined by the fraction of
-its index to the total number of values in the sorted input array. So the
-smallest and largest values in the dataset have a quantile of 0.0 and
-1.0. The quantile is a very useful non-parametric (making no assumptions
-about the input) relative measure to specify a range. It can best be
-understood in terms of the cumulative frequency plot, see @ref{Histogram
-and Cumulative Frequency Plot}. The quantile of each horizontal axis value
-in the cumulative frequency plot is the vertical axis value associate with
-it.
+The quantile of a given element in a dataset is defined by the fraction of its 
index to the total number of values in the sorted input array.
+So the smallest and largest values in the dataset have a quantile of 0.0 and 
1.0.
+The quantile is a very useful non-parametric (making no assumptions about the 
input) relative measure to specify a range.
+It can best be understood in terms of the cumulative frequency plot, see 
@ref{Histogram and Cumulative Frequency Plot}.
+The quantile of each horizontal axis value in the cumulative frequency plot is 
the vertical axis value associated with it.
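+
+As an example, a hypothetical call like the one below (the file and column 
names are only for illustration) would discard the lowest and highest 5% of 
the values (quantiles below 0.05 and above 0.95) before printing the general 
statistics:
+
+@example
+$ aststatistics table.fits -cMAG --qrange=0.05
+@end example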
 
 @end table
 
 @cindex ASCII plot
-When no operation is requested, Statistics will print some general basic
-properties of the input dataset on the command-line like the example below
-(ran on one of the output images of @command{make check}@footnote{You can
-try it by running the command in the @file{tests} directory, open the image
-with a FITS viewer and have a look at it to get a sense of how these
-statistics relate to the input image/dataset.}). This default behavior is
-designed to help give you a general feeling of how the data are distributed
-and help in narrowing down your analysis.
+When no operation is requested, Statistics will print some general basic 
properties of the input dataset on the command-line like the example below (run 
on one of the output images of @command{make check}@footnote{You can try it by 
running the command in the @file{tests} directory, open the image with a FITS 
viewer and have a look at it to get a sense of how these statistics relate to 
the input image/dataset.}).
+This default behavior is designed to help give you a general feeling of how 
the data are distributed and help in narrowing down your analysis.
 
 @example
 $ aststatistics convolve_spatial_scaled_noised.fits     \
@@ -12465,29 +12253,19 @@ Histogram:
  |-----------------------------------------------------------------
 @end example
 
-Gnuastro's Statistics is a very general purpose program, so to be able to
-easily understand this diversity in its operations (and how to possibly run
-them together), we'll divided the operations into two types: those that
-don't respect the position of the elements and those that do (by
-tessellating the input on a tile grid, see @ref{Tessellation}). The former
-treat the whole dataset as one and can re-arrange all the elements (for
-example sort them), but the former do their processing on each tile
-independently. First, we'll review the operations that work on the whole
-dataset.
+Gnuastro's Statistics is a very general purpose program, so to be able to 
easily understand this diversity in its operations (and how to possibly run 
them together), we'll divide the operations into two types: those that don't 
respect the position of the elements and those that do (by tessellating the 
input on a tile grid, see @ref{Tessellation}).
+The former treat the whole dataset as one and can re-arrange all the elements 
(for example sort them), but the latter do their processing on each tile 
independently.
+First, we'll review the operations that work on the whole dataset.
 
 @cindex AWK
 @cindex GNU AWK
-The group of options below can be used to get single value measurement(s)
-of the whole dataset. They will print only the requested value as one field
-in a line/row, like the @option{--mean}, @option{--median} options. These
-options can be called any number of times and in any order. The outputs of
-all such options will be printed on one line following each other (with a
-space character between them). This feature makes these options very useful
-in scripts, or to redirect into programs like GNU AWK for higher-level
-processing. These are some of the most basic measures, Gnuastro is still
-under heavy development and this list will grow. If you want another
-statistical parameter, please contact us and we will do out best to add it
-to this list, see @ref{Suggest new feature}.
+The group of options below can be used to get single value measurement(s) of 
the whole dataset.
+They will print only the requested value as one field in a line/row, like the 
@option{--mean} and @option{--median} options.
+These options can be called any number of times and in any order.
+The outputs of all such options will be printed on one line following each 
other (with a space character between them).
+This feature makes these options very useful in scripts, or to redirect into 
programs like GNU AWK for higher-level processing.
+These are some of the most basic measures; Gnuastro is still under heavy 
development and this list will grow.
+If you want another statistical parameter, please contact us and we will do 
our best to add it to this list, see @ref{Suggest new feature}.
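+
+For example, a minimal sketch like the one below (on a hypothetical 
@file{image.fits}) would print the mean and standard deviation as two fields on 
one line, which AWK then uses to print their ratio:
+
+@example
+$ aststatistics image.fits --mean --std | awk '@{print $1/$2@}'
+@end example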
 
 @table @option
 
@@ -12518,62 +12296,46 @@ Print the median of all used elements.
 
 @item -u FLT[,FLT[,...]]
 @itemx --quantile=FLT[,FLT[,...]]
-Print the values at the given quantiles of the input dataset. Any number of
-quantiles may be given and one number will be printed for each. Values can
-either be written as a single number or as fractions, but must be between
-zero and one (inclusive). Hence, in effect @command{--quantile=0.25
---quantile=0.75} is equivalent to @option{--quantile=0.25,3/4}, or
-@option{-u1/4,3/4}.
-
-The returned value is one of the elements from the dataset. Taking
-@mymath{q} to be your desired quantile, and @mymath{N} to be the total
-number of used (non-blank and within the given range) elements, the
-returned value is at the following position in the sorted array:
-@mymath{round(q\times{}N}).
+Print the values at the given quantiles of the input dataset.
+Any number of quantiles may be given and one number will be printed for each.
+Values can either be written as a single number or as fractions, but must be 
between zero and one (inclusive).
+Hence, in effect @command{--quantile=0.25 --quantile=0.75} is equivalent to 
@option{--quantile=0.25,3/4}, or @option{-u1/4,3/4}.
+
+The returned value is one of the elements from the dataset.
+Taking @mymath{q} to be your desired quantile, and @mymath{N} to be the total 
number of used (non-blank and within the given range) elements, the returned 
value is at the following position in the sorted array: 
@mymath{round(q\times{}N)}.
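+
+For example, the command below (on a hypothetical @file{table.fits} with a 
column called @code{COLUMN}) would print the first and third quartiles as two 
space-separated values:
+
+@example
+$ aststatistics table.fits -cCOLUMN --quantile=0.25,0.75
+@end example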
 
 @item --quantfunc=FLT[,FLT[,...]]
-Print the quantiles of the given values in the dataset. This option is the
-inverse of the @option{--quantile} and operates similarly except that the
-acceptable values are within the range of the dataset, not between 0 and
-1. Formally it is known as the ``Quantile function''.
+Print the quantiles of the given values in the dataset.
+This option is the inverse of @option{--quantile} and operates similarly, 
except that the acceptable values are within the range of the dataset, not 
between 0 and 1.
+Formally it is known as the ``Quantile function''.
 
-Since the dataset is not continuous this function will find the nearest
-element of the dataset and use its position to estimate the quantile
-function.
+Since the dataset is not continuous, this function will find the nearest 
element of the dataset and use its position to estimate the quantile function.
 
 @item -O
 @itemx --mode
-Print the mode of all used elements. The mode is found through the mirror
-distribution which is fully described in Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}. See
-that section for a full description.
-
-This mode calculation algorithm is non-parametric, so when the dataset is
-not large enough (larger than about 1000 elements usually), or doesn't have
-a clear mode it can fail. In such cases, this option will return a value of
-@code{nan} (for the floating point NaN value).
-
-As described in that paper, the easiest way to assess the quality of this
-mode calculation method is to use it's symmetricity (see @option{--modesym}
-below). A better way would be to use the @option{--mirror} option to
-generate the histogram and cumulative frequency tables for any given mirror
-value (the mode in this case) as a table. If you generate plots like those
-shown in Figure 21 of that paper, then your mode is accurate.
+Print the mode of all used elements.
+The mode is found through the mirror distribution which is fully described in 
Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015}.
+See that section for a full description.
+
+This mode calculation algorithm is non-parametric, so when the dataset is not 
large enough (it usually needs more than about 1000 elements), or doesn't have 
a clear mode, it can fail.
+In such cases, this option will return a value of @code{nan} (for the floating 
point NaN value).
+
+As described in that paper, the easiest way to assess the quality of this mode 
calculation method is to use its symmetricity (see @option{--modesym} below).
+A better way would be to use the @option{--mirror} option to generate the 
histogram and cumulative frequency tables for any given mirror value (the mode 
in this case) as a table.
+If you generate plots like those shown in Figure 21 of that paper, then your 
mode is accurate.
 
 @item --modequant
-Print the quantile of the mode. You can get the actual mode value from the
-@option{--mode} described above. In many cases, the absolute value of the
-mode is irrelevant, but its position within the distribution is
-important. In such cases, this option will become handy.
+Print the quantile of the mode.
+You can get the actual mode value from the @option{--mode} described above.
+In many cases, the absolute value of the mode is irrelevant, but its position 
within the distribution is important.
+In such cases, this option comes in handy.
 
 @item --modesym
-Print the symmetricity of the calculated mode. See the description of
-@option{--mode} for more. This mode algorithm finds the mode based on how
-symmetric it is, so if the symmetricity returned by this option is too low,
-the mode is not too accurate. See Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} for a
-full description. In practice, symmetricity values larger than 0.2 are
-mostly good.
+Print the symmetricity of the calculated mode.
+See the description of @option{--mode} for more.
+This mode algorithm finds the mode based on how symmetric it is, so if the 
symmetricity returned by this option is too low, the mode is not very accurate.
+See Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015} for a full description.
+In practice, symmetricity values larger than 0.2 are mostly good.
 
 @item --modesymvalue
 Print the value in the distribution where the mirror and input
@@ -12582,24 +12344,17 @@ of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa 2015} for
 more.
 
 @item --sigclip-number
-Number of elements after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Number of elements after applying @mymath{\sigma}-clipping (see @ref{Sigma 
clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sigclipparams} option.
 
 @item --sigclip-median
-Median after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Median after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sigclipparams} option.
 
 @cindex Outlier
-Here is one scenario where this can be useful: assume you have a table and
-you would like to remove the rows that are outliers (not within the
-@mymath{\sigma}-clipping range). Let's assume your table is called
-@file{table.fits} and you only want to keep the rows that have a value in
-@code{COLUMN} within the @mymath{\sigma}-clipped range (to
-@mymath{3\sigma}, with a tolerance of 0.1). This command will return the
-@mymath{\sigma}-clipped median and standard deviation (used to define the
-range later).
+Here is one scenario where this can be useful: assume you have a table and you 
would like to remove the rows that are outliers (not within the 
@mymath{\sigma}-clipping range).
+Let's assume your table is called @file{table.fits} and you only want to keep 
the rows that have a value in @code{COLUMN} within the @mymath{\sigma}-clipped 
range (to @mymath{3\sigma}, with a tolerance of 0.1).
+This command will return the @mymath{\sigma}-clipped median and standard 
deviation (used to define the range later).
 
 @example
 $ aststatistics table.fits -cCOLUMN --sclipparams=3,0.1 \
@@ -12607,18 +12362,12 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=3,0.1 \
 @end example
 
 @cindex GNU AWK
-You can then use the @option{--range} option of Table (see @ref{Table}) to
-select the proper rows. But for that, you need the actual starting and
-ending values of the range (@mymath{m\pm s\sigma}; where @mymath{m} is the
-median and @mymath{s} is the multiple of sigma to define an
-outlier). Therefore, the raw outputs of Statistics in the command above
-aren't enough.
+You can then use the @option{--range} option of Table (see @ref{Table}) to 
select the proper rows.
+But for that, you need the actual starting and ending values of the range 
(@mymath{m\pm s\sigma}; where @mymath{m} is the median and @mymath{s} is the 
multiple of sigma to define an outlier).
+Therefore, the raw outputs of Statistics in the command above aren't enough.
 
-To get the starting and ending values of the non-outlier range (and put a
-`@key{,}' between them, ready to be used in @option{--range}), pipe the
-result into AWK. But in AWK, we'll also need the multiple of
-@mymath{\sigma}, so we'll define it as a shell variable (@code{s}) before
-calling Statistics (note how @code{$s} is used two times now):
+To get the starting and ending values of the non-outlier range (and put a 
`@key{,}' between them, ready to be used in @option{--range}), pipe the result 
into AWK.
+But in AWK, we'll also need the multiple of @mymath{\sigma}, so we'll define 
it as a shell variable (@code{s}) before calling Statistics (note how @code{$s} 
is used two times now):
 
 @example
 $ s=3
@@ -12627,9 +12376,8 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=$s,0.1 \
      | awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}'
 @end example
 
-To pass it onto Table, we'll need to keep the printed output from the
-command above in another shell variable (@code{r}), not print it. In Bash,
-can do this by putting the whole statement within a @code{$()}:
+To pass it on to Table, we'll need to keep the printed output from the command 
above in another shell variable (@code{r}), not print it.
+In Bash, you can do this by putting the whole statement within a @code{$()}:
 
 @example
 $ s=3
@@ -12639,53 +12387,38 @@ $ r=$(aststatistics table.fits -cCOLUMN 
--sclipparams=$s,0.1 \
 $ echo $r      # Just to confirm.
 @end example
 
-Now you can use Table with the @option{--range} option to only print the
-rows that have a value in @code{COLUMN} within the desired range:
+Now you can use Table with the @option{--range} option to only print the rows 
that have a value in @code{COLUMN} within the desired range:
 
 @example
 $ asttable table.fits --range=COLUMN,$r
 @end example
 
-To save the resulting table (that is clean of outliers) in another file
-(for example named @file{cleaned.fits}, it can also have a @file{.txt}
-suffix), just add @option{--output=cleaned.fits} to the command above.
+To save the resulting table (cleaned of outliers) in another file (for example 
named @file{cleaned.fits}; it can also have a @file{.txt} suffix), just add 
@option{--output=cleaned.fits} to the command above.
 
 
 @item --sigclip-mean
-Mean after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Mean after applying @mymath{\sigma}-clipping (see @ref{Sigma clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sigclipparams} option.
 
 @item --sigclip-std
-Standard deviation after applying @mymath{\sigma}-clipping (see @ref{Sigma
-clipping}). @mymath{\sigma}-clipping configuration is done with the
-@option{--sigclipparams} option.
+Standard deviation after applying @mymath{\sigma}-clipping (see @ref{Sigma 
clipping}).
+@mymath{\sigma}-clipping configuration is done with the 
@option{--sigclipparams} option.
 
 @end table
 
-The list of options below are for those statistical operations that output
-more than one value. So while they can be called together in one run, their
-outputs will be distinct (each one's output will usually be printed in more
-than one line).
+The list of options below is for those statistical operations that output 
more than one value.
+So while they can be called together in one run, their outputs will be 
distinct (each one's output will usually be printed in more than one line).
 
 @table @option
 
 @item -A
 @itemx --asciihist
-Print an ASCII histogram of the usable values within the input dataset
-along with some basic information like the example below (from the UVUDF
-catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
 The
-width and height of the histogram (in units of character widths and heights
-on your command-line terminal) can be set with the @option{--numasciibins}
-(for the width) and @option{--asciiheight} options.
-
-For a full description of the histogram, please see @ref{Histogram and
-Cumulative Frequency Plot}. An ASCII plot is certainly very crude and
-cannot be used in any publication, but it is very useful for getting a
-general feeling of the input dataset very fast and easily on the
-command-line without having to take your hands off the keyboard (which is a
-major distraction!). If you want to try it out, you can write it all in one
-line and ignore the @key{\} and extra spaces.
+Print an ASCII histogram of the usable values within the input dataset along 
with some basic information like the example below (from the UVUDF 
catalog@footnote{@url{https://asd.gsfc.nasa.gov/UVUDF/uvudf_rafelski_2015.fits.gz}}).
+The width and height of the histogram (in units of character widths and 
heights on your command-line terminal) can be set with the 
@option{--numasciibins} (for the width) and @option{--asciiheight} options.
+
+For a full description of the histogram, please see @ref{Histogram and 
Cumulative Frequency Plot}.
+An ASCII plot is certainly very crude and cannot be used in any publication, 
but it is very useful for getting a general feeling of the input dataset very 
fast and easily on the command-line without having to take your hands off the 
keyboard (which is a major distraction!).
+If you want to try it out, you can write it all in one line and ignore the 
@key{\} and extra spaces.
 
 @example
 $ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
@@ -12710,11 +12443,9 @@ X: (linear: 17.7735 -- 31.4679, in 55 bins)
 @end example
 
 @item --asciicfp
-Print the cumulative frequency plot of the usable elements in the input
-dataset. Please see descriptions under @option{--asciihist} for more, the
-example below is from the same input table as that example. To better
-understand the cumulative frequency plot, please see @ref{Histogram and
-Cumulative Frequency Plot}.
+Print the cumulative frequency plot of the usable elements in the input 
dataset.
+Please see the descriptions under @option{--asciihist} for more; the example 
below is from the same input table as that example.
+To better understand the cumulative frequency plot, please see @ref{Histogram 
and Cumulative Frequency Plot}.
 
 @example
 $ aststatistics uvudf_rafelski_2015.fits.gz --hdu=1         \
@@ -12739,182 +12470,118 @@ X: (linear: 17.7735 -- 31.4679, in 55 bins)
 
 @item -H
 @itemx --histogram
-Save the histogram of the usable values in the input dataset into a
-table. The first column is the value at the center of the bin and the
-second is the number of points in that bin. If the @option{--cumulative}
-option is also called with this option in a run, then the table will have
-three columns (the third is the cumulative frequency plot). Through the
-@option{--numbins}, @option{--onebinstart}, or @option{--manualbinrange},
-you can modify the first column values and with @option{--normalize} and
-@option{--maxbinone} you can modify the second columns. See below for the
-description of each.
-
-By default (when no @option{--output} is specified) a plain text table will
-be created, see @ref{Gnuastro text table format}. If a FITS name is
-specified, you can use the common option @option{--tableformat} to have it
-as a FITS ASCII or FITS binary format, see @ref{Common options}. This table
-can then be fed into your favorite plotting tool and get a much more clean
-and nice histogram than what the raw command-line can offer you (with the
-@option{--asciihist} option).
+Save the histogram of the usable values in the input dataset into a table.
+The first column is the value at the center of the bin and the second is the 
number of points in that bin.
+If the @option{--cumulative} option is also called with this option in a run, 
then the table will have three columns (the third is the cumulative frequency 
plot).
+Through the @option{--numbins}, @option{--onebinstart}, or 
@option{--manualbinrange}, you can modify the first column's values, and with 
@option{--normalize} and @option{--maxbinone} you can modify the second column.
+See below for the description of each.
+
+By default (when no @option{--output} is specified) a plain text table will be 
created, see @ref{Gnuastro text table format}.
+If a FITS name is specified, you can use the common option 
@option{--tableformat} to have it as a FITS ASCII or FITS binary format, see 
@ref{Common options}.
+This table can then be fed into your favorite plotting tool to get a much 
cleaner and nicer histogram than what the raw command-line can offer you (with 
the @option{--asciihist} option).
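+
+For example, assuming a hypothetical input @file{image.fits}, the first 
command below would save the histogram in a plain text table and the second in 
a FITS table (the output names are only for illustration):
+
+@example
+$ aststatistics image.fits --histogram --output=hist.txt
+$ aststatistics image.fits --histogram --output=hist.fits
+@end example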
 
 @item -C
 @itemx --cumulative
-Save the cumulative frequency plot of the usable values in the input
-dataset into a table, similar to @option{--histogram}.
+Save the cumulative frequency plot of the usable values in the input dataset 
into a table, similar to @option{--histogram}.
 
 @item -s
 @itemx --sigmaclip
-Do @mymath{\sigma}-clipping on the usable pixels of the input dataset. See
-@ref{Sigma clipping} for a full description on @mymath{\sigma}-clipping and
-also to better understand this option. The @mymath{\sigma}-clipping
-parameters can be set through the @option{--sclipparams} option (see
-below).
+Do @mymath{\sigma}-clipping on the usable pixels of the input dataset.
+See @ref{Sigma clipping} for a full description of @mymath{\sigma}-clipping 
and also to better understand this option.
+The @mymath{\sigma}-clipping parameters can be set through the 
@option{--sclipparams} option (see below).
 
 @item --mirror=FLT
-Make a histogram and cumulative frequency plot of the mirror distribution
-for the given dataset when the mirror is located at the value to this
-option. The mirror distribution is fully described in Appendix C of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and
-currently it is only used to calculate the mode (see @option{--mode}).
-
-Just note that the mirror distribution is a discrete distribution like the
-input, so while you may give any number as the value to this option, the
-actual mirror value is the closest number in the input dataset to this
-value. If the two numbers are different, Statistics will warn you of the
-actual mirror value used.
-
-This option will make a table as output. Depending on your selected name
-for the output, it will be either a FITS table or a plain text table (which
-is the default). It contains three columns: the first is the center of the
-bins, the second is the histogram (with the largest value set to 1) and the
-third is the normalized cumulative frequency plot of the mirror
-distribution. The bins will be positioned such that the mode is on the
-starting interval of one of the bins to make it symmetric around the
-mirror. With this output file and the input histogram (that you can
-generate in another run of Statistics, using the @option{--onebinvalue}),
-it is possible to make plots like Figure 21 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}.
+Make a histogram and cumulative frequency plot of the mirror distribution for 
the given dataset when the mirror is located at the value given to this option.
+The mirror distribution is fully described in Appendix C of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015} and 
currently it is only used to calculate the mode (see @option{--mode}).
+
+Just note that the mirror distribution is a discrete distribution like the 
input, so while you may give any number as the value to this option, the actual 
mirror value is the closest number in the input dataset to this value.
+If the two numbers are different, Statistics will warn you of the actual 
mirror value used.
+
+This option will make a table as output.
+Depending on your selected name for the output, it will be either a FITS table 
or a plain text table (which is the default).
+It contains three columns: the first is the center of the bins, the second is 
the histogram (with the largest value set to 1) and the third is the normalized 
cumulative frequency plot of the mirror distribution.
+The bins will be positioned such that the mode is on the starting interval of 
one of the bins to make it symmetric around the mirror.
+With this output file and the input histogram (that you can generate in 
another run of Statistics, using the @option{--onebinstart} option described 
below), it is possible to make plots like Figure 21 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 2015}.
 
 @end table
 
-The list of options below allow customization of the histogram and
-cumulative frequency plots (for the @option{--histogram},
-@option{--cumulative}, @option{--asciihist}, and @option{--asciicfp}
-options).
+The list of options below allows customization of the histogram and cumulative 
frequency plots (for the @option{--histogram}, @option{--cumulative}, 
@option{--asciihist}, and @option{--asciicfp} options).
 
 @table @option
 
 @item --numbins
-The number of bins (rows) to use in the histogram and the cumulative
-frequency plot tables (outputs of @option{--histogram} and
-@option{--cumulative}).
+The number of bins (rows) to use in the histogram and the cumulative frequency 
plot tables (outputs of @option{--histogram} and @option{--cumulative}).
 
 @item --numasciibins
-The number of bins (characters) to use in the ASCII plots when printing the
-histogram and the cumulative frequency plot (outputs of
-@option{--asciihist} and @option{--asciicfp}).
+The number of bins (characters) to use in the ASCII plots when printing the 
histogram and the cumulative frequency plot (outputs of @option{--asciihist} 
and @option{--asciicfp}).
 
 @item --asciiheight
-The number of lines to use when printing the ASCII histogram and cumulative
-frequency plot on the command-line (outputs of @option{--asciihist} and
-@option{--asciicfp}).
+The number of lines to use when printing the ASCII histogram and cumulative 
frequency plot on the command-line (outputs of @option{--asciihist} and 
@option{--asciicfp}).
 
 @item -n
 @itemx --normalize
-Normalize the histogram or cumulative frequency plot tables (outputs of
-@option{--histogram} and @option{--cumulative}). For a histogram, the sum
-of all bins will become one and for a cumulative frequency plot the last
-bin value will be one.
+Normalize the histogram or cumulative frequency plot tables (outputs of 
@option{--histogram} and @option{--cumulative}).
+For a histogram, the sum of all bins will become one and for a cumulative 
frequency plot the last bin value will be one.
 
 @item --maxbinone
-Divide all the histogram values by the maximum bin value so it becomes one
-and the rest are similarly scaled. In some situations (for example if you
-want to plot the histogram and cumulative frequency plot in one plot) this
-can be very useful.
+Divide all the histogram values by the maximum bin value so it becomes one and 
the rest are similarly scaled.
+In some situations (for example if you want to plot the histogram and 
cumulative frequency plot in one plot) this can be very useful.
 
 @item --onebinstart=FLT
-Make sure that one bin starts with the value to this option. In practice,
-this will shift the bins used to find the histogram and cumulative
-frequency plot such that one bin's lower interval becomes this value.
+Make sure that one bin starts at the value given to this option.
+In practice, this will shift the bins used to find the histogram and 
cumulative frequency plot such that one bin's lower interval becomes this value.
 
-For example when a histogram range includes negative and positive values
-and zero has a special significance in your analysis, then zero might fall
-somewhere in one bin. As a result that bin will have counts of positive and
-negative. By setting @option{--onebinstart=0}, you can make sure that one
-bin will only count negative values in the vicinity of zero and the next
-bin will only count positive ones in that vicinity.
+For example, when a histogram range includes negative and positive values and 
zero has a special significance in your analysis, zero might fall somewhere 
inside one bin.
+As a result, that bin will count both positive and negative values.
+By setting @option{--onebinstart=0}, you can make sure that one bin will only 
count negative values in the vicinity of zero and the next bin will only count 
positive ones in that vicinity.
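+
+For example, on a hypothetical @file{image.fits} with values on both sides of 
zero, the scenario above corresponds to a call like this:
+
+@example
+$ aststatistics image.fits --histogram --onebinstart=0
+@end example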
 
 @cindex NaN
-Note that by default, the first row of the histogram and cumulative
-frequency plot show the central values of each bin. So in the example above
-you will not see the 0.000 in the first column, you will see two symmetric
-values.
+Note that by default, the first column of the histogram and cumulative 
frequency plot shows the central value of each bin.
+So in the example above you will not see the 0.000 in the first column; you 
will see two symmetric values.
 
-If the value is not within the usable input range, this option will be
-ignored. When it is, this option is the last operation before the bins are
-finalized, therefore it has a higher priority than options like
-@option{--manualbinrange}.
+If the value is not within the usable input range, this option will be ignored.
+When it is within the range, this option is the last operation before the bins 
are finalized, so it has a higher priority than options like 
@option{--manualbinrange}.
 
 @item --manualbinrange
-Use the values given to the @option{--greaterequal} and @option{--lessthan}
-to define the range of all bin-based calculations like the histogram. This
-option itself doesn't take any value, but just tells the program to use the
-values of those two options instead of the minimum and maximum values of a
-plot. If any of the two options are not given, then the minimum or maximum
-will be used respectively. Therefore, if none of them are called calling
-this option is redundant.
+Use the values given to the @option{--greaterequal} and @option{--lessthan} to 
define the range of all bin-based calculations like the histogram.
+This option itself doesn't take any value, but just tells the program to use 
the values of those two options instead of the minimum and maximum values of a 
plot.
+If either of the two options is not given, the minimum or maximum will be used 
respectively.
+Therefore, if neither of them is called, calling this option is redundant.
 
 The @option{--onebinstart} option has a higher priority than this option.
-In other words, @option{--onebinstart} takes effect after the range has
-been finalized and the initial bins have been defined, therefore it has the
-power to (possibly) shift the bins. If you want to manually set the range
-of the bins @emph{and} have one bin on a special value, it is thus better
-to avoid @option{--onebinstart}.
+In other words, @option{--onebinstart} takes effect after the range has been 
finalized and the initial bins have been defined, therefore it has the power to 
(possibly) shift the bins.
+If you want to manually set the range of the bins @emph{and} have one bin on a 
special value, it is thus better to avoid @option{--onebinstart}.
 
 @end table
 
 
-All the options described until now were from the first class of operations
-discussed above: those that treat the whole dataset as one. However. It
-often happens that the relative position of the dataset elements over the
-dataset is significant. For example you don't want one median value for the
-whole input image, you want to know how the median changes over the
-image. For such operations, the input has to be tessellated (see
-@ref{Tessellation}). Thus this class of options can't currently be called
-along with the options above in one run of Statistics.
+All the options described until now were from the first class of operations 
discussed above: those that treat the whole dataset as one.
+However, it often happens that the relative position of the dataset elements 
over the dataset is significant.
+For example you don't want one median value for the whole input image, you 
want to know how the median changes over the image.
+For such operations, the input has to be tessellated (see @ref{Tessellation}).
+Thus this class of options can't currently be called along with the options 
above in one run of Statistics.
 
 @table @option
 
 @item -t
 @itemx --ontile
-Do the respective single-valued calculation over one tile of the input
-dataset, not the whole dataset. This option must be called with at least one
-of the single valued options discussed above (for example @option{--mean}
-or @option{--quantile}). The output will be a file in the same format as
-the input. If the @option{--oneelempertile} option is called, then one
-element/pixel will be used for each tile (see @ref{Processing
-options}). Otherwise, the output will have the same size as the input, but
-each element will have the value corresponding to that tile's value. If
-multiple single valued operations are called, then for each operation there
-will be one extension in the output FITS file.
+Do the respective single-valued calculation over one tile of the input 
dataset, not the whole dataset.
+This option must be called with at least one of the single-valued options 
discussed above (for example @option{--mean} or @option{--quantile}).
+The output will be a file in the same format as the input.
+If the @option{--oneelempertile} option is called, then one element/pixel will 
be used for each tile (see @ref{Processing options}).
+Otherwise, the output will have the same size as the input, but each element 
will have the value corresponding to that tile's value.
+If multiple single-valued operations are called, then for each operation there 
will be one extension in the output FITS file.
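+
+For example, the minimal sketch below (again on a hypothetical 
@file{image.fits}) would produce an image where the pixels of each tile are 
replaced by the median of that tile:
+
+@example
+$ aststatistics image.fits --median --ontile
+@end example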
 
 @item -y
 @itemx --sky
-Estimate the Sky value on each tile as fully described in @ref{Quantifying
-signal in a tile}. As described in that section, several options are
-necessary to configure the Sky estimation which are listed below. The
-output file will have two extensions: the first is the Sky value and the
-second is the Sky standard deviation on each tile. Similar to
-@option{--ontile}, if the @option{--oneelempertile} option is called, then
-one element/pixel will be used for each tile (see @ref{Processing
-options}).
+Estimate the Sky value on each tile as fully described in @ref{Quantifying 
signal in a tile}.
+As described in that section, several options are necessary to configure the 
Sky estimation which are listed below.
+The output file will have two extensions: the first is the Sky value and the 
second is the Sky standard deviation on each tile.
+Similar to @option{--ontile}, if the @option{--oneelempertile} option is 
called, then one element/pixel will be used for each tile (see @ref{Processing 
options}).
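+
+For example, a sketch like the one below would estimate the Sky over tiles of 
50x50 pixels, keeping one output pixel per tile (@option{--tilesize} and 
@option{--oneelempertile} are common tessellation options, see @ref{Processing 
options}; the tile size here is only for illustration):
+
+@example
+$ aststatistics image.fits --sky --tilesize=50,50 --oneelempertile
+@end example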
 
 @end table
 
-The parameters for estimating the sky value can be set with the following
-options, except for the @option{--sclipparams} option (which is also used
-by the @option{--sigmaclip}), the rest are only used for the Sky value
-estimation.
+The parameters for estimating the Sky value can be set with the following 
options.
+Except for the @option{--sclipparams} option (which is also used by 
@option{--sigmaclip}), the rest are only used for the Sky value estimation.
 
 @table @option
 
@@ -12928,69 +12595,50 @@ Kernel HDU to help in estimating the significance of 
signal in a tile, see
 @ref{Quantifying signal in a tile}.
 
 @item --meanmedqdiff=FLT
-The maximum acceptable distance between the quantiles of the mean and
-median, see @ref{Quantifying signal in a tile}. The initial Sky and its
-standard deviation estimates are measured on tiles where the quantiles of
-their mean and median are less distant than the value given to this
-option. For example @option{--meanmedqdiff=0.01} means that only tiles
-where the mean's quantile is between 0.49 and 0.51 (recall that the
-median's quantile is 0.5) will be used.
+The maximum acceptable distance between the quantiles of the mean and median, 
see @ref{Quantifying signal in a tile}.
+The initial Sky and its standard deviation estimates are measured on tiles 
where the quantiles of their mean and median are less distant than the value 
given to this option.
+For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --sclipparams=FLT,FLT
-The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}. This
-option takes two values which are separated by a comma (@key{,}). Each
-value can either be written as a single number or as a fraction of two
-numbers (for example @code{3,1/10}). The first value to this option is the
-multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that
-section). The second value is the exit criteria. If it is less than 1, then
-it is interpreted as tolerance and if it is larger than one it is a
-specific number. Hence, in the latter case the value must be an integer.
+The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}.
+This option takes two values which are separated by a comma (@key{,}).
+Each value can either be written as a single number or as a fraction of two 
numbers (for example @code{3,1/10}).
+The first value to this option is the multiple of @mymath{\sigma} that will be 
clipped (@mymath{\alpha} in that section).
+The second value is the exit criterion.
+If it is less than 1, it is interpreted as a tolerance; if it is larger than 
one, it is a specific number (of times to clip).
+Hence, in the latter case the value must be an integer.
 
 @item --outliersclip=FLT,FLT
 @mymath{\sigma}-clipping parameters for the outlier rejection of the Sky
 value (similar to @option{--sclipparams}).
 
-Outlier rejection is useful when the dataset contains a large and diffuse
-(almost flat within each tile) signal. The flatness of the profile will
-cause it to successfully pass the mean-median quantile difference test, so
-we'll need to use the distribution of successful tiles for removing these
-false positive. For more, see the latter half of @ref{Quantifying signal in
-a tile}.
+Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
+The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles for removing these false positives.
+For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
-Multiple of sigma to define an outlier in the Sky value estimation. If this
-option is given a value of zero, no outlier rejection will take place. For
-more see @option{--outliersclip} and the latter half of @ref{Quantifying
-signal in a tile}.
+Multiple of sigma to define an outlier in the Sky value estimation.
+If this option is given a value of zero, no outlier rejection will take place.
+For more see @option{--outliersclip} and the latter half of @ref{Quantifying 
signal in a tile}.
 
 @item --smoothwidth=INT
-Width of a flat kernel to convolve the interpolated tile values. Tile
-interpolation is done using the median of the @option{--interpnumngb}
-neighbors of each tile (see @ref{Processing options}). If this option is
-given a value of zero or one, no smoothing will be done. Without smoothing,
-strong boundaries will probably be created between the values estimated for
-each tile. It is thus good to smooth the interpolated image so strong
-discontinuities do not show up in the final Sky values. The smoothing is
-done through convolution (see @ref{Convolution process}) with a flat
-kernel, so the value to this option must be an odd number.
+Width of a flat kernel to convolve the interpolated tile values.
+Tile interpolation is done using the median of the @option{--interpnumngb} 
neighbors of each tile (see @ref{Processing options}).
+If this option is given a value of zero or one, no smoothing will be done.
+Without smoothing, strong boundaries will probably be created between the 
values estimated for each tile.
+It is thus good to smooth the interpolated image so strong discontinuities do 
not show up in the final Sky values.
+The smoothing is done through convolution (see @ref{Convolution process}) with 
a flat kernel, so the value to this option must be an odd number.
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for
-example Sky and Sky standard deviation extensions of the output). This is
-only applicable when the tiled output has the same size as the input, in
-other words, when @option{--oneelempertile} isn't called.
+Don't set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} isn't called.
 
-By default, blank values in the input (commonly on the edges which are
-outside the survey/field area) will be set to blank in the tiled outputs
-also. But in other scenarios this default behavior is not desired: for
-example if you have masked something in the input, but want the tiled
-output under that also.
+By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
+But in other scenarios this default behavior is not desired: for example, if 
you have masked something in the input but still want the tiled output values 
under the masked area.
 
 @item --checksky
-Create a multi-extension FITS file showing the steps that were used to
-estimate the Sky value over the input, see @ref{Quantifying signal in a
-tile}. The file will have two extensions for each step (one for the Sky and
-one for the Sky standard deviation).
+Create a multi-extension FITS file showing the steps that were used to 
estimate the Sky value over the input, see @ref{Quantifying signal in a tile}.
+The file will have two extensions for each step (one for the Sky and one for 
the Sky standard deviation).
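+
+For example (again with a hypothetical @file{image.fits}):
+
+@example
+$ aststatistics image.fits --sky --checksky
+@end example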
 
 @end table
 
@@ -13000,131 +12648,75 @@ one for the Sky standard deviation).
 @cindex Labeling
 @cindex Detection
 @cindex Segmentation
-Once instrumental signatures are removed from the raw data (image) in the
-initial reduction process (see @ref{Data manipulation}). You are naturally
-eager to start answering the scientific questions that motivated the data
-collection in the first place. However, the raw dataset/image is just an
-array of values/pixels, that is all! These raw values cannot directly be
-used to answer your scientific questions: for example ``how many galaxies
-are there in the image?''.
-
-The first high-level step in your analysis of your targets will thus be to
-classify, or label, the dataset elements (pixels) into two classes: 1)
-noise, where random effects are the major contributor to the value, and 2)
-signal, where non-random factors (for example light from a distant galaxy)
-are present. This classification of the elements in a dataset is formally
-known as @emph{detection}.
-
-In an observational/experimental dataset, signal is always buried in noise:
-only mock/simulated datasets are free of noise. Therefore detection, or the
-process of separating signal from noise, determines the number of objects
-you study and the accuracy of any higher-level measurement you do on
-them. Detection is thus the most important step of any analysis and is not
-trivial. In particular, the most scientifically interesting astronomical
-targets are faint, can have a large variety of morphologies, along with a
-large distribution in brightness and size. Therefore when noise is
-significant, proper detection of your targets is a uniquely decisive step
-in your final scientific analysis/result.
+Once instrumental signatures are removed from the raw data (image) in the 
initial reduction process (see @ref{Data manipulation}), you are naturally 
eager to start answering the scientific questions that motivated the data 
collection in the first place.
+However, the raw dataset/image is just an array of values/pixels, that is all!
+These raw values cannot directly be used to answer your scientific questions: 
for example ``how many galaxies are there in the image?''.
+
+The first high-level step in your analysis of your targets will thus be to 
classify, or label, the dataset elements (pixels) into two classes: 1) noise, 
where random effects are the major contributor to the value, and 2) signal, 
where non-random factors (for example light from a distant galaxy) are present.
+This classification of the elements in a dataset is formally known as 
@emph{detection}.
+
+In an observational/experimental dataset, signal is always buried in noise: 
only mock/simulated datasets are free of noise.
+Therefore detection, or the process of separating signal from noise, 
determines the number of objects you study and the accuracy of any higher-level 
measurement you do on them.
+Detection is thus the most important step of any analysis and is not trivial.
+In particular, the most scientifically interesting astronomical targets are 
faint and can have a large variety of morphologies, along with a large 
distribution in brightness and size.
+Therefore when noise is significant, proper detection of your targets is a 
uniquely decisive step in your final scientific analysis/result.
 
 @cindex Erosion
-NoiseChisel is Gnuastro's program for detection of targets that don't have
-a sharp border (almost all astronomical objects). When the targets have a
-sharp edges/border (for example cells in biological imaging), a simple
-threshold is enough to separate them from noise and each other (if they are
-not touching). To detect such sharp-edged targets, you can use Gnuastro's
-Arithmetic program in a command like below (assuming the threshold is
-@code{100}, see @ref{Arithmetic}):
+NoiseChisel is Gnuastro's program for detection of targets that don't have a 
sharp border (almost all astronomical objects).
+When the targets have sharp edges/borders (for example cells in biological 
imaging), a simple threshold is enough to separate them from noise and each 
other (if they are not touching).
+To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program 
in a command like below (assuming the threshold is @code{100}, see 
@ref{Arithmetic}):
 
 @example
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-Since almost no astronomical target has such sharp edges, we need a more
-advanced detection methodology. NoiseChisel uses a new noise-based paradigm
-for detection of very extended and diffuse targets that are drowned deeply
-in the ocean of noise. It was initially introduced in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. The
-name of NoiseChisel is derived from the first thing it does after
-thresholding the dataset: to erode it. In mathematical morphology, erosion
-on pixels can be pictured as carving-off boundary pixels. Hence, what
-NoiseChisel does is similar to what a wood chisel or stone chisel do. It is
-just not a hardware, but a software. In fact, looking at it as a chisel and
-your dataset as a solid cube of rock will greatly help in effectively
-understanding and optimally using it: with NoiseChisel you literally carve
-your targets out of the noise. Try running it with the
-@option{--checkdetection} option to see each step of the carving process on
-your input dataset.
+Since almost no astronomical target has such sharp edges, we need a more 
advanced detection methodology.
+NoiseChisel uses a new noise-based paradigm for detection of very extended and 
diffuse targets that are drowned deeply in the ocean of noise.
+It was initially introduced in @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}.
+The name of NoiseChisel is derived from the first thing it does after 
thresholding the dataset: to erode it.
+In mathematical morphology, erosion on pixels can be pictured as carving-off 
boundary pixels.
+Hence, what NoiseChisel does is similar to what a wood chisel or stone chisel 
does.
+It is just not hardware, but software.
+In fact, looking at it as a chisel and your dataset as a solid cube of rock 
will greatly help in effectively understanding and optimally using it: with 
NoiseChisel you literally carve your targets out of the noise.
+Try running it with the @option{--checkdetection} option to see each step of 
the carving process on your input dataset.
 
 @cindex Segmentation
-NoiseChisel's primary output is a binary detection map with the same size
-as the input but only with two values: 0 and 1. Pixels that don't harbor
-any detected signal (noise) are given a label (or value) of zero and those
-with a value of 1 have been identified as hosting signal.
-
-Segmentation is the process of classifying the signal into higher-level
-constructs. For example if you have two separate galaxies in one image, by
-default NoiseChisel will give a value of 1 to the pixels of both, but after
-segmentation, the pixels in each will get separate labels. NoiseChisel is
-only focused on detection (separating signal from noise), to @emph{segment}
-the signal (into separate galaxies for example), Gnuastro has a separate
-specialized program @ref{Segment}. NoiseChisel's output can be
-directly/readily fed into Segment.
-
-For more on NoiseChisel's output format and its benefits (especially in
-conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see
-@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}. Just note that
-when that paper was published, Segment was not yet spun-off into a separate
-program, and NoiseChisel done both detection and segmentation.
-
-NoiseChisel's output is designed to be generic enough to be easily used in
-any higher-level analysis. If your targets are not touching after running
-NoiseChisel and you aren't interested in their sub-structure, you don't
-need the Segment program at all. You can ask NoiseChisel to find the
-connected pixels in the output with the @option{--label} option. In this
-case, the output won't be a binary image any more, the signal will have
-counters/labels starting from 1 for each connected group of pixels. You can
-then directly feed NoiseChisel's output into MakeCatalog for measurements
-over the detections and the production of a catalog (see
-@ref{MakeCatalog}).
-
-Thanks to the published papers mentioned above, there is no need to provide
-a more complete introduction to NoiseChisel in this book. However,
-published papers cannot be updated any more, but the software has
-evolved/changed. The changes since publication are documented in
-@ref{NoiseChisel changes after publication}. Afterwards, in @ref{Invoking
-astnoisechisel}, the details of running NoiseChisel and its options are
-discussed.
-
-As discussed above, detection is one of the most important steps for your
-scientific result. It is therefore very important to obtain a good
-understanding of NoiseChisel (and afterwards @ref{Segment} and
-@ref{MakeCatalog}). We thus strongly recommend that after reading the
-papers above and the respective sections of Gnuastro's book, you play a
-little with the settings (in the order presented in the paper and
-@ref{Invoking astnoisechisel}) on a dataset you are familiar with and
-inspect all the check images (options starting with @option{--check}) to
-see the effect of each parameter.
-
-We strongly recommend going over the two tutorials of @ref{General program
-usage tutorial} and @ref{Detecting large extended targets}. They are
-designed to show how to most effectively use NoiseChisel for the detection
-of small faint objects and large extended objects. In the meantime, they
-will show you the modular principle behind Gnuastro's programs and how they
-are built to complement, and build upon, each other. @ref{General program
-usage tutorial} culminates in using NoiseChisel to detect galaxies and use
-its outputs to find the galaxy colors. Defining colors is a very common
-process in most science-cases. Therefore it is also recommended to
-(patiently) complete that tutorial for optimal usage of NoiseChisel in
-conjunction with all the other Gnuastro programs. @ref{Detecting large
-extended targets} shows you can optimize NoiseChisel's settings for very
-extended objects to successfully carve out to signal-to-noise ratio levels
-of below 1/10.
-
-In @ref{NoiseChisel changes after publication}, we'll review the changes in
-NoiseChisel since the publication of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. We will then review NoiseChisel's input,
-detection, and output options in @ref{NoiseChisel input}, @ref{Detection
-options}, and @ref{NoiseChisel output}.
+NoiseChisel's primary output is a binary detection map with the same size as 
the input but only with two values: 0 and 1.
+Pixels that don't harbor any detected signal (noise) are given a label (or 
value) of zero and those with a value of 1 have been identified as hosting 
signal.
+
+Segmentation is the process of classifying the signal into higher-level 
constructs.
+For example if you have two separate galaxies in one image, by default 
NoiseChisel will give a value of 1 to the pixels of both, but after 
segmentation, the pixels in each will get separate labels.
+NoiseChisel is only focused on detection (separating signal from noise); to 
@emph{segment} the signal (into separate galaxies for example), Gnuastro has a 
separate specialized program: @ref{Segment}.
+NoiseChisel's output can be directly/readily fed into Segment.
+
+For more on NoiseChisel's output format and its benefits (especially in 
conjunction with @ref{Segment} and later @ref{MakeCatalog}), please see 
@url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
+Just note that when that paper was published, Segment had not yet been 
spun-off into a separate program, and NoiseChisel did both detection and 
segmentation.
+
+NoiseChisel's output is designed to be generic enough to be easily used in any 
higher-level analysis.
+If your targets are not touching after running NoiseChisel and you aren't 
interested in their sub-structure, you don't need the Segment program at all.
+You can ask NoiseChisel to find the connected pixels in the output with the 
@option{--label} option.
+In this case, the output won't be a binary image any more; the signal will 
have counters/labels starting from 1 for each connected group of pixels.
+You can then directly feed NoiseChisel's output into MakeCatalog for 
measurements over the detections and the production of a catalog (see 
@ref{MakeCatalog}).
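+As a minimal sketch of this workflow (the file names and the MakeCatalog 
column options here are only illustrative, see @ref{MakeCatalog} for the 
actual measurement columns):
+
+@example
+$ astnoisechisel input.fits --label --output=labeled.fits
+$ astmkcatalog labeled.fits --ids --area --sn
+@end example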
+
+Thanks to the published papers mentioned above, there is no need to provide a 
more complete introduction to NoiseChisel in this book.
+However, while published papers cannot be updated any more, the software has 
evolved/changed.
+The changes since publication are documented in @ref{NoiseChisel changes after 
publication}.
+Afterwards, in @ref{Invoking astnoisechisel}, the details of running 
NoiseChisel and its options are discussed.
+
+As discussed above, detection is one of the most important steps for your 
scientific result.
+It is therefore very important to obtain a good understanding of NoiseChisel 
(and afterwards @ref{Segment} and @ref{MakeCatalog}).
+We thus strongly recommend that after reading the papers above and the 
respective sections of Gnuastro's book, you play a little with the settings (in 
the order presented in the paper and @ref{Invoking astnoisechisel}) on a 
dataset you are familiar with and inspect all the check images (options 
starting with @option{--check}) to see the effect of each parameter.
+
+We strongly recommend going over the two tutorials of @ref{General program 
usage tutorial} and @ref{Detecting large extended targets}.
+They are designed to show how to most effectively use NoiseChisel for the 
detection of small faint objects and large extended objects.
+Along the way, they will show you the modular principle behind Gnuastro's 
programs and how they are built to complement, and build upon, each other.
+@ref{General program usage tutorial} culminates in using NoiseChisel to detect 
galaxies and use its outputs to find the galaxy colors.
+Defining colors is a very common process in most science cases.
+Therefore it is also recommended to (patiently) complete that tutorial for 
optimal usage of NoiseChisel in conjunction with all the other Gnuastro 
programs.
+@ref{Detecting large extended targets} shows how you can optimize 
NoiseChisel's settings for very extended objects, to successfully carve out 
signal down to signal-to-noise ratio levels below 1/10.
+
+In @ref{NoiseChisel changes after publication}, we'll review the changes in 
NoiseChisel since the publication of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
+We will then review NoiseChisel's input, detection, and output options in 
@ref{NoiseChisel input}, @ref{Detection options}, and @ref{NoiseChisel output}.
 
 @menu
 * NoiseChisel changes after publication::  NoiseChisel updates after paper's 
publication.
@@ -13134,51 +12726,34 @@ options}, and @ref{NoiseChisel output}.
 @node NoiseChisel changes after publication, Invoking astnoisechisel, 
NoiseChisel, NoiseChisel
 @subsection NoiseChisel changes after publication
 
-NoiseChisel was initially introduced in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. It is
-thus strongly recommended to read this paper for a good understanding of
-what it does and how each parameter influences the output. To help in
-understanding how it works, that paper has a large number of figures
-showing every step on multiple mock and real examples.
-
-However, the paper cannot be updated anymore, but NoiseChisel has evolved
-(and will continue to do so): better algorithms or steps have been found,
-thus options will be added or removed. This book is thus the final and
-definitive guide to NoiseChisel. The aim of this section is to make the
-transition from the paper to the installed version on your system, as
-smooth as possible with the list below. For a more detailed list of changes
-in previous Gnuastro releases/versions, please see the @file{NEWS}
-file@footnote{The @file{NEWS} file is present in the released Gnuastro
-tarball, see @ref{Release tarball}.}.
-
-The most important change since the publication of that paper is that from
-Gnuastro 0.6, NoiseChisel is only in charge on detection. Segmentation of
-the detected signal was spun-off into a separate program:
-@ref{Segment}. This spin-off allows much greater creativity and is in the
-spirit of Gnuastro's modular design (see @ref{Program design philosophy}).
-Below you can see the major changes since that paper was published. First,
-the removed options/features are discussed, then we review the new features
-that have been added.
+NoiseChisel was initially introduced in @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
+It is thus strongly recommended to read this paper for a good understanding of 
what it does and how each parameter influences the output.
+To help in understanding how it works, that paper has a large number of 
figures showing every step on multiple mock and real examples.
+
+However, while the paper cannot be updated anymore, NoiseChisel has evolved 
(and will continue to do so): better algorithms or steps have been found, so 
options will be added or removed.
+This book is thus the final and definitive guide to NoiseChisel.
+The aim of this section is to make the transition from the paper to the 
installed version on your system as smooth as possible, with the list below.
+For a more detailed list of changes in previous Gnuastro releases/versions, 
please see the @file{NEWS} file@footnote{The @file{NEWS} file is present in the 
released Gnuastro tarball, see @ref{Release tarball}.}.
+
+The most important change since the publication of that paper is that from 
Gnuastro 0.6, NoiseChisel is only in charge of detection.
+Segmentation of the detected signal was spun-off into a separate program: 
@ref{Segment}.
+This spin-off allows much greater creativity and is in the spirit of 
Gnuastro's modular design (see @ref{Program design philosophy}).
+Below you can see the major changes since that paper was published.
+First, the removed options/features are discussed, then we review the new 
features that have been added.
 
 @noindent
 Removed features/options:
 @itemize
 
 @item
-@option{--skysubtracted}: This option was used to account for the extra
-noise that is added if the Sky value has already been subtracted. However,
-NoiseChisel estimates the Sky standard deviation based on the input data,
-not exposure maps or other theoretical considerations. Therefore the
-standard deviation of the undetected pixels also contains the errors due to
-any previous sky subtraction. This option is therefore no longer present in
-NoiseChisel.
-
-@option{--dilate}: In the paper, true detections were dilated for a final
-dig into the noise. However, the simple 8-connected dilation produced boxy
-results which were not realistic and could miss diffuse flux. The final dig
-into the noise is now done by ``grow''ing the true detections, similar to
-how true clumps were grown, see the description of @option{--detgrowquant}
-below and in @ref{Detection options} for more on the new alternative.
+@option{--skysubtracted}: This option was used to account for the extra noise 
that is added if the Sky value has already been subtracted.
+However, NoiseChisel estimates the Sky standard deviation based on the input 
data, not exposure maps or other theoretical considerations.
+Therefore the standard deviation of the undetected pixels also contains the 
errors due to any previous sky subtraction.
+This option is therefore no longer present in NoiseChisel.
+
+@item
+@option{--dilate}: In the paper, true detections were dilated for a final dig 
into the noise.
+However, the simple 8-connected dilation produced boxy results which were not 
realistic and could miss diffuse flux.
+The final dig into the noise is now done by ``grow''ing the true detections, 
similar to how true clumps were grown; see the description of 
@option{--detgrowquant} below and in @ref{Detection options} for more on the 
new alternative.
 
 @item
 Segmentation has been completely moved to a new program: @ref{Segment}.
@@ -13189,103 +12764,68 @@ Added features/options:
 @itemize
 
 @item
-The quantile difference to identify tiles with no significant signal is
-measured between the @emph{mean} and median. In the published paper, it was
-between the @emph{mode} and median. The quantile of the mean is more
-sensitive to skewness (the presence of signal), so it is preferable to the
-quantile of the mode. For more see @ref{Quantifying signal in a tile}.
+The quantile difference to identify tiles with no significant signal is 
measured between the @emph{mean} and median.
+In the published paper, it was between the @emph{mode} and median.
+The quantile of the mean is more sensitive to skewness (the presence of 
signal), so it is preferable to the quantile of the mode.
+For more see @ref{Quantifying signal in a tile}.
 
 @item
-@option{--widekernel}: NoiseChisel uses the difference between the mean and
-median to identify if a tile should be used for estimating the quantile
-thresholds (see @ref{Quantifying signal in a tile}). Until now, NoiseChisel
-would convolve an image once and estimate the proper tiles for quantile
-estimations on the convolved image. The same convolved image would later be
-used for quantile estimation. A larger kernel does increase the skewness
-(and thus difference between the mean and median, therefore helps in
-detecting the presence signal), however, it disfigures the
-shapes/morphology of the objects.
-
-This new @option{--widekernel} option (and a corresponding @option{--wkhdu}
-option to specify its HDU) is added to solve such cases. When its
-given, the input will be convolved with both the sharp (given through the
-@option{--kernel} option) and wide kernels. The mean and median are
-calculated on the dataset that is convolved with the wider kernel, then the
-quantiles are estimated on the image convolved with the sharper kernel.
+@option{--widekernel}: NoiseChisel uses the difference between the mean and 
median to identify if a tile should be used for estimating the quantile 
thresholds (see @ref{Quantifying signal in a tile}).
+Until now, NoiseChisel would convolve an image once and estimate the proper 
tiles for quantile estimations on the convolved image.
+The same convolved image would later be used for quantile estimation.
+A larger kernel does increase the skewness (and thus the difference between 
the mean and median, therefore helping in detecting the presence of signal); 
however, it disfigures the shapes/morphology of the objects.
+
+This new @option{--widekernel} option (and a corresponding @option{--wkhdu} 
option to specify its HDU) is added to solve such cases.
+When it is given, the input will be convolved with both the sharp (given 
through the @option{--kernel} option) and wide kernels.
+The mean and median are calculated on the dataset that is convolved with the 
wider kernel, then the quantiles are estimated on the image convolved with the 
sharper kernel.
 
 @item
-Outlier rejection in quantile thresholds: When there are large galaxies or
-bright stars in the image, their gradient may be on a smaller scale than
-the selected tile size. In such cases, those tiles will be identified as
-tiles with no signal and thus preserved. An outlier identification
-algorithm has been added to NoiseChisel and can be configured with the
-following options: @option{--outliersigma} and @option{--outliersclip}. For
-a more complete description, see the latter half of @ref{Quantifying signal
-in a tile}.
+Outlier rejection in quantile thresholds: When there are large galaxies or 
bright stars in the image, their gradient may be on a smaller scale than the 
selected tile size.
+In such cases, those tiles will be identified as tiles with no signal and thus 
preserved.
+An outlier identification algorithm has been added to NoiseChisel and can be 
configured with the following options: @option{--outliersigma} and 
@option{--outliersclip}.
+For a more complete description, see the latter half of @ref{Quantifying 
signal in a tile}.
 
 @item
-@option{--blankasforeground}: allows blank pixels to be treated as
-foreground in NoiseChisel's binary operations: the initial erosion
-(@option{--erode}) and opening (@option{--open}) as well as the filling
-holes and opening step for defining pseudo-detections
-(@option{--dthresh}). In the published paper, blank pixels were treated as
-foreground by default. To avoid too many false positive near blank/masked
-regions, blank pixels are now considered to be in the background. This
-option will create the old behavior.
+@option{--blankasforeground}: allows blank pixels to be treated as foreground 
in NoiseChisel's binary operations: the initial erosion (@option{--erode}) and 
opening (@option{--open}) as well as the filling holes and opening step for 
defining pseudo-detections (@option{--dthresh}).
+In the published paper, blank pixels were treated as foreground by default.
+To avoid too many false positives near blank/masked regions, blank pixels are 
now considered to be in the background.
+This option restores the old behavior.
 
 @item
-@option{--skyfracnoblank}: To reduce the bias caused by undetected wings of
-galaxies and stars in the Sky measurements, NoiseChisel only uses tiles
-that have a sufficiently large fraction of undetected pixels. Until now the
-reference for this fraction was the whole tile size. With this option, it
-is now possible to ask for ignoring blank pixels when calculating the
-fraction. This is useful when blank/masked pixels are distributed across
-the image. For more, see the description of this option in @ref{Detection
-options}.
+@option{--skyfracnoblank}: To reduce the bias caused by undetected wings of 
galaxies and stars in the Sky measurements, NoiseChisel only uses tiles that 
have a sufficiently large fraction of undetected pixels.
+Until now the reference for this fraction was the whole tile size.
+With this option, it is now possible to ignore blank pixels when calculating 
the fraction.
+This is useful when blank/masked pixels are distributed across the image.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--dopening}: Number of openings after applying
-@option{--dthresh}. For more, see the description of this option in
-@ref{Detection options}.
+@option{--dopening}: Number of openings after applying @option{--dthresh}.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--dopeningngb}: The connectivity/neighbors used in the opening of
-@option{--dopening}.
+@option{--dopeningngb}: The connectivity/neighbors used in the opening of 
@option{--dopening}.
 
 @item
-@option{--holengb}: The connectivity (defined by the number of neighbors)
-to fill holes after applying @option{--dthresh} (above) to find
-pseudo-detections. For more, see the description of this option in
-@ref{Detection options}.
+@option{--holengb}: The connectivity (defined by the number of neighbors) to 
fill holes after applying @option{--dthresh} (above) to find pseudo-detections.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--pseudoconcomp}: The connectivity (defined by the number of
-neighbors) to find individual pseudo-detections. For more, see the
-description of this option in @ref{Detection options}.
+@option{--pseudoconcomp}: The connectivity (defined by the number of 
neighbors) to find individual pseudo-detections.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--snthresh}: Manually set the S/N of true pseudo-detections and
-thus avoid the need to automatically identify this value. For more, see the
-description of this option in @ref{Detection options}.
+@option{--snthresh}: Manually set the S/N of true pseudo-detections and thus 
avoid the need to automatically identify this value.
+For more, see the description of this option in @ref{Detection options}.
 
 @item
-@option{--detgrowquant}: is used to grow the final true detections until a
-given quantile in the same way that clumps are grown during segmentation
-(compare columns 2 and 3 in Figure 10 of the paper). It replaces the old
-@option{--dilate} option in the paper and older versions of
-Gnuastro. Dilation is a blind growth method which causes objects to be boxy
-or diamond shaped when too many layers are added. However, with the growth
-method that is defined now, we can follow the signal into the noise with
-any shape. The appropriate quantile depends on your dataset's correlated
-noise properties and how cleanly it was Sky subtracted. The new
-@option{--detgrowmaxholesize} can also be used to specify the maximum hole
-size to fill as part of this growth, see the description in @ref{Detection
-options} for more details.
-
-This new growth process can be much more successful in detecting diffuse
-flux around true detections compared to dilation and give more realistic
-results, but it can also increase the NoiseChisel run time (depending on
-the given value and input size).
+@option{--detgrowquant}: is used to grow the final true detections until a 
given quantile in the same way that clumps are grown during segmentation 
(compare columns 2 and 3 in Figure 10 of the paper).
+It replaces the old @option{--dilate} option in the paper and older versions 
of Gnuastro.
+Dilation is a blind growth method which causes objects to be boxy or diamond 
shaped when too many layers are added.
+However, with the growth method that is defined now, we can follow the signal 
into the noise with any shape.
+The appropriate quantile depends on your dataset's correlated noise properties 
and how cleanly it was Sky subtracted.
+The new @option{--detgrowmaxholesize} can also be used to specify the maximum 
hole size to fill as part of this growth; see the description in 
@ref{Detection options} for more details.
+
+This new growth process can be much more successful in detecting diffuse flux 
around true detections compared to dilation and give more realistic results, 
but it can also increase the NoiseChisel run time (depending on the given value 
and input size).
 
 @item
 @option{--cleangrowndet}: Further clean/remove false detections after
@@ -13297,12 +12837,9 @@ growth, see the descriptions under this option in 
@ref{Detection options}.
 @node Invoking astnoisechisel,  , NoiseChisel changes after publication, 
NoiseChisel
 @subsection Invoking NoiseChisel
 
-NoiseChisel will detect signal in noise producing a multi-extension dataset
-containing a binary detection map which is the same size as the input. Its
-output can be readily used for input into @ref{Segment}, for higher-level
-segmentation, or @ref{MakeCatalog} to do measurements and generate a
-catalog. The executable name is @file{astnoisechisel} with the following
-general template
+NoiseChisel will detect signal in noise, producing a multi-extension dataset 
containing a binary detection map which is the same size as the input.
+Its output can be readily used for input into @ref{Segment}, for higher-level 
segmentation, or @ref{MakeCatalog} to do measurements and generate a catalog.
+The executable name is @file{astnoisechisel} with the following general 
template
 
 @example
 $ astnoisechisel [OPTION ...] InputImage.fits
@@ -13326,85 +12863,56 @@ $ astnoisechisel --numchannels=4,1 --tilesize=100,100 
input.fits
 
 @cindex Gaussian
 @noindent
-If NoiseChisel is to do processing (for example you don't want to get help,
-or see the values to each input parameter), an input image should be
-provided with the recognized extensions (see @ref{Arguments}).  NoiseChisel
-shares a large set of common operations with other Gnuastro programs,
-mainly regarding input/output, general processing steps, and general
-operating modes. To help in a unified experience between all of Gnuastro's
-programs, these operations have the same command-line options, see
-@ref{Common options} for a full list/description (they are not repeated
-here).
-
-As in all Gnuastro programs, options can also be given to NoiseChisel in
-configuration files. For a thorough description on Gnuastro's configuration
-file parsing, please see @ref{Configuration files}. All of NoiseChisel's
-options with a short description are also always available on the
-command-line with the @option{--help} option, see @ref{Getting help}. To
-inspect the option values without actually running NoiseChisel, append your
-command with @option{--printparams} (or @option{-P}).
-
-NoiseChisel's input image may contain blank elements (see @ref{Blank
-pixels}). Blank elements will be ignored in all steps of NoiseChisel. Hence
-if your dataset has bad pixels which should be masked with a mask image,
-please use Gnuastro's @ref{Arithmetic} program (in particular its
-@command{where} operator) to convert those pixels to blank pixels before
-running NoiseChisel. Gnuastro's Arithmetic program has bitwise operators
-helping you select specific kinds of bad-pixels when necessary.
-
-A convolution kernel can also be optionally given. If a value (file name)
-is given to @option{--kernel} on the command-line or in a configuration
-file (see @ref{Configuration files}), then that file will be used to
-convolve the image prior to thresholding. Otherwise a default kernel will
-be used. The default kernel is a 2D Gaussian with a FWHM of 2 pixels
-truncated at 5 times the FWHM. This choice of the default kernel is
-discussed in Section 3.1.1 of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. See @ref{Convolution kernel} for kernel
-related options. Passing @code{none} to @option{--kernel} will disable
-convolution. On the other hand, through the @option{--convolved} option,
-you may provide an already convolved image, see descriptions below for
-more.
-
-NoiseChisel defines two tessellations over the input (see
-@ref{Tessellation}). This enables it to deal with possible gradients in the
-input dataset and also significantly improve speed by processing each tile
-on different threads simultaneously. Tessellation related options are
-discussed in @ref{Processing options}. In particular, NoiseChisel uses two
-tessellations (with everything between them identical except the tile
-sizes): a fine-grained one with smaller tiles (used in thresholding and Sky
-value estimations) and another with larger tiles which is used for
-pseudo-detections over non-detected regions of the image. The common
-Tessellation options described in @ref{Processing options} define all
-parameters of both tessellations. The large tile size for the latter
-tessellation is set through the @option{--largetilesize} option. To inspect
-the tessellations on your input dataset, run NoiseChisel with
-@option{--checktiles}.
+If NoiseChisel is to do processing (for example, you don't just want to get 
help, or see the values of each input parameter), an input image should be 
provided with the recognized extensions (see @ref{Arguments}).
+NoiseChisel shares a large set of common operations with other Gnuastro 
programs, mainly regarding input/output, general processing steps, and general 
operating modes.
+To help in a unified experience between all of Gnuastro's programs, these 
operations have the same command-line options, see @ref{Common options} for a 
full list/description (they are not repeated here).
+
+As in all Gnuastro programs, options can also be given to NoiseChisel in 
configuration files.
+For a thorough description on Gnuastro's configuration file parsing, please 
see @ref{Configuration files}.
+All of NoiseChisel's options with a short description are also always 
available on the command-line with the @option{--help} option, see @ref{Getting 
help}.
+To inspect the option values without actually running NoiseChisel, append your 
command with @option{--printparams} (or @option{-P}).
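+For example, the command below (on a hypothetical @file{input.fits}) will 
only print the option values and exit without any processing:
+
+@example
+$ astnoisechisel input.fits -P
+@end example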
+
+NoiseChisel's input image may contain blank elements (see @ref{Blank pixels}).
+Blank elements will be ignored in all steps of NoiseChisel.
+Hence if your dataset has bad pixels which should be masked with a mask image, 
please use Gnuastro's @ref{Arithmetic} program (in particular its 
@command{where} operator) to convert those pixels to blank pixels before 
running NoiseChisel.
+Gnuastro's Arithmetic program has bitwise operators helping you select 
specific kinds of bad-pixels when necessary.
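+As a rough sketch of such a masking step (the saturation threshold of 100 
and the HDU given to @option{-g} are only illustrative), pixels above the 
threshold are first converted to blank (NaN) values, then the masked image 
is fed to NoiseChisel:
+
+@example
+$ astarithmetic input.fits input.fits 100 gt nan where -g1 \
+                --output=masked.fits
+$ astnoisechisel masked.fits
+@end example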
+
+A convolution kernel can also be optionally given.
+If a value (file name) is given to @option{--kernel} on the command-line or in 
a configuration file (see @ref{Configuration files}), then that file will be 
used to convolve the image prior to thresholding.
+Otherwise a default kernel will be used.
+The default kernel is a 2D Gaussian with a FWHM of 2 pixels truncated at 5 
times the FWHM.
+This choice of the default kernel is discussed in Section 3.1.1 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+See @ref{Convolution kernel} for kernel related options.
+Passing @code{none} to @option{--kernel} will disable convolution.
+On the other hand, through the @option{--convolved} option, you may provide an 
already convolved image, see descriptions below for more.
+
+NoiseChisel defines two tessellations over the input (see @ref{Tessellation}).
+This enables it to deal with possible gradients in the input dataset and also 
significantly improve speed by processing each tile on different threads 
simultaneously.
+Tessellation related options are discussed in @ref{Processing options}.
+In particular, NoiseChisel uses two tessellations (with everything between 
them identical except the tile sizes): a fine-grained one with smaller tiles 
(used in thresholding and Sky value estimations) and another with larger tiles 
which is used for pseudo-detections over non-detected regions of the image.
+The common Tessellation options described in @ref{Processing options} define 
all parameters of both tessellations.
+The large tile size for the latter tessellation is set through the 
@option{--largetilesize} option.
+To inspect the tessellations on your input dataset, run NoiseChisel with 
@option{--checktiles}.
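+For example, with the command below (the tile sizes are only 
illustrative), you can see both tessellations over a hypothetical 
@file{input.fits}:
+
+@example
+$ astnoisechisel input.fits --tilesize=50,50 \
+                 --largetilesize=200,200 --checktiles
+@end example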
 
 @cartouche
 @noindent
-@strong{Usage TIP:} Frequently use the options starting with
-@option{--check}. Since the noise properties differ between different
-datasets, you can often play with the parameters/options for a better
-result than the default parameters. You can start with
-@option{--checkdetection} for the main steps. For the full list of
-NoiseChisel's checking options please run:
+@strong{Usage TIP:} Frequently use the options starting with @option{--check}.
+Since the noise properties differ between different datasets, you can often 
play with the parameters/options for a better result than the default 
parameters.
+You can start with @option{--checkdetection} for the main steps.
+For the full list of NoiseChisel's checking options please run:
 @example
 $ astnoisechisel --help | grep check
 @end example
 @end cartouche
 
-Below, we'll discuss NoiseChisel's options, classified into two general
-classes, to help in easy navigation. @ref{NoiseChisel input} mainly
-discusses the basic options relating to inputs and prior to the detection
-process detection. Afterwards, @ref{Detection options} fully describes
-every configuration parameter (option) related to detection and how they
-affect the final result. The order of options in this section follow the
-logical order within NoiseChisel. On first reading (while you are still new
-to NoiseChisel), it is therefore strongly recommended to read the options
-in the given order below. The output of @option{--printparams} (or
-@option{-P}) also has this order. However, the output of @option{--help} is
-sorted alphabetically. Finally, in @ref{NoiseChisel output} the format of
-NoiseChisel's output is discussed.
+Below, we'll discuss NoiseChisel's options, classified into two general 
classes, to help in easy navigation.
+@ref{NoiseChisel input} mainly discusses the basic options relating to the 
inputs and those that apply prior to the detection process.
+Afterwards, @ref{Detection options} fully describes every configuration 
parameter (option) related to detection and how they affect the final result.
+The order of options in this section follows the logical order within 
NoiseChisel.
+On first reading (while you are still new to NoiseChisel), it is therefore 
strongly recommended to read the options in the given order below.
+The output of @option{--printparams} (or @option{-P}) also has this order.
+However, the output of @option{--help} is sorted alphabetically.
+Finally, in @ref{NoiseChisel output} the format of NoiseChisel's output is 
discussed.
 
 @menu
 * NoiseChisel input::           NoiseChisel's input options.
@@ -13415,87 +12923,60 @@ NoiseChisel's output is discussed.
 @node NoiseChisel input, Detection options, Invoking astnoisechisel, Invoking 
astnoisechisel
 @subsubsection NoiseChisel input
 
-The options here can be used to configure the inputs and output of
-NoiseChisel, along with some general processing options. Recall that you
-can always see the full list of Gnuastro's options with the @option{--help}
-(see @ref{Getting help}), or @option{--printparams} (or @option{-P}) to see
-their values (see @ref{Operating mode options}).
+The options here can be used to configure the inputs and output of 
NoiseChisel, along with some general processing options.
+Recall that you can always see the full list of Gnuastro's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
 
 @table @option
 
 @item -k STR
 @itemx --kernel=STR
-File name of kernel to smooth the image before applying the threshold, see
-@ref{Convolution kernel}. If no convolution is needed, give this option a
-value of @option{none}.
+File name of kernel to smooth the image before applying the threshold, see 
@ref{Convolution kernel}.
+If no convolution is needed, give this option a value of @option{none}.
 
-The first step of NoiseChisel is to convolve/smooth the image and use the
-convolved image in multiple steps including the finding and applying of the
-quantile threshold (see @option{--qthresh}).
+The first step of NoiseChisel is to convolve/smooth the image and use the 
convolved image in multiple steps including the finding and applying of the 
quantile threshold (see @option{--qthresh}).
 
-The @option{--kernel} option is not mandatory. If not called, a 2D Gaussian
-profile with a FWHM of 2 pixels truncated at 5 times the FWHM is used. This
-choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi and
-Ichikawa [2015]. You can use MakeProfiles to build a kernel with any of its
-recognized profile types and parameters. For more details, please see
-@ref{MakeProfiles output dataset}. For example, the command below will make
-a Moffat kernel (with @mymath{\beta=2.8}) with FWHM of 2 pixels truncated
-at 10 times the FWHM.
+The @option{--kernel} option is not mandatory.
+If not called, a 2D Gaussian profile with a FWHM of 2 pixels truncated at 5 
times the FWHM is used.
+This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi 
and Ichikawa [2015].
+You can use MakeProfiles to build a kernel with any of its recognized profile 
types and parameters.
+For more details, please see @ref{MakeProfiles output dataset}.
+For example, the command below will make a Moffat kernel (with 
@mymath{\beta=2.8}) with FWHM of 2 pixels truncated at 10 times the FWHM.
 
 @example
 $ astmkprof --oversample=1 --kernel=moffat,2,2.8,10
 @end example
 
-Since convolution can be the slowest step of NoiseChisel, for large
-datasets, you can convolve the image once with Gnuastro's Convolve (see
-@ref{Convolve}), and use the @option{--convolved} option to feed it
-directly to NoiseChisel. This can help getting faster results when you are
-playing/testing the higher-level options.
+Since convolution can be the slowest step of NoiseChisel, for large datasets, 
you can convolve the image once with Gnuastro's Convolve (see @ref{Convolve}), 
and use the @option{--convolved} option to feed it directly to NoiseChisel.
+This can help in getting faster results when you are playing with, or 
testing, the higher-level options.
 
 @item --khdu=STR
 HDU containing the kernel in the file given to the @option{--kernel}
 option.
 
 @item --convolved=STR
-Use this file as the convolved image and don't do convolution (ignore
-@option{--kernel}). NoiseChisel will just check the size of the given
-dataset is the same as the input's size. If a wrong image (with the same
-size) is given to this option, the results (errors, bugs, etc) are
-unpredictable. So please use this option with care and in a highly
-controlled environment, for example in the scenario discussed below.
-
-In almost all situations, as the input gets larger, the single most CPU
-(and time) consuming step in NoiseChisel (and other programs that need a
-convolved image) is convolution. Therefore minimizing the number of
-convolutions can save a significant amount of time in some scenarios. One
-such scenario is when you want to segment NoiseChisel's detections using
-the same kernel (with @ref{Segment}, which also supports this
-@option{--convolved} option). This scenario would require two convolutions
-of the same dataset: once by NoiseChisel and once by Segment. Using this
-option in both programs, only one convolution (prior to running
-NoiseChisel) is enough.
-
-Another common scenario where this option can be convenient is when you are
-testing NoiseChisel (or Segment) for the best parameters. You have to run
-NoiseChisel multiple times and see the effect of each change. However, once
-you are happy with the kernel, re-convolving the input on every change of
-higher-level parameters will greatly hinder, or discourage, further
-testing. With this option, you can convolve the input image with your
-chosen kernel once before running NoiseChisel, then feed it to NoiseChisel
-on each test run and thus save valuable time for better/more tests.
-
-To build your desired convolution kernel, you can use
-@ref{MakeProfiles}. To convolve the image with a given kernel you can use
-@ref{Convolve}. Spatial domain convolution is mandatory: in the frequency
-domain, blank pixels (if present) will cover the whole image and gradients
-will appear on the edges, see @ref{Spatial vs. Frequency domain}.
-
-Below you can see an example of the second scenario: you want to see how
-variation of the growth level (through the @option{--detgrowquant} option)
-will affect the final result. Recall that you can ignore all the extra
-spaces, new lines, and backslash's (`@code{\}') if you are typing in the
-terminal. In a shell script, remove the @code{$} signs at the start of the
-lines.
+Use this file as the convolved image and don't do convolution (ignore 
@option{--kernel}).
+NoiseChisel will just check that the size of the given dataset is the same as 
the input's size.
+If a wrong image (with the same size) is given to this option, the results 
(errors, bugs, etc.) are unpredictable.
+So please use this option with care and in a highly controlled environment, 
for example in the scenario discussed below.
+
+In almost all situations, as the input gets larger, the single most CPU (and 
time) consuming step in NoiseChisel (and other programs that need a convolved 
image) is convolution.
+Therefore minimizing the number of convolutions can save a significant amount 
of time in some scenarios.
+One such scenario is when you want to segment NoiseChisel's detections using 
the same kernel (with @ref{Segment}, which also supports this 
@option{--convolved} option).
+This scenario would require two convolutions of the same dataset: once by 
NoiseChisel and once by Segment.
+By using this option in both programs, only one convolution (prior to running 
NoiseChisel) is enough.
+
+Another common scenario where this option can be convenient is when you are 
testing NoiseChisel (or Segment) for the best parameters.
+You have to run NoiseChisel multiple times and see the effect of each change.
+However, once you are happy with the kernel, re-convolving the input on every 
change of higher-level parameters will greatly hinder, or discourage, further 
testing.
+With this option, you can convolve the input image with your chosen kernel 
once before running NoiseChisel, then feed it to NoiseChisel on each test run 
and thus save valuable time for better/more tests.
+
+To build your desired convolution kernel, you can use @ref{MakeProfiles}.
+To convolve the image with a given kernel you can use @ref{Convolve}.
+Spatial domain convolution is mandatory: in the frequency domain, blank pixels 
(if present) will cover the whole image and gradients will appear on the edges, 
see @ref{Spatial vs. Frequency domain}.
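+As a sketch of the first scenario above (the file names are only 
illustrative), the input is convolved once and the same convolved image is 
then given to both NoiseChisel and Segment:
+
+@example
+$ astconvolve input.fits --kernel=kernel.fits \
+              --domain=spatial --output=conv.fits
+$ astnoisechisel input.fits --convolved=conv.fits \
+                 --output=nc.fits
+$ astsegment nc.fits --convolved=conv.fits
+@end example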
+
+Below you can see an example of the second scenario: you want to see how 
variation of the growth level (through the @option{--detgrowquant} option) will 
affect the final result.
+Recall that you can ignore all the extra spaces, new lines, and backslashes 
(`@code{\}') if you are typing in the terminal.
+In a shell script, remove the @code{$} signs at the start of the lines.
 
 @example
 ## Make the kernel to convolve with.
@@ -13515,426 +12996,292 @@ $ for g in 60 65 70 75 80 85 90; do                 
         \
 
 
 @item --chdu=STR
-The HDU/extension containing the convolved image in the file given to
-@option{--convolved}.
+The HDU/extension containing the convolved image in the file given to 
@option{--convolved}.
 
 @item -w STR
 @itemx --widekernel=STR
-File name of a wider kernel to use in estimating the difference of the mode
-and median in a tile (this difference is used to identify the significance
-of signal in that tile, see @ref{Quantifying signal in a tile}). As
-displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa [2015]}, a wider kernel will help in identifying the skewness
-caused by data in noise. The image that is convolved with this kernel is
-@emph{only} used for this purpose. Once the mode is found to be
-sufficiently close to the median, the quantile threshold is found on the
-image convolved with the sharper kernel (@option{--kernel}), see
-@option{--qthresh}).
-
-Since convolution will significantly slow down the processing, this feature
-is optional. When it isn't given, the image that is convolved with
-@option{--kernel} will be used to identify good tiles @emph{and} apply the
-quantile threshold. This option is mainly useful in conditions were you
-have a very large, extended, diffuse signal that is still present in the
-usable tiles when using @option{--kernel}. See @ref{Detecting large
-extended targets} for a practical demonstration on how to inspect the tiles
-used in identifying the quantile threshold.
+File name of a wider kernel to use in estimating the difference of the mean 
and median in a tile (this difference is used to identify the significance of 
signal in that tile, see @ref{Quantifying signal in a tile}).
+As displayed in Figure 4 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi 
and Ichikawa [2015]}, a wider kernel will help in identifying the skewness 
caused by data in noise.
+The image that is convolved with this kernel is @emph{only} used for this 
purpose.
+Once the mean is found to be sufficiently close to the median, the quantile 
threshold is found on the image convolved with the sharper kernel 
(@option{--kernel}); see @option{--qthresh}.
+
+Since convolution will significantly slow down the processing, this feature is 
optional.
+When it isn't given, the image that is convolved with @option{--kernel} will 
be used to identify good tiles @emph{and} apply the quantile threshold.
+This option is mainly useful in conditions where you have a very large, 
extended, diffuse signal that is still present in the usable tiles when using 
@option{--kernel}.
+See @ref{Detecting large extended targets} for a practical demonstration on 
how to inspect the tiles used in identifying the quantile threshold.
 
 @item --whdu=STR
 HDU containing the kernel file given to the @option{--widekernel} option.
 
 @item -L INT[,INT]
 @itemx --largetilesize=INT[,INT]
-The size of each tile for the tessellation with the larger tile
-sizes. Except for the tile size, all the other parameters for this
-tessellation are taken from the common options described in @ref{Processing
-options}. The format is identical to that of the @option{--tilesize} option
-that is discussed in that section.
+The size of each tile for the tessellation with the larger tile sizes.
+Except for the tile size, all the other parameters for this tessellation are 
taken from the common options described in @ref{Processing options}.
+The format is identical to that of the @option{--tilesize} option that is 
discussed in that section.
 @end table
 
 @node Detection options, NoiseChisel output, NoiseChisel input, Invoking 
astnoisechisel
 @subsubsection Detection options
 
-Detection is the process of separating the pixels in the image into two
-groups: 1) Signal, and 2) Noise. Through the parameters below, you can
-customize the detection process in NoiseChisel. Recall that you can always
-see the full list of NoiseChisel's options with the @option{--help} (see
-@ref{Getting help}), or @option{--printparams} (or @option{-P}) to see
-their values (see @ref{Operating mode options}).
+Detection is the process of separating the pixels in the image into two 
groups: 1) Signal, and 2) Noise.
+Through the parameters below, you can customize the detection process in 
NoiseChisel.
+Recall that you can always see the full list of NoiseChisel's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
 
 @table @option
 
 @item -Q FLT
 @itemx --meanmedqdiff=FLT
-The maximum acceptable distance between the quantiles of the mean and
-median in each tile, see @ref{Quantifying signal in a tile}. The quantile
-threshold estimates are measured on tiles where the quantiles of their mean
-and median are less distant than the value given to this option. For
-example @option{--meanmedqdiff=0.01} means that only tiles where the mean's
-quantile is between 0.49 and 0.51 (recall that the median's quantile is
-0.5) will be used.
+The maximum acceptable distance between the quantiles of the mean and median 
in each tile, see @ref{Quantifying signal in a tile}.
+The quantile threshold estimates are measured on tiles where the quantiles of 
their mean and median are less distant than the value given to this option.
+For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --outliersclip=FLT,FLT
-@mymath{\sigma}-clipping parameters for the outlier rejection of the
-quantile threshold. The format of the given values is similar to
-@option{--sigmaclip} below. In NoiseChisel, outlier rejection on tiles is
-used when identifying the quantile thresholds (@option{--qthresh},
-@option{--noerodequant}, and @option{detgrowquant}).
-
-Outlier rejection is useful when the dataset contains a large and diffuse
-(almost flat within each tile) signal. The flatness of the profile will
-cause it to successfully pass the mean-median quantile difference test, so
-we'll need to use the distribution of successful tiles for removing these
-false positives. For more, see the latter half of @ref{Quantifying signal
-in a tile}.
+@mymath{\sigma}-clipping parameters for the outlier rejection of the quantile 
threshold.
+The format of the given values is similar to @option{--sigmaclip} below.
+In NoiseChisel, outlier rejection on tiles is used when identifying the 
quantile thresholds (@option{--qthresh}, @option{--noerodequant}, and 
@option{--detgrowquant}).
+
+Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
+The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles for removing these false positives.
+For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
-Multiple of sigma to define an outlier. If this option is given a value of
-zero, no outlier rejection will take place. For more see
-@option{--outliersclip} and the latter half of @ref{Quantifying signal in a
-tile}.
+Multiple of sigma to define an outlier.
+If this option is given a value of zero, no outlier rejection will take place.
+For more see @option{--outliersclip} and the latter half of @ref{Quantifying 
signal in a tile}.
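+For example, a call like the one below (the values are only illustrative) 
would @mymath{\sigma}-clip the tile distribution at 3 with a termination 
criterion of 0.2, and reject tiles more than @mymath{5\sigma} away from the 
clipped distribution:
+
+@example
+$ astnoisechisel input.fits --outliersclip=3,0.2 --outliersigma=5
+@end example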
 
 @item -t FLT
 @itemx --qthresh=FLT
-The quantile threshold to apply to the convolved image. The detection
-process begins with applying a quantile threshold to each of the tiles in
-the small tessellation. The quantile is only calculated for tiles that
-don't have any significant signal within them, see @ref{Quantifying signal
-in a tile}. Interpolation is then used to give a value to the unsuccessful
-tiles and it is finally smoothed.
+The quantile threshold to apply to the convolved image.
+The detection process begins with applying a quantile threshold to each of the 
tiles in the small tessellation.
+The quantile is only calculated for tiles that don't have any significant 
signal within them, see @ref{Quantifying signal in a tile}.
+Interpolation is then used to give a value to the unsuccessful tiles and it is 
finally smoothed.
 
 @cindex Quantile
 @cindex Binary image
 @cindex Foreground pixels
 @cindex Background pixels
-The quantile value is a floating point value between 0 and 1. Assume that
-we have sorted the @mymath{N} data elements of a distribution (the pixels
-in each mesh on the convolved image). The quantile (@mymath{q}) of this
-distribution is the value of the element with an index of (the nearest
-integer to) @mymath{q\times{N}} in the sorted data set. After thresholding
-is complete, we will have a binary (two valued) image. The pixels above the
-threshold are known as foreground pixels (have a value of 1) while those
-which lie below the threshold are known as background (have a value of 0).
+The quantile value is a floating point value between 0 and 1.
+Assume that we have sorted the @mymath{N} data elements of a distribution (the 
pixels in each mesh on the convolved image).
+The quantile (@mymath{q}) of this distribution is the value of the element 
with an index of (the nearest integer to) @mymath{q\times{N}} in the sorted 
data set.
+For example, for @mymath{q=0.4} over a tile of @mymath{N=10000} pixels, it is 
the value of the 4000th element of the sorted pixel values.
+After thresholding is complete, we will have a binary (two valued) image.
+The pixels above the threshold are known as foreground pixels (have a value of 
1) while those which lie below the threshold are known as background (have a 
value of 0).
 
 @item --smoothwidth=INT
-Width of flat kernel used to smooth the interpolated quantile thresholds,
-see @option{--qthresh} for more.
+Width of flat kernel used to smooth the interpolated quantile thresholds, see 
@option{--qthresh} for more.
 
 @cindex NaN
 @item --checkqthresh
-Check the quantile threshold values on the mesh grid. A file suffixed with
-@file{_qthresh.fits} will be created showing each step. With this option,
-NoiseChisel will abort as soon as quantile estimation has been completed,
-allowing you to inspect the steps leading to the final quantile threshold,
-this can be disabled with @option{--continueaftercheck}. By default the
-output will have the same pixel size as the input, but with the
-@option{--oneelempertile} option, only one pixel will be used for each tile
-(see @ref{Processing options}).
+Check the quantile threshold values on the mesh grid.
+A file suffixed with @file{_qthresh.fits} will be created showing each step.
+With this option, NoiseChisel will abort as soon as quantile estimation has 
been completed, allowing you to inspect the steps leading to the final 
quantile threshold; this behavior can be disabled with 
@option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the 
@option{--oneelempertile} option, only one pixel will be used for each tile 
(see @ref{Processing options}).
 
 @item --blankasforeground
-In the erosion and opening steps below, treat blank elements as foreground
-(regions above the threshold). By default, blank elements in the dataset
-are considered to be background, so if a foreground pixel is touching it,
-it will be eroded. This option is irrelevant if the datasets contains no
-blank elements.
+In the erosion and opening steps below, treat blank elements as foreground 
(regions above the threshold).
+By default, blank elements in the dataset are considered to be background, so 
if a foreground pixel is touching one, it will be eroded.
+This option is irrelevant if the dataset contains no blank elements.
 
-When there are many blank elements in the dataset, treating them as
-foreground will systematically erode their regions less, therefore
-systematically creating more false positives. So use this option (when
-blank values are present) with care.
+When there are many blank elements in the dataset, treating them as foreground 
will systematically erode their regions less, therefore systematically creating 
more false positives.
+So use this option (when blank values are present) with care.
 
 @item -e INT
 @itemx --erode=INT
 @cindex Erosion
-The number of erosions to apply to the binary thresholded image. Erosion is
-simply the process of flipping (from 1 to 0) any of the foreground pixels
-that neighbor a background pixel. In a 2D image, there are two kinds of
-neighbors, 4-connected and 8-connected neighbors. You can specify which
-type of neighbors should be used for erosion with the @option{--erodengb}
-option, see below.
-
-Erosion has the effect of shrinking the foreground pixels. To put it
-another way, it expands the holes. This is a founding principle in
-NoiseChisel: it exploits the fact that with very low thresholds, the
-holes in the very low surface brightness regions of an image will be
-smaller than regions that have no signal. Therefore by expanding those
-holes, we are able to separate the regions harboring signal.
+The number of erosions to apply to the binary thresholded image.
+Erosion is simply the process of flipping (from 1 to 0) any of the foreground 
pixels that neighbor a background pixel.
+In a 2D image, there are two kinds of neighbors, 4-connected and 8-connected 
neighbors.
+You can specify which type of neighbors should be used for erosion with the 
@option{--erodengb} option, see below.
+
+Erosion has the effect of shrinking the foreground pixels.
+To put it another way, it expands the holes.
+This is a founding principle in NoiseChisel: it exploits the fact that with 
very low thresholds, the holes in the very low surface brightness regions of an 
image will be smaller than regions that have no signal.
+Therefore by expanding those holes, we are able to separate the regions 
harboring signal.
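+For example, the command below (the values are only illustrative) would 
apply two erosions using 4-connected neighbors (see @option{--erodengb} 
below):
+
+@example
+$ astnoisechisel input.fits --erode=2 --erodengb=4
+@end example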
 
 @item --erodengb=INT
-The type of neighborhood (structuring element) used in erosion, see
-@option{--erode} for an explanation on erosion. Only two integer values are
-acceptable: 4 or 8. In 4-connectivity, the neighbors of a pixel are defined
-as the four pixels on the top, bottom, right and left of a pixel that share
-an edge with it. The 8-connected neighbors on the other hand include the
-4-connected neighbors along with the other 4 pixels that share a corner
-with this pixel. See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015)
-for a demonstration.
+The type of neighborhood (structuring element) used in erosion, see 
@option{--erode} for an explanation on erosion.
+Only two integer values are acceptable: 4 or 8.
+In 4-connectivity, the neighbors of a pixel are defined as the four pixels on 
the top, bottom, right and left of a pixel that share an edge with it.
+The 8-connected neighbors on the other hand include the 4-connected neighbors 
along with the other 4 pixels that share a corner with this pixel.
+See Figure 6 (a) and (b) in Akhlaghi and Ichikawa (2015) for a demonstration.
 
 @item --noerodequant
-Pure erosion is going to carve off sharp and small objects completely out
-of the detected regions. This option can be used to avoid missing such
-sharp and small objects (which have significant pixels, but not over a
-large area). All pixels with a value larger than the significance level
-specified by this option will not be eroded during the erosion step
-above. However, they will undergo the erosion and dilation of the opening
-step below.
-
-Like the @option{--qthresh} option, the significance level is
-determined using the quantile (a value between 0 and 1). Just as a
-reminder, in the normal distribution, @mymath{1\sigma}, @mymath{1.5\sigma},
-and @mymath{2\sigma} are approximately on the 0.84, 0.93, and 0.98
-quantiles.
+Pure erosion is going to carve off sharp and small objects completely out of the detected regions.
+This option can be used to avoid missing such sharp and small objects (which have significant pixels, but not over a large area).
+All pixels with a value larger than the significance level specified by this option will not be eroded during the erosion step above.
+However, they will undergo the erosion and dilation of the opening step below.
+
+Like the @option{--qthresh} option, the significance level is determined using the quantile (a value between 0 and 1).
+Just as a reminder, in the normal distribution, @mymath{1\sigma}, @mymath{1.5\sigma}, and @mymath{2\sigma} are approximately on the 0.84, 0.93, and 0.98 quantiles.
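+
+For example, to keep pixels above the (approximately) @mymath{2\sigma} level from being eroded, a command like the one below can be used (the value is only an illustration, not a recommendation):
+
+@example
+$ astnoisechisel image.fits --noerodequant=0.98
+@end example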
 
 @item -p INT
 @itemx --opening=INT
-Depth of opening to be applied to the eroded binary image. Opening is a
-composite operation. When opening a binary image with a depth of
-@mymath{n}, @mymath{n} erosions (explained in @option{--erode}) are
-followed by @mymath{n} dilations. Simply put, dilation is the inverse of
-erosion. When dilating an image any background pixel is flipped (from 0 to
-1) to become a foreground pixel. Dilation has the effect of fattening the
-foreground. Note that in NoiseChisel, the erosion which is part of opening
-is independent of the initial erosion that is done on the thresholded image
-(explained in @option{--erode}). The structuring element for the opening
-can be specified with the @option{--openingngb} option. Opening has the
-effect of removing the thin foreground connections (mostly noise) between
-separate foreground `islands' (detections) thereby completely isolating
-them. Once opening is complete, we have @emph{initial} detections.
+Depth of opening to be applied to the eroded binary image.
+Opening is a composite operation.
+When opening a binary image with a depth of @mymath{n}, @mymath{n} erosions (explained in @option{--erode}) are followed by @mymath{n} dilations.
+Simply put, dilation is the inverse of erosion.
+When dilating an image, any background pixel is flipped (from 0 to 1) to become a foreground pixel.
+Dilation has the effect of fattening the foreground.
+Note that in NoiseChisel, the erosion which is part of opening is independent of the initial erosion that is done on the thresholded image (explained in @option{--erode}).
+The structuring element for the opening can be specified with the @option{--openingngb} option.
+Opening has the effect of removing the thin foreground connections (mostly noise) between separate foreground `islands' (detections), thereby completely isolating them.
+Once opening is complete, we have @emph{initial} detections.
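+
+For example, to open the eroded image with a depth of 2 and 8-connected structuring elements, a command like this can be used (the values are only for illustration):
+
+@example
+$ astnoisechisel image.fits --opening=2 --openingngb=8
+@end example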
 
 @item --openingngb=INT
-The structuring element used for opening, see @option{--erodengb} for more
-information about a structuring element.
+The structuring element used for opening; see @option{--erodengb} for more information about a structuring element.
 
 @item --skyfracnoblank
-Ignore blank pixels when estimating the fraction of undetected pixels for
-Sky estimation. NoiseChisel only measures the Sky over the tiles that have
-a sufficiently large fraction of undetected pixels (value given to
-@option{--minskyfrac}). By default this fraction is found by dividing
-number of undetected pixels in a tile by the tile's area. But this default
-behavior ignores the possibility of blank pixels. In situations that
-blank/masked pixels are scattered across the image and if they are large
-enough, all the tiles can fail the @option{--minskyfrac} test, thus not
-allowing NoiseChisel to proceed. With this option, such scenarios can be
-fixed: the denominator of the fraction will be the number of non-blank
-elements in the tile, not the total tile area.
+Ignore blank pixels when estimating the fraction of undetected pixels for Sky estimation.
+NoiseChisel only measures the Sky over the tiles that have a sufficiently large fraction of undetected pixels (value given to @option{--minskyfrac}).
+By default this fraction is found by dividing the number of undetected pixels in a tile by the tile's area.
+But this default behavior ignores the possibility of blank pixels.
+In situations where blank/masked pixels are scattered across the image, and if they are large enough, all the tiles can fail the @option{--minskyfrac} test, thus not allowing NoiseChisel to proceed.
+With this option, such scenarios can be fixed: the denominator of the fraction will be the number of non-blank elements in the tile, not the total tile area.
 
 @item -B FLT
 @itemx --minskyfrac=FLT
-Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a
-tile. Only tiles with a fraction of undetected pixels (Sky) larger than
-this value will be used to estimate the Sky value. NoiseChisel uses this
-option value twice to estimate the Sky value: after initial detections and
-in the end when false detections have been removed.
-
-Because of the PSF and their intrinsic amorphous properties, astronomical
-objects (except cosmic rays) never have a clear cutoff and commonly sink
-into the noise very slowly. Even below the very low thresholds used by
-NoiseChisel. So when a large fraction of the area of one mesh is covered by
-detections, it is very plausible that their faint wings are present in the
-undetected regions (hence causing a bias in any measurement). To get an
-accurate measurement of the above parameters over the tessellation, tiles
-that harbor too many detected regions should be excluded. The used tiles
-are visible in the respective @option{--check} option of the given step.
+Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a tile.
+Only tiles with a fraction of undetected pixels (Sky) larger than this value will be used to estimate the Sky value.
+NoiseChisel uses this option value twice to estimate the Sky value: after the initial detections, and at the end, when false detections have been removed.
+
+Because of the PSF and their intrinsic amorphous properties, astronomical objects (except cosmic rays) never have a clear cutoff and commonly sink into the noise very slowly, even below the very low thresholds used by NoiseChisel.
+So when a large fraction of the area of one mesh is covered by detections, it is very plausible that their faint wings are present in the undetected regions (hence causing a bias in any measurement).
+To get an accurate measurement of the above parameters over the tessellation, tiles that harbor too many detected regions should be excluded.
+The used tiles are visible in the respective @option{--check} option of the given step.
 
 @item --checkdetsky
-Check the initial approximation of the sky value and its standard deviation
-in a FITS file ending with @file{_detsky.fits}. With this option,
-NoiseChisel will abort as soon as the sky value used for defining
-pseudo-detections is complete. This allows you to inspect the steps leading
-to the final quantile threshold, this behavior can be disabled with
-@option{--continueaftercheck}. By default the output will have the same
-pixel size as the input, but with the @option{--oneelempertile} option,
-only one pixel will be used for each tile (see @ref{Processing options}).
+Check the initial approximation of the sky value and its standard deviation in a FITS file ending with @file{_detsky.fits}.
+With this option, NoiseChisel will abort as soon as the sky value used for defining pseudo-detections is complete.
+This allows you to inspect the steps leading to the final quantile threshold; this behavior can be disabled with @option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).
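+
+For example, to inspect the initial Sky approximation with one output pixel per tile (@file{image.fits} is again a hypothetical input name):
+
+@example
+$ astnoisechisel image.fits --checkdetsky --oneelempertile
+@end example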
 
 @item -s FLT,FLT
 @itemx --sigmaclip=FLT,FLT
-The @mymath{\sigma}-clipping parameters for measuring the initial and final
-Sky values from the undetected pixels, see @ref{Sigma clipping}.
-
-This option takes two values which are separated by a comma (@key{,}). Each
-value can either be written as a single number or as a fraction of two
-numbers (for example @code{3,1/10}). The first value to this option is the
-multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that
-section). The second value is the exit criteria. If it is less than 1, then
-it is interpreted as tolerance and if it is larger than one it is assumed
-to be the fixed number of iterations. Hence, in the latter case the value
-must be an integer.
+The @mymath{\sigma}-clipping parameters for measuring the initial and final Sky values from the undetected pixels, see @ref{Sigma clipping}.
+
+This option takes two values which are separated by a comma (@key{,}).
+Each value can either be written as a single number or as a fraction of two numbers (for example @code{3,1/10}).
+The first value to this option is the multiple of @mymath{\sigma} that will be clipped (@mymath{\alpha} in that section).
+The second value is the exit criterion.
+If it is less than 1, it is interpreted as a tolerance; if it is larger than 1, it is assumed to be the fixed number of iterations.
+Hence, in the latter case the value must be an integer.
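+
+For example, to clip at @mymath{3\sigma} with a tolerance of 0.1 (the same as the @code{3,1/10} example above, only for illustration), you can run:
+
+@example
+$ astnoisechisel image.fits --sigmaclip=3,0.1
+@end example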
 
 @item -R FLT
 @itemx --dthresh=FLT
-The detection threshold: a multiple of the initial Sky standard deviation
-added with the initial Sky approximation (which you can inspect with
-@option{--checkdetsky}). This flux threshold is applied to the initially
-undetected regions on the unconvolved image. The background pixels that are
-completely engulfed in a 4-connected foreground region are converted to
-background (holes are filled) and one opening (depth of 1) is applied over
-both the initially detected and undetected regions. The Signal to noise
-ratio of the resulting `pseudo-detections' are used to identify true
-vs. false detections. See Section 3.1.5 and Figure 7 in Akhlaghi and
-Ichikawa (2015) for a very complete explanation.
+The detection threshold: a multiple of the initial Sky standard deviation added with the initial Sky approximation (which you can inspect with @option{--checkdetsky}).
+This flux threshold is applied to the initially undetected regions on the unconvolved image.
+The background pixels that are completely engulfed in a 4-connected foreground region are converted to background (holes are filled) and one opening (depth of 1) is applied over both the initially detected and undetected regions.
+The Signal to noise ratio of the resulting `pseudo-detections' is used to identify true vs. false detections.
+See Section 3.1.5 and Figure 7 in Akhlaghi and Ichikawa (2015) for a very complete explanation.
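+
+For example, to try a slightly higher detection threshold and inspect its effect on the pseudo-detections (the value here is purely illustrative, not a recommendation):
+
+@example
+$ astnoisechisel image.fits --dthresh=0.1 --checkdetection
+@end example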
 
 @item --dopening=INT
 The number of openings to do after applying @option{--dthresh}.
 
 @item --dopeningngb=INT
-The connectivity used in the opening of @option{--dopening}. In a 2D image
-this must be either 4 or 8. The stronger the connectivity, the more smaller
-regions will be discarded.
+The connectivity used in the opening of @option{--dopening}.
+In a 2D image this must be either 4 or 8.
+The stronger the connectivity, the more of the smaller regions will be discarded.
 
 @item --holengb=INT
-The connectivity (defined by the number of neighbors) to fill holes after
-applying @option{--dthresh} (above) to find pseudo-detections. For example
-in a 2D image it must be 4 (the neighbors that are most strongly connected)
-or 8 (all neighbors). The stronger the connectivity, the stronger the hole
-will be enclosed. So setting a value of 8 in a 2D image means that the
-walls of the hole are 4-connected. If standard (near Sky level) values are
-given to @option{--dthresh}, setting @option{--holengb=4}, might fill the
-complete dataset and thus not create enough pseudo-detections.
+The connectivity (defined by the number of neighbors) to fill holes after applying @option{--dthresh} (above) to find pseudo-detections.
+For example, in a 2D image it must be 4 (the neighbors that are most strongly connected) or 8 (all neighbors).
+The stronger the connectivity, the more strongly the hole will be enclosed.
+So setting a value of 8 in a 2D image means that the walls of the hole are 4-connected.
+If standard (near Sky level) values are given to @option{--dthresh}, setting @option{--holengb=4} might fill the complete dataset and thus not create enough pseudo-detections.
 
 @item --pseudoconcomp=INT
-The connectivity (defined by the number of neighbors) to find individual
-pseudo-detections. If it is a weaker connectivity (4 in a 2D image), then
-pseudo-detections that are connected on the corners will be treated as
-separate.
+The connectivity (defined by the number of neighbors) to find individual pseudo-detections.
+If it is a weaker connectivity (4 in a 2D image), then pseudo-detections that are connected on the corners will be treated as separate.
 
 @item -m INT
 @itemx --snminarea=INT
-The minimum area to calculate the Signal to noise ratio on the
-pseudo-detections of both the initially detected and undetected
-regions. When the area in a pseudo-detection is too small, the Signal to
-noise ratio measurements will not be accurate and their distribution will
-be heavily skewed to the positive. So it is best to ignore any
-pseudo-detection that is smaller than this area. Use
-@option{--detsnhistnbins} to check if this value is reasonable or not.
+The minimum area to calculate the Signal to noise ratio on the pseudo-detections of both the initially detected and undetected regions.
+When the area in a pseudo-detection is too small, the Signal to noise ratio measurements will not be accurate and their distribution will be heavily skewed to the positive.
+So it is best to ignore any pseudo-detection that is smaller than this area.
+Use @option{--detsnhistnbins} to check if this value is reasonable or not.
 
 @item --checksn
-Save the S/N values of the pseudo-detections (and possibly grown detections
-if @option{--cleangrowndet} is called) into separate tables. If
-@option{--tableformat} is a FITS table, each table will be written into a
-separate extension of one file suffixed with @file{_detsn.fits}. If it is
-plain text, a separate file will be made for each table (ending in
-@file{_detsn_sky.txt}, @file{_detsn_det.txt} and
-@file{_detsn_grown.txt}). For more on @option{--tableformat} see @ref{Input
-output options}.
-
-You can use these to inspect the S/N values and their distribution (in
-combination with the @option{--checkdetection} option to see where the
-pseudo-detections are).  You can use Gnuastro's @ref{Statistics} to make a
-histogram of the distribution or any other analysis you would like for
-better understanding of the distribution (for example through a histogram).
+Save the S/N values of the pseudo-detections (and possibly grown detections if @option{--cleangrowndet} is called) into separate tables.
+If @option{--tableformat} is a FITS table, each table will be written into a separate extension of one file suffixed with @file{_detsn.fits}.
+If it is plain text, a separate file will be made for each table (ending in @file{_detsn_sky.txt}, @file{_detsn_det.txt} and @file{_detsn_grown.txt}).
+For more on @option{--tableformat} see @ref{Input output options}.
+
+You can use these to inspect the S/N values and their distribution (in combination with the @option{--checkdetection} option to see where the pseudo-detections are).
+For example, you can use Gnuastro's @ref{Statistics} to make a histogram of the distribution, or do any other analysis that helps you better understand it.
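+
+As a sketch (assuming @option{--tableformat} was set to plain text, and that the S/N values are in the second column of the hypothetical output @file{xx_detsn_sky.txt}), a quick summary of the distribution can be printed on the command-line with Gnuastro's Statistics program:
+
+@example
+$ aststatistics xx_detsn_sky.txt -c2
+@end example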
 
 @item --minnumfalse=INT
-The minimum number of `pseudo-detections' over the undetected regions to
-identify a Signal-to-Noise ratio threshold. The Signal to noise ratio (S/N)
-of false pseudo-detections in each tile is found using the quantile of the
-S/N distribution of the pseudo-detections over the undetected pixels in each
-mesh. If the number of S/N measurements is not large enough, the quantile
-will not be accurate (can have large scatter). For example if you set
-@option{--snquant=0.99} (or the top 1 percent), then it is best to have at
-least 100 S/N measurements.
+The minimum number of `pseudo-detections' over the undetected regions to identify a Signal-to-Noise ratio threshold.
+The Signal to noise ratio (S/N) of false pseudo-detections in each tile is found using the quantile of the S/N distribution of the pseudo-detections over the undetected pixels in each mesh.
+If the number of S/N measurements is not large enough, the quantile will not be accurate (it can have a large scatter).
+For example, if you set @option{--snquant=0.99} (or the top 1 percent), then it is best to have at least 100 S/N measurements.
 
 @item -c FLT
 @itemx --snquant=FLT
-The quantile of the Signal to noise ratio distribution of the
-pseudo-detections in each mesh to use for filling the large mesh grid. Note
-that this is only calculated for the large mesh grids that satisfy the
-minimum fraction of undetected pixels (value of @option{--minbfrac}) and
-minimum number of pseudo-detections (value of @option{--minnumfalse}).
+The quantile of the Signal to noise ratio distribution of the pseudo-detections in each mesh to use for filling the large mesh grid.
+Note that this is only calculated for the large mesh grids that satisfy the minimum fraction of undetected pixels (value of @option{--minskyfrac}) and minimum number of pseudo-detections (value of @option{--minnumfalse}).
 
 @item --snthresh=FLT
-Manually set the signal-to-noise ratio of true pseudo-detections. With this
-option, NoiseChisel will not attempt to find pseudo-detections over the
-noisy regions of the dataset, but will directly go onto applying the
-manually input value.
+Manually set the signal-to-noise ratio of true pseudo-detections.
+With this option, NoiseChisel will not attempt to find pseudo-detections over the noisy regions of the dataset, but will directly apply the manually input value.
 
-This option is useful in crowded images where there is no blank sky to find
-the sky pseudo-detections. You can get this value on a similarly reduced
-dataset (from another region of the Sky with more undetected regions
-spaces).
+This option is useful in crowded images where there is no blank sky to find the sky pseudo-detections.
+You can get this value from a similarly reduced dataset (from another region of the Sky with more undetected regions).
 
 @item -d FLT
 @itemx --detgrowquant=FLT
-Quantile limit to ``grow'' the final detections. As discussed in the
-previous options, after applying the initial quantile threshold, layers of
-pixels are carved off the objects to identify true signal. With this step
-you can return those low surface brightness layers that were carved off
-back to the detections. To disable growth, set the value of this option to
-@code{1}.
-
-The process is as follows: after the true detections are found, all the
-non-detected pixels above this quantile will be put in a list and used to
-``grow'' the true detections (seeds of the growth). Like all quantile
-thresholds, this threshold is defined and applied to the convolved
-dataset. Afterwards, the dataset is dilated once (with minimum
-connectivity) to connect very thin regions on the boundary: imagine
-building a dam at the point rivers spill into an open sea/ocean. Finally,
-all holes are filled. In the geography metaphor, holes can be seen as the
-closed (by the dams) rivers and lakes, so this process is like turning the
-water in all such rivers and lakes into soil. See
-@option{--detgrowmaxholesize} for configuring the hole filling.
+Quantile limit to ``grow'' the final detections.
+As discussed in the previous options, after applying the initial quantile threshold, layers of pixels are carved off the objects to identify true signal.
+With this step you can return those low surface brightness layers that were carved off back to the detections.
+To disable growth, set the value of this option to @code{1}.
+
+The process is as follows: after the true detections are found, all the non-detected pixels above this quantile will be put in a list and used to ``grow'' the true detections (seeds of the growth).
+Like all quantile thresholds, this threshold is defined and applied to the convolved dataset.
+Afterwards, the dataset is dilated once (with minimum connectivity) to connect very thin regions on the boundary: imagine building a dam at the point where rivers spill into an open sea/ocean.
+Finally, all holes are filled.
+In the geography metaphor, holes can be seen as the rivers and lakes that were closed off by the dams, so this process is like turning the water in all such rivers and lakes into soil.
+See @option{--detgrowmaxholesize} for configuring the hole filling.
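+
+For example, to disable growth entirely (as described above), you can run:
+
+@example
+$ astnoisechisel image.fits --detgrowquant=1
+@end example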
 
 @item --detgrowmaxholesize=INT
-The maximum hole size to fill during the final expansion of the true
-detections as described in @option{--detgrowquant}. This is necessary when
-the input contains many smaller objects and can be used to avoid marking
-blank sky regions as detections.
-
-For example multiple galaxies can be positioned such that they surround an
-empty region of sky. If all the holes are filled, the Sky region in between
-them will be taken as a detection which is not desired. To avoid such
-cases, the integer given to this option must be smaller than the hole
-between such objects. However, we should caution that unless the ``hole''
-is very large, the combined faint wings of the galaxies might actually be
-present in between them, so be very careful in not filling such holes.
-
-On the other hand, if you have a very large (and extended) galaxy, the
-diffuse wings of the galaxy may create very large holes over the
-detections. In such cases, a large enough value to this option will cause
-all such holes to be detected as part of the large galaxy and thus help in
-detecting it to extremely low surface brightness limits. Therefore,
-especially when large and extended objects are present in the image, it is
-recommended to give this option (very) large values. For one real-world
-example, see @ref{Detecting large extended targets}.
+The maximum hole size to fill during the final expansion of the true detections, as described in @option{--detgrowquant}.
+This is necessary when the input contains many smaller objects, and can be used to avoid marking blank sky regions as detections.
+
+For example, multiple galaxies can be positioned such that they surround an empty region of sky.
+If all the holes are filled, the Sky region in between them will be taken as a detection, which is not desired.
+To avoid such cases, the integer given to this option must be smaller than the hole between such objects.
+However, we should caution that unless the ``hole'' is very large, the combined faint wings of the galaxies might actually be present in between them, so be very careful not to fill such holes.
+
+On the other hand, if you have a very large (and extended) galaxy, the diffuse wings of the galaxy may create very large holes over the detections.
+In such cases, a large enough value to this option will cause all such holes to be detected as part of the large galaxy, and thus help in detecting it down to extremely low surface brightness limits.
+Therefore, especially when large and extended objects are present in the image, it is recommended to give this option (very) large values.
+For one real-world example, see @ref{Detecting large extended targets}.
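+
+As a sketch for a field with a large extended target (both values below are purely illustrative and must be tuned for each dataset):
+
+@example
+$ astnoisechisel image.fits --detgrowquant=0.75 \
+                            --detgrowmaxholesize=10000
+@end example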
 
 @item --cleangrowndet
-After dilation, if the signal-to-noise ratio of a detection is less than
-the derived pseudo-detection S/N limit, that detection will be
-discarded. In an ideal/clean noise, a true detection's S/N should be larger
-than its constituent pseudo-detections because its area is larger and it
-also covers more signal. However, on a false detections (especially at
-lower @option{--snquant} values), the increase in size can cause a decrease
-in S/N below that threshold.
-
-This will improve purity and not change completeness (a true detection will
-not be discarded). Because a true detection has flux in its vicinity and
-dilation will catch more of that flux and increase the S/N. So on a true
-detection, the final S/N cannot be less than pseudo-detections.
-
-However, in many real images bad processing creates artifacts that cannot
-be accurately removed by the Sky subtraction. In such cases, this option
-will decrease the completeness (will artificially discard true
-detections). So this feature is not default and should to be explicitly
-called when you know the noise is clean.
+After dilation, if the signal-to-noise ratio of a detection is less than the derived pseudo-detection S/N limit, that detection will be discarded.
+In ideal/clean noise, a true detection's S/N should be larger than that of its constituent pseudo-detections, because its area is larger and it also covers more signal.
+However, on a false detection (especially at lower @option{--snquant} values), the increase in size can cause a decrease in S/N below that threshold.
+
+This will improve purity and not change completeness (a true detection will not be discarded): a true detection has flux in its vicinity, and dilation will catch more of that flux and increase the S/N.
+So on a true detection, the final S/N cannot be less than that of its pseudo-detections.
+
+However, in many real images bad processing creates artifacts that cannot be accurately removed by the Sky subtraction.
+In such cases, this option will decrease the completeness (it will artificially discard true detections).
+So this feature is not activated by default, and should only be explicitly called when you know the noise is clean.
 
 
 @item --checkdetection
-Every step of the detection process will be added as an extension to a file
-with the suffix @file{_det.fits}. Going through each would just be a repeat
-of the explanations above and also of those in Akhlaghi and Ichikawa
-(2015). The extension label should be sufficient to recognize which step
-you are observing. Viewing all the steps can be the best guide in choosing
-the best set of parameters. With this option, NoiseChisel will abort as
-soon as a snapshot of all the detection process is saved. This behavior can
-be disabled with @option{--continueaftercheck}.
+Every step of the detection process will be added as an extension to a file with the suffix @file{_det.fits}.
+Going through each would just be a repeat of the explanations above and also of those in Akhlaghi and Ichikawa (2015).
+The extension label should be sufficient to recognize which step you are observing.
+Viewing all the steps can be the best guide in choosing the best set of parameters.
+With this option, NoiseChisel will abort as soon as a snapshot of all the detection process is saved.
+This behavior can be disabled with @option{--continueaftercheck}.
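+
+For example, assuming a hypothetical input called @file{image.fits}, the check file can be created and then inspected as a multi-extension data cube in SAO DS9 with commands like these:
+
+@example
+$ astnoisechisel image.fits --checkdetection
+$ ds9 -mecube image_det.fits
+@end example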
 
 @item --checksky
-Check the derivation of the final sky and its standard deviation values on
-the mesh grid. With this option, NoiseChisel will abort as soon as the sky
-value is estimated over the image (on each tile). This behavior can be
-disabled with @option{--continueaftercheck}. By default the output will
-have the same pixel size as the input, but with the
-@option{--oneelempertile} option, only one pixel will be used for each tile
-(see @ref{Processing options}).
+Check the derivation of the final sky and its standard deviation values on the mesh grid.
+With this option, NoiseChisel will abort as soon as the sky value is estimated over the image (on each tile).
+This behavior can be disabled with @option{--continueaftercheck}.
+By default the output will have the same pixel size as the input, but with the @option{--oneelempertile} option, only one pixel will be used for each tile (see @ref{Processing options}).
 
 @end table
 
@@ -13945,136 +13292,82 @@ have the same pixel size as the input, but with the
 @node NoiseChisel output,  , Detection options, Invoking astnoisechisel
 @subsubsection NoiseChisel output
 
-NoiseChisel's output is a multi-extension FITS file. The main
-extension/dataset is a (binary) detection map. It has the same size as the
-input but with only two possible values for all pixels: 0 (for pixels
-identified as noise) and 1 (for those identified as signal/detections). The
-detection map is followed by a Sky and Sky standard deviation dataset
-(which are calculated from the binary image). By default (when
-@option{--rawoutput} isn't called), NoiseChisel will also subtract the Sky
-value from the input and save the sky-subtracted input as the first
-extension in the output with data. The zero-th extension (that contains no
-data), contains NoiseChisel's configuration as FITS keywords, see
-@ref{Output FITS files}.
+NoiseChisel's output is a multi-extension FITS file.
+The main extension/dataset is a (binary) detection map.
+It has the same size as the input, but with only two possible values for all pixels: 0 (for pixels identified as noise) and 1 (for those identified as signal/detections).
+The detection map is followed by a Sky and Sky standard deviation dataset (which are calculated from the binary image).
+By default (when @option{--rawoutput} isn't called), NoiseChisel will also subtract the Sky value from the input and save the sky-subtracted input as the first extension in the output with data.
+The zero-th extension (which contains no data) contains NoiseChisel's configuration as FITS keywords, see @ref{Output FITS files}.
+
+The name of the output file can be set by giving a value to @option{--output} (this is a common option between all programs and is therefore discussed in @ref{Input output options}).
+If @option{--output} isn't used, the input name will be suffixed with @file{_detected.fits} and used as output, see @ref{Automatic output}.
+If any of the options starting with @option{--check*} are given, NoiseChisel won't complete and will abort as soon as the respective check images are created.
+For more information on the different check images, see the description for the @option{--check*} options in @ref{Detection options} (this can be disabled with @option{--continueaftercheck}).
 
-The name of the output file can be set by giving a value to
-@option{--output} (this is a common option between all programs and is
-therefore discussed in @ref{Input output options}). If @option{--output}
-isn't used, the input name will be suffixed with @file{_detected.fits} and
-used as output, see @ref{Automatic output}. If any of the options starting
-with @option{--check*} are given, NoiseChisel won't complete and will abort
-as soon as the respective check images are created. For more information on
-the different check images, see the description for the @option{--check*}
-options in @ref{Detection options} (this can be disabled with
-@option{--continueaftercheck}).
-
-The last two extensions of the output are the Sky and its Standard
-deviation, see @ref{Sky value} for a complete explanation. They are
-calculated on the tile grid that you defined for NoiseChisel. By default
-these datasets will have the same size as the input, but with all the
-pixels in one tile given one value. To be more space-efficient (keep only
-one pixel per tile), you can use the @option{--oneelempertile} option, see
-@ref{Tessellation}.
+The last two extensions of the output are the Sky and its standard deviation, see @ref{Sky value} for a complete explanation.
+They are calculated on the tile grid that you defined for NoiseChisel.
+By default these datasets will have the same size as the input, but with all the pixels in one tile given one value.
+To be more space-efficient (keep only one pixel per tile), you can use the @option{--oneelempertile} option, see @ref{Tessellation}.
 
 @cindex GNOME
-To inspect any of NoiseChisel's output files, assuming you use SAO DS9, you
-can configure your Graphic User Interface (GUI) to open NoiseChisel's
-output as a multi-extension data cube. This will allow you to flip through
-the different extensions and visually inspect the results. This process has
-been described for the GNOME GUI (most common GUI in GNU/Linux operating
-systems) in @ref{Viewing multiextension FITS images}.
+To inspect any of NoiseChisel's output files, assuming you use SAO DS9, you can configure your Graphical User Interface (GUI) to open NoiseChisel's output as a multi-extension data cube.
+This will allow you to flip through the different extensions and visually inspect the results.
+This process has been described for the GNOME GUI (the most common GUI in GNU/Linux operating systems) in @ref{Viewing multiextension FITS images}.
 
 NoiseChisel's output configuration options are described in detail below.
 
 @table @option
 @item --continueaftercheck
-Continue NoiseChisel after any of the options starting with
-@option{--check} (see @ref{Detection options}. NoiseChisel involves many
-steps and as a result, there are many checks, allowing you to inspect the
-status of the processing. The results of each step affect the next steps of
-processing. Therefore, when you want to check the status of the processing
-at one step, the time spent to complete NoiseChisel is just
-wasted/distracting time.
-
-To encourage easier experimentation with the option values, when you use
-any of the NoiseChisel options that start with @option{--check},
-NoiseChisel will abort once its desired extensions have been written. With
-@option{--continueaftercheck} option, you can disable this behavior and ask
-NoiseChisel to continue with the rest of the processing, even after the
-requested check files are complete.
+Continue NoiseChisel after any of the options starting with @option{--check} (see @ref{Detection options}).
+NoiseChisel involves many steps and as a result, there are many checks, allowing you to inspect the status of the processing.
+The results of each step affect the next steps of processing.
+Therefore, when you want to check the status of the processing at one step, the time spent to complete NoiseChisel is just wasted/distracting time.
+
+To encourage easier experimentation with the option values, when you use any of the NoiseChisel options that start with @option{--check}, NoiseChisel will abort once its desired extensions have been written.
+With the @option{--continueaftercheck} option, you can disable this behavior and ask NoiseChisel to continue with the rest of the processing, even after the requested check files are complete.
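+
+For example, to save the Sky check file and still produce the final output in the same run (the file name is again hypothetical):
+
+@example
+$ astnoisechisel image.fits --checksky --continueaftercheck
+@end example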
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for
-example Sky and Sky standard deviation extensions of the output). This is
-only applicable when the tiled output has the same size as the input, in
-other words, when @option{--oneelempertile} isn't called.
+Don't set the input's blank pixels to blank in the tiled outputs (for example Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, in other words, when @option{--oneelempertile} isn't called.
 
-By default, blank values in the input (commonly on the edges which are
-outside the survey/field area) will be set to blank in the tiled outputs
-also. But in other scenarios this default behavior is not desired: for
-example if you have masked something in the input, but want the tiled
-output under that also.
+By default, blank values in the input (commonly on the edges which are outside the survey/field area) will also be set to blank in the tiled outputs.
+But in other scenarios this default behavior is not desired: for example, if you have masked something in the input but want the tiled output under that region too.
 
 @item -l
 @itemx --label
-Run a connected-components algorithm on the finally detected pixels to
-identify which pixels are connected to which. By default the main output is
-a binary dataset with only two values: 0 (for noise) and 1 (for
-signal/detections). See @ref{NoiseChisel output} for more.
-
-The purpose of NoiseChisel is to detect targets that are extended and
-diffuse, with outer parts that sink into the noise very gradually (galaxies
-and stars for example). Since NoiseChisel digs down to extremely low
-surface brightness values, many such targets will commonly be detected
-together as a single large body of connected pixels.
-
-To properly separate connected objects, sophisticated segmentation methods
-are commonly necessary on NoiseChisel's output. Gnuastro has the dedicated
-@ref{Segment} program for this job. Since input images are commonly large
-and can take a significant volume, the extra volume necessary to store the
-labels of the connected components in the detection map (which will be
-created with this @option{--label} option, in 32-bit signed integer type)
-can thus be a major waste of space. Since the default output is just a
-binary dataset, an 8-bit unsigned dataset is enough.
-
-The binary output will also encourage users to segment the result
-separately prior to doing higher-level analysis. As an alternative to
-@option{--label}, if you have the binary detection image, you can use the
-@code{connected-components} operator in Gnuastro's Arithmetic program to
-identify regions that are connected with each other. For example with this
-command (assuming NoiseChisel's output is called @file{nc.fits}):
+Run a connected-components algorithm on the final detected pixels to identify which pixels are connected to which.
+By default the main output is a binary dataset with only two values: 0 (for noise) and 1 (for signal/detections).
+See @ref{NoiseChisel output} for more.
+
+The purpose of NoiseChisel is to detect targets that are extended and diffuse, with outer parts that sink into the noise very gradually (galaxies and stars for example).
+Since NoiseChisel digs down to extremely low surface brightness values, many such targets will commonly be detected together as a single large body of connected pixels.
+
+To properly separate connected objects, sophisticated segmentation methods are commonly necessary on NoiseChisel's output.
+Gnuastro has the dedicated @ref{Segment} program for this job.
+Since input images are commonly large and can take a significant volume, the extra volume necessary to store the labels of the connected components in the detection map (which will be created with this @option{--label} option, in 32-bit signed integer type) can be a major waste of space.
+Since the default output is just a binary dataset, an 8-bit unsigned dataset is enough.
+
+The binary output will also encourage users to segment the result separately prior to doing higher-level analysis.
+As an alternative to @option{--label}, if you have the binary detection image, you can use the @code{connected-components} operator in Gnuastro's Arithmetic program to identify regions that are connected with each other.
+For example with this command (assuming NoiseChisel's output is called @file{nc.fits}):
 
 @example
 $ astarithmetic nc.fits 2 connected-components -hDETECTIONS
 @end example
 
 @item --rawoutput
-Don't include the Sky-subtracted input image as the first extension of the
-output. By default, the Sky-subtracted input is put in the first extension
-of the output. The next extensions are NoiseChisel's main outputs described
-above.
+Don't include the Sky-subtracted input image as the first extension of the output.
+By default, the Sky-subtracted input is put in the first extension of the output.
+The next extensions are NoiseChisel's main outputs described above.
+
+The extra Sky-subtracted input can be convenient in checking NoiseChisel's output and comparing the detection map with the input: you can visually see if everything you expected is detected (reasonable completeness) and that you don't have too many false detections (reasonable purity).
+This visual inspection is simplified if you use SAO DS9 to view NoiseChisel's output as a multi-extension data-cube, see @ref{Viewing multiextension FITS images}.
 
-The extra Sky-subtracted input can be convenient in checking NoiseChisel's
-output and comparing the detection map with the input: visually see if
-everything you expected is detected (reasonable completeness) and that you
-don't have too many false detections (reasonable purity). This visual
-inspection is simplified if you use SAO DS9 to view NoiseChisel's output as
-a multi-extension data-cube, see @ref{Viewing multiextension FITS images}.
-
-When you are satisfied with your NoiseChisel configuration (therefore you
-don't need to check on every run), or you want to archive/transfer the
-outputs, or the datasets become large, or you are running NoiseChisel as
-part of a pipeline, this Sky-subtracted input image can be a significant
-burden (take up a large volume). The fact that the input is also noisy,
-makes it hard to compress it efficiently.
-
-In such cases, this @option{--rawoutput} can be used to avoid the extra
-sky-subtracted input in the output. It is always possible to easily produce
-the Sky-subtracted dataset from the input (assuming it is in extension
-@code{1} of @file{in.fits}) and the @code{SKY} extension of NoiseChisel's
-output (let's call it @file{nc.fits}) with a command like below (assuming
-NoiseChisel wasn't run with @option{--oneelempertile}, see
-@ref{Tessellation}):
+When you are satisfied with your NoiseChisel configuration (so you don't need to check on every run), or you want to archive/transfer the outputs, or the datasets become large, or you are running NoiseChisel as part of a pipeline, this Sky-subtracted input image can be a significant burden (take up a large volume).
+The fact that the input is also noisy makes it hard to compress efficiently.
+
+In such cases, this @option{--rawoutput} option can be used to avoid the extra sky-subtracted input in the output.
+It is always possible to easily produce the Sky-subtracted dataset from the input (assuming it is in extension @code{1} of @file{in.fits}) and the @code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) with a command like below (assuming NoiseChisel wasn't run with @option{--oneelempertile}, see @ref{Tessellation}):
 
 @example
 $ astarithmetic in.fits nc.fits - -h1 -hSKY
@@ -14084,14 +13377,10 @@ $ astarithmetic in.fits nc.fits - -h1 -hSKY
 @cartouche
 @noindent
 @cindex Compression
-@strong{Save space:} with the @option{--rawoutput} and
-@option{--oneelempertile}, NoiseChisel's output will only be one binary
-detection map and two much smaller arrays with one value per tile. Since
-none of these have noise they can be compressed very effectively (without
-any loss of data) with exceptionally high compression ratios. This makes it
-easy to archive, or transfer, NoiseChisel's output even on huge
-datasets. To compress it with the most efficient method (take up less
-volume), run the following command:
+@strong{Save space:} with @option{--rawoutput} and @option{--oneelempertile}, NoiseChisel's output will only be one binary detection map and two much smaller arrays with one value per tile.
+Since none of these have noise, they can be compressed very effectively (without any loss of data) with exceptionally high compression ratios.
+This makes it easy to archive, or transfer, NoiseChisel's output even on huge datasets.
+To compress it with the most efficient method (take up less volume), run the following command:
 
 @cindex GNU Gzip
 @example
@@ -14099,10 +13388,8 @@ $ gzip --best noisechisel_output.fits
 @end example
 
 @noindent
-The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's
-programs directly, or viewed in viewers like SAO DS9, without having to
-decompress it separately (they will just take a little longer, because they
-have to internally decompress it before starting).
+The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's programs directly, or viewed in viewers like SAO DS9, without having to decompress it separately (they will just take a little longer, because they have to internally decompress it before starting).
+See @ref{NoiseChisel optimization for storage} for an example on a real dataset.
 @end cartouche
 
 
@@ -14117,27 +13404,16 @@ have to internally decompress it before starting).
 @node Segment, MakeCatalog, NoiseChisel, Data analysis
 @section Segment
 
-Once signal is separated from noise (for example with @ref{NoiseChisel}),
-you have a binary dataset: each pixel is either signal (1) or noise
-(0). Signal (for example every galaxy in your image) has been ``detected'',
-but all detections have a label of 1. Therefore while we know which pixels
-contain signal, we still can't find out how many galaxies they contain or
-which detected pixels correspond to which galaxy. At the lowest (most
-generic) level, detection is a kind of segmentation (segmenting the
-whole dataset into signal and noise, see @ref{NoiseChisel}). Here, we'll
-define segmentation only on signal: to separate and find sub-structure
-within the detections.
+Once signal is separated from noise (for example with @ref{NoiseChisel}), you have a binary dataset: each pixel is either signal (1) or noise (0).
+Signal (for example every galaxy in your image) has been ``detected'', but all detections have a label of 1.
+Therefore, while we know which pixels contain signal, we still can't tell how many galaxies they contain, or which detected pixels correspond to which galaxy.
+At the lowest (most generic) level, detection is a kind of segmentation (segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
+Here, we'll define segmentation only on signal: to separate and find sub-structure within the detections.
 
 @cindex Connected component labeling
-If the targets are clearly separated, or their detected regions aren't
-touching, a simple connected
-components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
-algorithm (very basic segmentation) is enough to separate the regions that
-are touching/connected. This is such a basic and simple form of
-segmentation that Gnuastro's Arithmetic program has an operator for it: see
-@code{connected-components} in @ref{Arithmetic operators}. Assuming the
-binary dataset is called @file{binary.fits}, you can use it with a command
-like this:
+If the targets are clearly separated, or their detected regions aren't touching, a simple connected components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}} algorithm (very basic segmentation) is enough to separate the regions that are touching/connected.
+This is such a basic and simple form of segmentation that Gnuastro's Arithmetic program has an operator for it: see @code{connected-components} in @ref{Arithmetic operators}.
+Assuming the binary dataset is called @file{binary.fits}, you can use it with a command like this:
 
 @example
 $ astarithmetic binary.fits 2 connected-components
@@ -14152,62 +13428,38 @@ like below:
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-However, in most astronomical situations our targets are not nicely
-separated or have a sharp boundary/edge (for a threshold to suffice): they
-touch (for example merging galaxies), or are simply in the same
-line-of-sight (which is much more common). This causes their images to
-overlap.
-
-In particular, when you do your detection with NoiseChisel, you will detect
-signal to very low surface brightness limits: deep into the faint wings of
-galaxies or bright stars (which can extend very far and irregularly from
-their center). Therefore, it often happens that several galaxies are
-detected as one large detection. Since they are touching, a simple
-connected components algorithm will not suffice. It is therefore necessary
-to do a more sophisticated segmentation and break up the detected pixels
-(even those that are touching) into multiple target objects as accurately
-as possible.
-
-Segment will use a detection map and its corresponding dataset to find
-sub-structure over the detected areas and use them for its
-segmentation. Until Gnuastro version 0.6 (released in 2018), Segment was
-part of @ref{NoiseChisel}. Therefore, similar to NoiseChisel, the best
-place to start reading about Segment and understanding what it does (with
-many illustrative figures) is Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+However, in most astronomical situations our targets are not nicely separated, or don't have a sharp boundary/edge (for a threshold to suffice): they touch (for example, merging galaxies), or are simply in the same line-of-sight (which is much more common).
+This causes their images to overlap.
+
+In particular, when you do your detection with NoiseChisel, you will detect signal to very low surface brightness limits: deep into the faint wings of galaxies or bright stars (which can extend very far and irregularly from their center).
+Therefore, it often happens that several galaxies are detected as one large detection.
+Since they are touching, a simple connected components algorithm will not suffice.
+It is therefore necessary to do a more sophisticated segmentation and break up the detected pixels (even those that are touching) into multiple target objects as accurately as possible.
+
+Segment will use a detection map and its corresponding dataset to find sub-structure over the detected areas and use them for its segmentation.
+Until Gnuastro version 0.6 (released in 2018), Segment was part of @ref{NoiseChisel}.
+Therefore, similar to NoiseChisel, the best place to start reading about Segment and understanding what it does (with many illustrative figures) is Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
 @cindex river
 @cindex Watershed algorithm
-As a summary, Segment first finds true @emph{clump}s over the
-detections. Clumps are associated with local maxima/minima@footnote{By
-default the maximum is used as the first clump pixel, to define clumps
-based on local minima, use the @option{--minima} option.} and extend over
-the neighboring pixels until they reach a local minimum/maximum
-(@emph{river}/@emph{watershed}). By default, Segment will use the
-distribution of clump signal-to-noise ratios over the undetected regions as
-reference to find ``true'' clumps over the detections. Using the undetected
-regions can be disabled by directly giving a signal-to-noise ratio to
-@option{--clumpsnthresh}.
-
-The true clumps are then grown to a certain threshold over the
-detections. Based on the strength of the connections (rivers/watersheds)
-between the grown clumps, they are considered parts of one @emph{object} or
-as separate @emph{object}s. See Section 3.2 of Akhlaghi and Ichikawa [2015]
-(link above) for more. Segment's main output are thus two labeled datasets:
-1) clumps, and 2) objects. See @ref{Segment output} for more.
-
-To start learning about Segment, especially in relation to detection
-(@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended
-references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
-
-Those papers cannot be updated any more but the software will evolve. For
-example Segment became a separate program (from NoiseChisel) in 2018 (after
-those papers were published). Therefore this book is the definitive
-reference. To help in the transition from those papers to the software you
-are using, see @ref{Segment changes after publication}. Finally, in
-@ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs and
-configuration options.
+As a summary, Segment first finds true @emph{clump}s over the detections.
+Clumps are associated with local maxima/minima@footnote{By default the maximum is used as the first clump pixel; to define clumps based on local minima, use the @option{--minima} option.} and extend over the neighboring pixels until they reach a local minimum/maximum (@emph{river}/@emph{watershed}).
+By default, Segment will use the distribution of clump signal-to-noise ratios over the undetected regions as reference to find ``true'' clumps over the detections.
+Using the undetected regions can be disabled by directly giving a signal-to-noise ratio to @option{--clumpsnthresh}.
+
+The true clumps are then grown to a certain threshold over the detections.
+Based on the strength of the connections (rivers/watersheds) between the grown clumps, they are considered parts of one @emph{object} or as separate @emph{object}s.
+See Section 3.2 of Akhlaghi and Ichikawa [2015] (link above) for more.
+Segment's main outputs are thus two labeled datasets: 1) clumps, and 2) objects.
+See @ref{Segment output} for more.
+
+To start learning about Segment, especially in relation to detection (@ref{NoiseChisel}) and measurement (@ref{MakeCatalog}), the recommended references are @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and @url{https://arxiv.org/abs/1611.06387, Akhlaghi [2016]}.
+
+Those papers cannot be updated any more, but the software will evolve.
+For example, Segment became a separate program (from NoiseChisel) in 2018 (after those papers were published).
+Therefore this book is the definitive reference.
+To help in the transition from those papers to the software you are using, see @ref{Segment changes after publication}.
+Finally, in @ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs and configuration options.
 
 
 @menu
@@ -14218,105 +13470,67 @@ configuration options.
 @node Segment changes after publication, Invoking astsegment, Segment, Segment
 @subsection Segment changes after publication
 
-Segment's main algorithm and working strategy were initially defined and
-introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664,
-Akhlaghi and Ichikawa [2015]}. Prior to Gnuastro version 0.6 (released
-2018), one program (NoiseChisel) was in charge of detection @emph{and}
-segmentation. To increase creativity and modularity, NoiseChisel's
-segmentation features were spun-off into a separate program (Segment). It
-is strongly recommended to read that paper for a good understanding of what
-Segment does, how it relates to detection, and how each parameter
-influences the output. That paper has a large number of figures showing
-every step on multiple mock and real examples.
-
-However, the paper cannot be updated anymore, but Segment has evolved (and
-will continue to do so): better algorithms or steps have been (and will be)
-found. This book is thus the final and definitive guide to Segment. The aim
-of this section is to make the transition from the paper to your installed
-version, as smooth as possible through the list below. For a more detailed
-list of changes in previous Gnuastro releases/versions, please follow the
-@file{NEWS} file@footnote{The @file{NEWS} file is present in the released
-Gnuastro tarball, see @ref{Release tarball}.}.
+Segment's main algorithm and working strategy were initially defined and introduced in Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+Prior to Gnuastro version 0.6 (released 2018), one program (NoiseChisel) was in charge of detection @emph{and} segmentation.
+To increase creativity and modularity, NoiseChisel's segmentation features were spun-off into a separate program (Segment).
+It is strongly recommended to read that paper for a good understanding of what Segment does, how it relates to detection, and how each parameter influences the output.
+That paper has a large number of figures showing every step on multiple mock and real examples.
+
+However, the paper cannot be updated anymore, while Segment has evolved (and will continue to do so): better algorithms or steps have been (and will be) found.
+This book is thus the final and definitive guide to Segment.
+The aim of this section is to make the transition from the paper to your installed version as smooth as possible, through the list below.
+For a more detailed list of changes in previous Gnuastro releases/versions, please follow the @file{NEWS} file@footnote{The @file{NEWS} file is present in the released Gnuastro tarball, see @ref{Release tarball}.}.
 
 @itemize
 
 @item
-Since the spin-off from NoiseChisel, the default kernel to smooth the input
-for convolution has a FWHM of 1.5 pixels (still a Gaussian). This is
-slightly less than NoiseChisel's default kernel (which has a FWHM of 2
-pixels). This enables the better detection of sharp clumps: as the kernel
-gets wider, the lower signal-to-noise (but sharp/small) clumps will be
-washed away into the noise. You can use MakeProfiles to build your own
-kernel if this is too sharp/wide for your purpose. For more, see the
-@option{--kernel} option in @ref{Segment input}.
-
-The ability to use a different convolution kernel for detection and
-segmentation is one example of how separating detection from segmentation
-into separate programs can increase creativity. In detection, you want to
-detect the diffuse and extended emission, but in segmentation, you want to
-detect sharp peaks.
+Since the spin-off from NoiseChisel, the default kernel to smooth the input 
for convolution has a FWHM of 1.5 pixels (still a Gaussian).
+This is slightly less than NoiseChisel's default kernel (which has a FWHM of 2 
pixels).
+This enables better detection of sharp clumps: as the kernel gets wider, the 
lower signal-to-noise (but sharp/small) clumps will be washed away into the 
noise.
+You can use MakeProfiles to build your own kernel if this is too sharp/wide 
for your purpose.
+For more, see the @option{--kernel} option in @ref{Segment input}.
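+
+If the default is not suitable, a custom kernel can be built with MakeProfiles 
and fed to Segment; the sketch below assumes a Gaussian with a FWHM of 1.5 
pixels, truncated at 5 times the FWHM, and that MakeProfiles writes its default 
@file{kernel.fits} (otherwise, set @option{--output}):
+
+@example
+$ astmkprof --kernel=gaussian,1.5,5 --oversample=1
+$ astsegment image.fits --kernel=kernel.fits
+@end example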
+
+The ability to use a different convolution kernel for detection and 
segmentation is one example of how separating detection from segmentation into 
separate programs can increase creativity.
+In detection, you want to detect the diffuse and extended emission, but in 
segmentation, you want to detect sharp peaks.
 
 @item
-The criteria to select true from false clumps is the peak significance. It
-is defined to be the difference between the clump's peak value
-(@mymath{C_c}) and the highest valued river pixel around that clump
-(@mymath{R_c}). Both are calculated on the convolved image (signified by
-the @mymath{c} subscript). To avoid absolute values (differing from dataset
-to dataset), @mymath{C_c-R_c} is then divided by the Sky standard deviation
-under the river pixel used (@mymath{\sigma_r}) as shown below:
+The criterion to select true from false clumps is the peak significance.
+It is defined to be the difference between the clump's peak value 
(@mymath{C_c}) and the highest valued river pixel around that clump 
(@mymath{R_c}).
+Both are calculated on the convolved image (signified by the @mymath{c} 
subscript).
+To avoid absolute values (differing from dataset to dataset), @mymath{C_c-R_c} 
is then divided by the Sky standard deviation under the river pixel used 
(@mymath{\sigma_r}) as shown below:
 
 @dispmath{C_c-R_c\over \sigma_r}
 
-When @option{--minima} is given, the nominator becomes
-@mymath{R_c-C_c}.
-
-The input Sky standard deviation dataset (@option{--std}) is assumed to be
-for the unconvolved image. Therefore a constant factor (related to the
-convolution kernel) is necessary to convert this into an absolute peak
-significance@footnote{To get an estimate of the standard deviation
-correction factor between the input and convolved images, you can take the
-following steps: 1) Mask (set to NaN) all detections on the convolved image
-with the @code{where} operator or @ref{Arithmetic}. 2) Calculate the
-standard deviation of the undetected (non-masked) pixels of the convolved
-image with the @option{--sky} option of @ref{Statistics} (which also
-calculates the Sky standard deviation). Just make sure the tessellation
-settings of Statistics and NoiseChisel are the same (you can check with the
-@option{-P} option). 3) Divide the two standard deviation datasets to get
-the correction factor.}. As far as Segment is concerned, the absolute value
-of this correction factor is irrelevant: because it uses the ambient noise
-(undetected regions) to find the numerical threshold of this fraction and
-applies that over the detected regions.
-
-A distribution's extremum (maximum or minimum) values, used in the new
-criteria, are strongly affected by scatter. On the other hand, the
-convolved image has much less scatter@footnote{For more on the effect of
-convolution on a distribution, see Section 3.1.1 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}.}. Therefore @mymath{C_c-R_c} is a more reliable (with less
-scatter) measure to identify signal than @mymath{C-R} (on the unconvolved
-image).
-
-Initially, the total clump signal-to-noise ratio of each clump was used,
-see Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and
-Ichikawa [2015]}. Therefore its completeness decreased dramatically when
-clumps were present on gradients. In tests, this measure proved to be more
-successful in detecting clumps on gradients and on flatter regions
-simultaneously.
+When @option{--minima} is given, the numerator becomes @mymath{R_c-C_c}.
+
+The input Sky standard deviation dataset (@option{--std}) is assumed to be for 
the unconvolved image.
+Therefore a constant factor (related to the convolution kernel) is necessary 
to convert this into an absolute peak significance@footnote{To get an estimate 
of the standard deviation correction factor between the input and convolved 
images, you can take the following steps:
+1) Mask (set to NaN) all detections on the convolved image with the 
@code{where} operator or @ref{Arithmetic}.
+2) Calculate the standard deviation of the undetected (non-masked) pixels of 
the convolved image with the @option{--sky} option of @ref{Statistics} (which 
also calculates the Sky standard deviation).
+Just make sure the tessellation settings of Statistics and NoiseChisel are the 
same (you can check with the @option{-P} option).
+3) Divide the two standard deviation datasets to get the correction factor.}.
+As far as Segment is concerned, the absolute value of this correction factor 
is irrelevant: it uses the ambient noise (undetected regions) to find the 
numerical threshold of this fraction and applies that threshold over the 
detected regions.
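+
+Though Segment itself doesn't need this correction factor, if you do want to 
estimate it (for example to interpret the peak significance in absolute terms), 
the footnote's three steps might look like the sketch below.
+All the file names, and the @code{DETECTIONS}, @code{STD} and @code{SKY_STD} 
extension names, are only assumptions for illustration; check your own files 
with @command{astfits}.
+
+@example
+## 1) Mask (set to NaN) the detected pixels of the convolved image:
+$ astarithmetic conv.fits nc.fits 0 gt nan where \
+                -h1 -hDETECTIONS --output=conv-masked.fits
+
+## 2) Sky standard deviation of the masked, convolved image:
+$ aststatistics conv-masked.fits --sky --output=conv-sky.fits
+
+## 3) Divide the two standard deviation datasets:
+$ astarithmetic conv-sky.fits nc.fits / -hSTD -hSKY_STD \
+                --output=factor.fits
+@end example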
+
+A distribution's extremum (maximum or minimum) values, used in the new 
criteria, are strongly affected by scatter.
+On the other hand, the convolved image has much less scatter@footnote{For more 
on the effect of convolution on a distribution, see Section 3.1.1 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+Therefore @mymath{C_c-R_c} is a more reliable (with less scatter) measure to 
identify signal than @mymath{C-R} (on the unconvolved image).
+
+Initially, the total signal-to-noise ratio of each clump was used, see Section 
3.2.1 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+However, that measure's completeness decreased dramatically when clumps were 
present on gradients.
+In tests, the new (peak significance) measure proved more successful in 
detecting clumps on gradients and on flatter regions simultaneously.
 
 @item
-With the new @option{--minima} option, it is now possible to detect inverse
-clumps (for example absorption features). In such cases, the clump should
-be built from its smallest value.
+With the new @option{--minima} option, it is now possible to detect inverse 
clumps (for example absorption features).
+In such cases, the clump should be built from its smallest value.
 @end itemize
 
 
 @node Invoking astsegment,  , Segment changes after publication, Segment
 @subsection Invoking Segment
 
-Segment will identify substructure within the detected regions of an input
-image. Segment's output labels can be directly used for measurements (for
-example with @ref{MakeCatalog}). The executable name is @file{astsegment}
-with the following general template
+Segment will identify substructure within the detected regions of an input 
image.
+Segment's output labels can be directly used for measurements (for example 
with @ref{MakeCatalog}).
+The executable name is @file{astsegment} with the following general template
 
 @example
 $ astsegment [OPTION ...] InputImage.fits
@@ -14345,30 +13559,18 @@ $ astsegment in.fits --std=0.01 --detection=all 
--clumpsnthresh=10
 
 @cindex Gaussian
 @noindent
-If Segment is to do processing (for example you don't want to get help, or
-see the values of each option), at least one input dataset is necessary
-along with detection and error information, either as separate datasets
-(per-pixel) or fixed values, see @ref{Segment input}. Segment shares a
-large set of common operations with other Gnuastro programs, mainly
-regarding input/output, general processing steps, and general operating
-modes. To help in a unified experience between all of Gnuastro's programs,
-these common operations have the same names and defined in @ref{Common
-options}.
+If Segment is to do processing (for example, you don't just want to get help 
or see the values of each option), at least one input dataset is necessary 
along with detection and error information, either as separate datasets 
(per-pixel) or fixed values, see @ref{Segment input}.
+Segment shares a large set of common operations with other Gnuastro programs, 
mainly regarding input/output, general processing steps, and general operating 
modes.
+To help in a unified experience between all of Gnuastro's programs, these 
common operations have the same names and are defined in @ref{Common options}.
+
+As in all Gnuastro programs, options can also be given to Segment in 
configuration files.
+For a thorough description of Gnuastro's configuration file parsing, please 
see @ref{Configuration files}.
+All of Segment's options with a short description are also always available on 
the command-line with the @option{--help} option, see @ref{Getting help}.
+To inspect the option values without actually running Segment, append your 
command with @option{--printparams} (or @option{-P}).
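+
+For example, a hypothetical configuration file could fix some of the options 
discussed below (the option names are real, but the values here are only for 
illustration):
+
+@example
+# Hypothetical Segment configuration (option name, white space, value):
+std        0.01
+snquant    0.95
+gthresh    0.5
+@end example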
 
-As in all Gnuastro programs, options can also be given to Segment in
-configuration files. For a thorough description of Gnuastro's configuration
-file parsing, please see @ref{Configuration files}. All of Segment's
-options with a short description are also always available on the
-command-line with the @option{--help} option, see @ref{Getting help}. To
-inspect the option values without actually running Segment, append your
-command with @option{--printparams} (or @option{-P}).
-
-To help in easy navigation between Segment's options, they are separately
-discussed in the three sub-sections below: @ref{Segment input} discusses
-how you can customize the inputs to Segment. @ref{Segmentation options} is
-devoted to options specific to the high-level segmentation
-process. Finally, in @ref{Segment output}, we'll discuss options that
-affect Segment's output.
+To help in easy navigation between Segment's options, they are separately 
discussed in the three sub-sections below: @ref{Segment input} discusses how 
you can customize the inputs to Segment.
+@ref{Segmentation options} is devoted to options specific to the high-level 
segmentation process.
+Finally, in @ref{Segment output}, we'll discuss options that affect Segment's 
output.
 
 @menu
 * Segment input::               Input files and options.
@@ -14379,207 +13581,141 @@ affect Segment's output.
 @node Segment input, Segmentation options, Invoking astsegment, Invoking 
astsegment
 @subsubsection Segment input
 
-Besides the input dataset (for example astronomical image), Segment also
-needs to know the Sky standard deviation and the regions of the dataset
-that it should segment. The values dataset is assumed to be Sky subtracted
-by default. If it isn't, you can ask Segment to subtract the Sky internally
-by calling @option{--sky}. For the rest of this discussion, we'll assume it
-is already sky subtracted.
-
-The Sky and its standard deviation can be a single value (to be used for
-the whole dataset) or a separate dataset (for a separate value per
-pixel). If a dataset is used for the Sky and its standard deviation, they
-must either be the size of the input image, or have a single value per tile
-(generated with @option{--oneelempertile}, see @ref{Processing options} and
-@ref{Tessellation}).
-
-The detected regions/pixels can be specified as a detection map (for
-example see @ref{NoiseChisel output}). If @option{--detection=all}, Segment
-won't read any detection map and assume the whole input is a single
-detection. For example when the dataset is fully covered by a large nearby
-galaxy/globular cluster.
-
-When dataset are to be used for any of the inputs, Segment will assume they
-are multiple extensions of a single file by default (when @option{--std} or
-@option{--detection} aren't called). For example NoiseChisel's default
-output @ref{NoiseChisel output}. When the Sky-subtracted values are in one
-file, and the detection and Sky standard deviation are in another, you just
-need to use @option{--detection}: in the absence of @option{--std}, Segment
-will look for both the detection labels and Sky standard deviation in the
-file given to @option{--detection}. Ultimately, if all three are in
-separate files, you need to call both @option{--detection} and
-@option{--std}.
-
-The extensions of the three mandatory inputs can be specified with
-@option{--hdu}, @option{--dhdu}, and @option{--stdhdu}. For a full
-discussion on what to give to these options, see the description of
-@option{--hdu} in @ref{Input output options}. To see their default values
-(along with all the other options), run Segment with the
-@option{--printparams} (or @option{-P}) option. Just recall that in the
-absence of @option{--detection} and @option{--std}, all three are assumed
-to be in the same file. If you only want to see Segment's default values
-for HDUs on your system, run this command:
+Besides the input dataset (for example, an astronomical image), Segment also 
needs to know the Sky standard deviation and the regions of the dataset that it 
should segment.
+The values dataset is assumed to be Sky subtracted by default.
+If it isn't, you can ask Segment to subtract the Sky internally by calling 
@option{--sky}.
+For the rest of this discussion, we'll assume it is already sky subtracted.
+
+The Sky and its standard deviation can be a single value (to be used for the 
whole dataset) or a separate dataset (for a separate value per pixel).
+If a dataset is used for the Sky and its standard deviation, they must either 
be the size of the input image, or have a single value per tile (generated with 
@option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).
+
+The detected regions/pixels can be specified as a detection map (for example 
see @ref{NoiseChisel output}).
+If @option{--detection=all}, Segment won't read any detection map and will 
assume the whole input is a single detection, for example when the dataset is 
fully covered by a large nearby galaxy/globular cluster.
+
+When datasets are to be used for any of the inputs, Segment will assume they 
are multiple extensions of a single file by default (when @option{--std} or 
@option{--detection} aren't called), for example NoiseChisel's default output 
(see @ref{NoiseChisel output}).
+When the Sky-subtracted values are in one file, and the detection and Sky 
standard deviation are in another, you just need to use @option{--detection}: 
in the absence of @option{--std}, Segment will look for both the detection 
labels and Sky standard deviation in the file given to @option{--detection}.
+Ultimately, if all three are in separate files, you need to call both 
@option{--detection} and @option{--std}.
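+
+For example, the three scenarios above might be run as below (all the file 
names are hypothetical):
+
+@example
+## Values, detection map and Sky STD in one file (e.g., NoiseChisel's
+## default output):
+$ astsegment nc.fits
+
+## Detection map and Sky STD in a second file:
+$ astsegment values.fits --detection=nc.fits
+
+## All three in separate files:
+$ astsegment values.fits --detection=det.fits --std=std.fits
+@end example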
+
+The extensions of the three mandatory inputs can be specified with 
@option{--hdu}, @option{--dhdu}, and @option{--stdhdu}.
+For a full discussion on what to give to these options, see the description of 
@option{--hdu} in @ref{Input output options}.
+To see their default values (along with all the other options), run Segment 
with the @option{--printparams} (or @option{-P}) option.
+Just recall that in the absence of @option{--detection} and @option{--std}, 
all three are assumed to be in the same file.
+If you only want to see Segment's default values for HDUs on your system, run 
this command:
 
 @example
 $ astsegment -P | grep hdu
 @end example
 
-By default Segment will convolve the input with a kernel to improve the
-signal-to-noise ratio of true peaks. If you already have the convolved
-input dataset, you can pass it directly to Segment for faster processing
-(using the @option{--convolved} and @option{--chdu} options). Just don't
-forget that the convolved image must also be Sky-subtracted before calling
-Segment. If a value/file is given to @option{--sky}, the convolved values
-will also be Sky subtracted internally. Alternatively, if you prefer to
-give a kernel (with @option{--kernel} and @option{--khdu}), Segment can do
-the convolution internally. To disable convolution, use
-@option{--kernel=none}.
+By default Segment will convolve the input with a kernel to improve the 
signal-to-noise ratio of true peaks.
+If you already have the convolved input dataset, you can pass it directly to 
Segment for faster processing (using the @option{--convolved} and 
@option{--chdu} options).
+Just don't forget that the convolved image must also be Sky-subtracted before 
calling Segment.
+If a value/file is given to @option{--sky}, the convolved values will also be 
Sky subtracted internally.
+Alternatively, if you prefer to give a kernel (with @option{--kernel} and 
@option{--khdu}), Segment can do the convolution internally.
+To disable convolution, use @option{--kernel=none}.
 
 @table @option
 
 @item --sky=STR/FLT
-The Sky value(s) to subtract from the input. This option can either be
-given a constant number or a file name containing a dataset (multiple
-values, per pixel or per tile). By default, Segment will assume the input
-dataset is Sky subtracted, so this option is not mandatory.
+The Sky value(s) to subtract from the input.
+This option can either be given a constant number or a file name containing a 
dataset (multiple values, per pixel or per tile).
+By default, Segment will assume the input dataset is Sky subtracted, so this 
option is not mandatory.
 
-If the value can't be read as a number, it is assumed to be a file
-name. When the value is a file, the extension can be specified with
-@option{--skyhdu}. When its not a single number, the given dataset must
-either have the same size as the output or the same size as the
-tessellation (so there is one pixel per tile, see @ref{Tessellation}).
+If the value can't be read as a number, it is assumed to be a file name.
+When the value is a file, the extension can be specified with 
@option{--skyhdu}.
+When it's not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one pixel 
per tile, see @ref{Tessellation}).
 
-When this option is given, its value(s) will be subtracted from the input
-and the (optional) convolved dataset (given to @option{--convolved}) prior
-to starting the segmentation process.
+When this option is given, its value(s) will be subtracted from the input and 
the (optional) convolved dataset (given to @option{--convolved}) prior to 
starting the segmentation process.
 
 @item --skyhdu=STR/INT
-The HDU/extension containing the Sky values. This is mandatory when the
-value given to @option{--sky} is not a number. Please see the description
-of @option{--hdu} in @ref{Input output options} for the different ways you
-can identify a special extension.
+The HDU/extension containing the Sky values.
+This is mandatory when the value given to @option{--sky} is not a number.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item --std=STR/FLT
-The Sky standard deviation value(s) corresponding to the input. The value
-can either be a constant number or a file name containing a dataset
-(multiple values, per pixel or per tile). The Sky standard deviation is
-mandatory for Segment to operate.
-
-If the value can't be read as a number, it is assumed to be a file
-name. When the value is a file, the extension can be specified with
-@option{--skyhdu}. When its not a single number, the given dataset must
-either have the same size as the output or the same size as the
-tessellation (so there is one pixel per tile, see @ref{Tessellation}).
-
-When this option is not called, Segment will assume the standard deviation
-is a dataset and in a HDU/extension (@option{--stdhdu}) of another one of
-the input file(s). If a file is given to @option{--detection}, it will
-assume that file contains the standard deviation dataset, otherwise, it
-will look into input filename (the main argument, without any option).
+The Sky standard deviation value(s) corresponding to the input.
+The value can either be a constant number or a file name containing a dataset 
(multiple values, per pixel or per tile).
+The Sky standard deviation is mandatory for Segment to operate.
+
+If the value can't be read as a number, it is assumed to be a file name.
+When the value is a file, the extension can be specified with 
@option{--skyhdu}.
+When it's not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one pixel 
per tile, see @ref{Tessellation}).
+
+When this option is not called, Segment will assume the standard deviation is 
a dataset, in an HDU/extension (@option{--stdhdu}) of one of the other input 
file(s).
+If a file is given to @option{--detection}, it will assume that file contains 
the standard deviation dataset; otherwise, it will look into the input file 
name (the main argument, without any option).
 
 @item --stdhdu=INT/STR
-The HDU/extension containing the Sky standard deviation values, when the
-value given to @option{--std} is a file name. Please see the description of
-@option{--hdu} in @ref{Input output options} for the different ways you can
-identify a special extension.
+The HDU/extension containing the Sky standard deviation values, when the value 
given to @option{--std} is a file name.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item --variance
-The input Sky standard deviation value/dataset is actually variance. When
-this option is called, the square root of input Sky standard deviation (see
-@option{--std}) is used internally, not its raw value(s).
+The input Sky standard deviation value/dataset is actually variance.
+When this option is called, the square root of input Sky standard deviation 
(see @option{--std}) is used internally, not its raw value(s).
 
 @item -d STR
 @itemx --detection=STR
-Detection map to use for segmentation. If given a value of @option{all},
-Segment will assume the whole dataset must be segmented, see below. If a
-detection map is given, the extension can be specified with
-@option{--dhdu}. If not given, Segment will assume the desired
-HDU/extension is in the main input argument (input file specified with no
-option).
-
-The final segmentation (clumps or objects) will only be over the non-zero
-pixels of this detection map. The dataset must have the same size as the
-input image. Only datasets with an integer type are acceptable for the
-labeled image, see @ref{Numeric data types}. If your detection map only has
-integer values, but it is stored in a floating point container, you can use
-Gnuastro's Arithmetic program (see @ref{Arithmetic}) to convert it to an
-integer container, like the example below:
+Detection map to use for segmentation.
+If given a value of @option{all}, Segment will assume the whole dataset must 
be segmented, see below.
+If a detection map is given, the extension can be specified with 
@option{--dhdu}.
+If not given, Segment will assume the desired HDU/extension is in the main 
input argument (input file specified with no option).
+
+The final segmentation (clumps or objects) will only be over the non-zero 
pixels of this detection map.
+The dataset must have the same size as the input image.
+Only datasets with an integer type are acceptable for the labeled image, see 
@ref{Numeric data types}.
+If your detection map only has integer values, but it is stored in a floating 
point container, you can use Gnuastro's Arithmetic program (see 
@ref{Arithmetic}) to convert it to an integer container, like the example below:
 
 @example
 $ astarithmetic float.fits int32 --output=int.fits
 @end example
 
-It may happen that the whole input dataset is covered by signal, for
-example when working on parts of the Andromeda galaxy, or nearby globular
-clusters (that cover the whole field of view). In such cases, segmentation
-is necessary over the complete dataset, not just specific regions
-(detections). By default Segment will first use the undetected regions as a
-reference to find the proper signal-to-noise ratio of ``true'' clumps (give
-a purity level specified with @option{--snquant}). Therefore, in such
-scenarios you also need to manually give a ``true'' clump signal-to-noise
-ratio with the @option{--clumpsnthresh} option to disable looking into the
-undetected regions, see @ref{Segmentation options}. In such cases, is
-possible to make a detection map that only has the value @code{1} for all
-pixels (for example using @ref{Arithmetic}), but for convenience, you can
-also use @option{--detection=all}.
+It may happen that the whole input dataset is covered by signal, for example 
when working on parts of the Andromeda galaxy, or nearby globular clusters 
(that cover the whole field of view).
+In such cases, segmentation is necessary over the complete dataset, not just 
specific regions (detections).
+By default Segment will first use the undetected regions as a reference to 
find the proper signal-to-noise ratio of ``true'' clumps (at a purity level 
specified with @option{--snquant}).
+Therefore, in such scenarios you also need to manually give a ``true'' clump 
signal-to-noise ratio with the @option{--clumpsnthresh} option to disable 
looking into the undetected regions, see @ref{Segmentation options}.
+In such cases, it is possible to make a detection map that only has the value 
@code{1} for all pixels (for example using @ref{Arithmetic}), but for 
convenience, you can also use @option{--detection=all}.
 
 @item --dhdu
-The HDU/extension containing the detection map given to
-@option{--detection}. Please see the description of @option{--hdu} in
-@ref{Input output options} for the different ways you can identify a
-special extension.
+The HDU/extension containing the detection map given to @option{--detection}.
+Please see the description of @option{--hdu} in @ref{Input output options} for 
the different ways you can identify a special extension.
 
 @item -k STR
 @itemx --kernel=STR
-The kernel used to convolve the input image. The usage of this option is
-identical to NoiseChisel's @option{--kernel} option (@ref{NoiseChisel
-input}). Please see the descriptions there for more. To disable
-convolution, you can give it a value of @option{none}.
+The kernel used to convolve the input image.
+The usage of this option is identical to NoiseChisel's @option{--kernel} 
option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
+To disable convolution, you can give it a value of @option{none}.
 
 @item --khdu
-The HDU/extension containing the kernel used for convolution. For
-acceptable values, please see the description of @option{--hdu} in
-@ref{Input output options}.
+The HDU/extension containing the kernel used for convolution.
+For acceptable values, please see the description of @option{--hdu} in 
@ref{Input output options}.
 
 @item --convolved
-The convolved image to avoid internal convolution by Segment. The usage of
-this option is identical to NoiseChisel's @option{--convolved}
-option. Please see @ref{NoiseChisel input} for a thorough discussion of the
-usefulness and best practices of using this option.
-
-If you want to use the same convolution kernel for detection (with
-@ref{NoiseChisel}) and segmentation, with this option, you can use the same
-convolved image (that is also available in NoiseChisel) and avoid two
-convolutions. However, just be careful to use the input to NoiseChisel as
-the input to Segment also, then use the @option{--sky} and @option{--std}
-to specify the Sky and its standard deviation (from NoiseChisel's
-output). Recall that when NoiseChisel is not called with
-@option{--rawoutput}, the first extension of NoiseChisel's output is the
-@emph{Sky-subtracted} input (see @ref{NoiseChisel output}). So if you use
-the same convolved image that you fed to NoiseChisel, but use NoiseChisel's
-output with Segment's @option{--convolved}, then the convolved image won't
-be Sky subtracted.
+The convolved image to avoid internal convolution by Segment.
+The usage of this option is identical to NoiseChisel's @option{--convolved} 
option.
+Please see @ref{NoiseChisel input} for a thorough discussion of the usefulness 
and best practices of using this option.
+
+If you want to use the same convolution kernel for detection (with 
@ref{NoiseChisel}) and segmentation, with this option, you can use the same 
convolved image (that is also available in NoiseChisel) and avoid two 
convolutions.
+However, just be careful to also use the input to NoiseChisel as the input to 
Segment, then use @option{--sky} and @option{--std} to specify the Sky and its 
standard deviation (from NoiseChisel's output).
+Recall that when NoiseChisel is not called with @option{--rawoutput}, the 
first extension of NoiseChisel's output is the @emph{Sky-subtracted} input (see 
@ref{NoiseChisel output}).
+So if you use the same convolved image that you fed to NoiseChisel, but use 
NoiseChisel's output with Segment's @option{--convolved}, then the convolved 
image won't be Sky subtracted.
 
 @item --chdu
-The HDU/extension containing the convolved image (given to
-@option{--convolved}). For acceptable values, please see the description of
-@option{--hdu} in @ref{Input output options}.
+The HDU/extension containing the convolved image (given to 
@option{--convolved}).
+For acceptable values, please see the description of @option{--hdu} in 
@ref{Input output options}.
 
 @item -L INT[,INT]
 @itemx --largetilesize=INT[,INT]
-The size of the large tiles to use for identifying the clump S/N threshold
-over the undetected regions. The usage of this option is identical to
-NoiseChisel's @option{--largetilesize} option (@ref{NoiseChisel
-input}). Please see the descriptions there for more.
-
-The undetected regions can be a significant fraction of the dataset and
-finding clumps requires sorting of the desired regions, which can be
-slow. To speed up the processing, Segment finds clumps in the undetected
-regions over separate large tiles. This allows it to have to sort a much
-smaller set of pixels and also to treat them independently and in
-parallel. Both these issues greatly speed it up. Just be sure to not
-decrease the large tile sizes too much (less than 100 pixels in each
-dimension). It is important for them to be much larger than the clumps.
+The size of the large tiles to use for identifying the clump S/N threshold 
over the undetected regions.
+The usage of this option is identical to NoiseChisel's 
@option{--largetilesize} option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
+
+The undetected regions can be a significant fraction of the dataset and 
finding clumps requires sorting of the desired regions, which can be slow.
+To speed up the processing, Segment finds clumps in the undetected regions 
over separate large tiles.
+This allows it to sort a much smaller set of pixels and also to treat them 
independently and in parallel.
+Both of these greatly speed it up.
+Just be sure to not decrease the large tile sizes too much (less than 100 
pixels in each dimension).
+It is important for them to be much larger than the clumps.
 
 @end table
 
@@ -14587,182 +13723,128 @@ dimension). It is important for them to be much 
larger than the clumps.
 @node Segmentation options, Segment output, Segment input, Invoking astsegment
 @subsubsection Segmentation options
 
-The options below can be used to configure every step of the segmentation
-process in the Segment program. For a more complete explanation (with
-figures to demonstrate each step), please see Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}, and
-also @ref{Segment}. By default, Segment will follow the procedure described
-in the paper to find the S/N threshold based on the noise properties. This
-can be disabled by directly giving a trustable signal-to-noise ratio to the
-@option{--clumpsnthresh} option.
+The options below can be used to configure every step of the segmentation 
process in the Segment program.
+For a more complete explanation (with figures to demonstrate each step), 
please see Section 3.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa [2015]}, and also @ref{Segment}.
+By default, Segment will follow the procedure described in the paper to find 
the S/N threshold based on the noise properties.
+This can be disabled by directly giving a trustable signal-to-noise ratio to 
the @option{--clumpsnthresh} option.
 
-Recall that you can always see the full list of Gnuastro's options with the
-@option{--help} (see @ref{Getting help}), or @option{--printparams} (or
-@option{-P}) to see their values (see @ref{Operating mode options}).
+Recall that you can always see the full list of Gnuastro's options with the 
@option{--help} (see @ref{Getting help}), or @option{--printparams} (or 
@option{-P}) to see their values (see @ref{Operating mode options}).
 
 @table @option
 
 @item -B FLT
 @itemx --minskyfrac=FLT
-Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a
-large tile. Only (large) tiles with a fraction of undetected pixels (Sky)
-greater than this value will be used for finding clumps. The clumps found
-in the undetected areas will be used to estimate a S/N threshold for true
-clumps. Therefore this is an important option (to decrease) in crowded
-fields. Operationally, this is almost identical to NoiseChisel's
-@option{--minskyfrac} option (@ref{Detection options}). Please see the
-descriptions there for more.
+Minimum fraction (value between 0 and 1) of Sky (undetected) areas in a large 
tile.
+Only (large) tiles with a fraction of undetected pixels (Sky) greater than 
this value will be used for finding clumps.
+The clumps found in the undetected areas will be used to estimate a S/N 
threshold for true clumps.
+Therefore this is an important option in crowded fields, where you may need to 
decrease its value.
+Operationally, this is almost identical to NoiseChisel's @option{--minskyfrac} 
option (@ref{Detection options}).
+Please see the descriptions there for more.
 
 @item --minima
-Build the clumps based on the local minima, not maxima. By default, clumps
-are built starting from local maxima (see Figure 8 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}). Therefore, this option can be useful when you are searching for
-true local minima (for example absorption features).
+Build the clumps based on the local minima, not maxima.
+By default, clumps are built starting from local maxima (see Figure 8 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}).
+Therefore, this option can be useful when you are searching for true local 
minima (for example absorption features).
 
 @item -m INT
 @itemx --snminarea=INT
-The minimum area which a clump in the undetected regions should have in
-order to be considered in the clump Signal to noise ratio measurement. If
-this size is set to a small value, the Signal to noise ratio of false
-clumps will not be accurately found. It is recommended that this value be
-larger than the value to NoiseChisel's @option{--snminarea}. Because the
-clumps are found on the convolved (smoothed) image while the
-pseudo-detections are found on the input image. You can use
-@option{--checksn} and @option{--checksegmentation} to see if your chosen
-value is reasonable or not.
+The minimum area which a clump in the undetected regions should have in order 
to be considered in the clump Signal to noise ratio measurement.
+If this size is set to a small value, the Signal to noise ratio of false 
clumps will not be accurately found.
+It is recommended that this value be larger than the value given to 
NoiseChisel's @option{--snminarea}, because the clumps are found on the 
convolved (smoothed) image while the pseudo-detections are found on the input 
image.
+You can use @option{--checksn} and @option{--checksegmentation} to see if your 
chosen value is reasonable or not.
 
 @item --checksn
-Save the S/N values of the clumps over the sky and detected regions into
-separate tables. If @option{--tableformat} is a FITS format, each table will
-be written into a separate extension of one file suffixed with
-@file{_clumpsn.fits}. If it is plain text, a separate file will be made for
-each table (ending in @file{_clumpsn_sky.txt} and
-@file{_clumpsn_det.txt}). For more on @option{--tableformat} see @ref{Input
-output options}.
-
-You can use these tables to inspect the S/N values and their distribution
-(in combination with the @option{--checksegmentation} option to see where
-the clumps are). You can use Gnuastro's @ref{Statistics} to make a
-histogram of the distribution (ready for plotting in a text file, or a
-crude ASCII-art demonstration on the command-line).
-
-With this option, Segment will abort as soon as the two tables are
-created. This allows you to inspect the steps leading to the final S/N
-quantile threshold, this behavior can be disabled with
-@option{--continueaftercheck}.
+Save the S/N values of the clumps over the sky and detected regions into 
separate tables.
+If @option{--tableformat} is a FITS format, each table will be written into a 
separate extension of one file suffixed with @file{_clumpsn.fits}.
+If it is plain text, a separate file will be made for each table (ending in 
@file{_clumpsn_sky.txt} and @file{_clumpsn_det.txt}).
+For more on @option{--tableformat} see @ref{Input output options}.
+
+You can use these tables to inspect the S/N values and their distribution (in 
combination with the @option{--checksegmentation} option to see where the 
clumps are).
+You can use Gnuastro's @ref{Statistics} to make a histogram of the 
distribution (ready for plotting in a text file, or a crude ASCII-art 
demonstration on the command-line).
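+
+For example, assuming plain-text output and that the S/N measurements are in 
the second column (both are assumptions here; check the produced table's 
metadata), a quick look at the distribution could be:
+
+@example
+$ astsegment in.fits --checksn --tableformat=txt
+$ aststatistics in_clumpsn_sky.txt -c2
+@end example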
+
+With this option, Segment will abort as soon as the two tables are created.
+This allows you to inspect the steps leading to the final S/N quantile 
threshold; this behavior can be disabled with @option{--continueaftercheck}.
 
 @item --minnumfalse=INT
-The minimum number of clumps over undetected (Sky) regions to identify the
-requested Signal-to-Noise ratio threshold. Operationally, this is almost
-identical to NoiseChisel's @option{--minnumfalse} option (@ref{Detection
-options}). Please see the descriptions there for more.
+The minimum number of clumps over undetected (Sky) regions to identify the 
requested Signal-to-Noise ratio threshold.
+Operationally, this is almost identical to NoiseChisel's 
@option{--minnumfalse} option (@ref{Detection options}).
+Please see the descriptions there for more.
 
 @item -c FLT
 @itemx --snquant=FLT
-The quantile of the signal-to-noise ratio distribution of clumps in
-undetected regions, used to define true clumps. After identifying all the
-usable clumps in the undetected regions of the dataset, the given quantile
-of their signal-to-noise ratios is used to define the signal-to-noise ratio
-of a ``true'' clump. Effectively, this can be seen as an inverse p-value
-measure. See Figure 9 and Section 3.2.1 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} for a
-complete explanation. The full distribution of clump signal-to-noise ratios
-over the undetected areas can be saved into a table with @option{--checksn}
-option and visually inspected with @option{--checksegmentation}.
+The quantile of the signal-to-noise ratio distribution of clumps in undetected 
regions, used to define true clumps.
+After identifying all the usable clumps in the undetected regions of the 
dataset, the given quantile of their signal-to-noise ratios is used to define 
the signal-to-noise ratio of a ``true'' clump.
+Effectively, this can be seen as an inverse p-value measure.
+See Figure 9 and Section 3.2.1 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]} for a complete explanation.
+The full distribution of clump signal-to-noise ratios over the undetected 
areas can be saved into a table with @option{--checksn} option and visually 
inspected with @option{--checksegmentation}.
 
 @item -v
 @itemx --keepmaxnearriver
-Keep a clump whose maximum (minimum if @option{--minima} is called) flux is
-8-connected to a river pixel. By default such clumps over detections are
-considered to be noise and are removed irrespective of their brightness
-(see @ref{Flux Brightness and magnitude}). Over large profiles, that sink
-into the noise very slowly, noise can cause part of the profile (which was
-flat without noise) to become a very large and with a very high Signal to
-noise ratio. In such cases, the pixel with the maximum flux in the clump
-will be immediately touching a river pixel.
+Keep a clump whose maximum (minimum if @option{--minima} is called) flux is 
8-connected to a river pixel.
+By default such clumps over detections are considered to be noise and are 
removed irrespective of their brightness (see @ref{Flux Brightness and 
magnitude}).
+Over large profiles that sink into the noise very slowly, noise can cause part 
of the profile (which was flat without noise) to become a very large clump with 
a very high signal-to-noise ratio.
+In such cases, the pixel with the maximum flux in the clump will be 
immediately touching a river pixel.
 
 @item -s FLT
 @itemx --clumpsnthresh=FLT
-The signal-to-noise threshold for true clumps. If this option is given,
-then the segmentation options above will be ignored and the given value
-will be directly used to identify true clumps over the detections. This can
-be useful if you have a large dataset with similar noise properties. You
-can find a robust signal-to-noise ratio based on a (sufficiently large)
-smaller portion of the dataset. Afterwards, with this option, you can speed
-up the processing on the whole dataset. Other scenarios where this option
-may be useful is when, the image might not contain enough/any Sky regions.
+The signal-to-noise threshold for true clumps.
+If this option is given, then the segmentation options above will be ignored 
and the given value will be directly used to identify true clumps over the 
detections.
+This can be useful if you have a large dataset with similar noise properties.
+You can find a robust signal-to-noise ratio based on a (sufficiently large) 
smaller portion of the dataset.
+Afterwards, with this option, you can speed up the processing on the whole 
dataset.
+Another scenario where this option may be useful is when the image doesn't 
contain enough (or any) Sky regions.
 
 @item -G FLT
 @itemx --gthresh=FLT
-Threshold (multiple of the sky standard deviation added with the sky) to
-stop growing true clumps. Once true clumps are found, they are set as the
-basis to segment the detected region. They are grown until the threshold
-specified by this option.
+Threshold (multiple of the sky standard deviation added with the sky) to stop 
growing true clumps.
+Once true clumps are found, they are set as the basis to segment the detected 
region.
+They are grown until the threshold specified by this option.
 
 @item -y INT
 @itemx --minriverlength=INT
-The minimum length of a river between two grown clumps for it to be
-considered in signal-to-noise ratio estimations. Similar to
-@option{--snminarea}, if the length of the river is too short, the
-signal-to-noise ratio can be noisy and unreliable. Any existing rivers
-shorter than this length will be considered as non-existent, independent of
-their Signal to noise ratio. The clumps are grown on the input image,
-therefore this value can be smaller than the value given to
-@option{--snminarea}. Recall that the clumps were defined on the convolved
-image so @option{--snminarea} should be larger.
+The minimum length of a river between two grown clumps for it to be considered 
in signal-to-noise ratio estimations.
+Similar to @option{--snminarea}, if the length of the river is too short, the 
signal-to-noise ratio can be noisy and unreliable.
+Any existing rivers shorter than this length will be considered as 
non-existent, independent of their Signal to noise ratio.
+The clumps are grown on the input image, therefore this value can be smaller 
than the value given to @option{--snminarea}.
+Recall that the clumps were defined on the convolved image so 
@option{--snminarea} should be larger.
 
 @item -O FLT
 @itemx --objbordersn=FLT
-The maximum Signal to noise ratio of the rivers between two grown clumps in
-order to consider them as separate `objects'. If the Signal to noise ratio
-of the river between two grown clumps is larger than this value, they are
-defined to be part of one `object'. Note that the physical reality of these
-`objects' can never be established with one image, or even multiple images
-from one broad-band filter. Any method we devise to define `object's over a
-detected region is ultimately subjective.
-
-Two very distant galaxies or satellites in one halo might lie in the
-same line of sight and be detected as clumps on one detection. On the
-other hand, the connection (through a spiral arm or tidal tail for
-example) between two parts of one galaxy might have such a low surface
-brightness that they are broken up into multiple detections or
-objects. In fact if you have noticed, exactly for this purpose, this is
-the only Signal to noise ratio that the user gives into
-NoiseChisel. The `true' detections and clumps can be objectively
-identified from the noise characteristics of the image, so you don't
-have to give any hand input Signal to noise ratio.
+The maximum Signal to noise ratio of the rivers between two grown clumps in 
order to consider them as separate `objects'.
+If the Signal to noise ratio of the river between two grown clumps is larger 
than this value, they are defined to be part of one `object'.
+Note that the physical reality of these `objects' can never be established 
with one image, or even multiple images from one broad-band filter.
+Any method we devise to define `objects' over a detected region is ultimately 
subjective.
+
+Two very distant galaxies or satellites in one halo might lie in the same line 
of sight and be detected as clumps on one detection.
+On the other hand, the connection (through a spiral arm or tidal tail for 
example) between two parts of one galaxy might have such a low surface 
brightness that they are broken up into multiple detections or objects.
+In fact, as you may have noticed, exactly for this purpose, this is the only 
signal-to-noise ratio that the user has to give to Segment.
+The `true' detections and clumps can be objectively identified from the noise 
characteristics of the image, so you don't have to give any signal-to-noise 
ratio by hand.
 
 @item --checksegmentation
-A file with the suffix @file{_seg.fits} will be created. This file keeps
-all the relevant steps in finding true clumps and segmenting the detections
-into multiple objects in various extensions. Having read the paper or the
-steps above. Examining this file can be an excellent guide in choosing the
-best set of parameters. Note that calling this function will significantly
-slow NoiseChisel. In verbose mode (without the @option{--quiet} option, see
-@ref{Operating mode options}) the important steps (along with their
-extension names) will also be reported.
-
-With this option, NoiseChisel will abort as soon as the two tables are
-created. This behavior can be disabled with @option{--continueaftercheck}.
+A file with the suffix @file{_seg.fits} will be created.
+This file keeps all the relevant steps in finding true clumps and segmenting 
the detections into multiple objects in various extensions.
+Having read the paper or the steps above, examining this file can be an 
excellent guide in choosing the best set of parameters.
+Note that calling this option will significantly slow Segment.
+In verbose mode (without the @option{--quiet} option, see @ref{Operating mode 
options}) the important steps (along with their extension names) will also be 
reported.
+
+With this option, Segment will abort as soon as the check image is created.
+This behavior can be disabled with @option{--continueaftercheck}.
 
 @end table
 
 @node Segment output,  , Segmentation options, Invoking astsegment
 @subsubsection Segment output
 
-The main output of Segment are two label datasets (with integer types,
-separating the dataset's elements into different classes). They have
-HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.
+The main output of Segment consists of two label datasets (with integer types, 
separating the dataset's elements into different classes).
+They have HDU/extension names of @code{CLUMPS} and @code{OBJECTS}.
 
-Similar to all Gnuastro's FITS outputs, the zero-th extension/HDU of the
-main output file only contains header keywords and image or table. It
-contains the Segment input files and parameters (option names and values)
-as FITS keywords. Note that if an option name is longer than 8 characters,
-the keyword name is the second word. The first word is
-@code{HIERARCH}. Also note that according to the FITS standard, the keyword
-names must be in capital letters, therefore, if you want to use Grep to
-inspect these keywords, use the @option{-i} option, like the example below.
+Similar to all Gnuastro's FITS outputs, the zero-th extension/HDU of the main 
output file only contains header keywords and no image or table.
+It contains the Segment input files and parameters (option names and values) 
as FITS keywords.
+Note that if an option name is longer than 8 characters, the keyword name is 
the second word.
+The first word is @code{HIERARCH}.
+Also note that according to the FITS standard, the keyword names must be in 
capital letters, therefore, if you want to use Grep to inspect these keywords, 
use the @option{-i} option, like the example below.
 
 @example
 $ astfits image_segmented.fits -h0 | grep -i snquant
@@ -14770,99 +13852,65 @@ $ astfits image_segmented.fits -h0 | grep -i snquant
 
 @cindex DS9
 @cindex SAO DS9
-By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions,
-Segment's output will also contain the (technically redundant) input
-dataset and the sky standard deviation dataset (if it wasn't a constant
-number). This can help in visually inspecting the result when viewing the
-images as a ``Multi-extension data cube'' in SAO DS9 for example (see
-@ref{Viewing multiextension FITS images}). You can simply flip through the
-extensions and see the same region of the image and its corresponding
-clumps/object labels. It also makes it easy to feed the output (as one
-file) into MakeCatalog when you intend to make a catalog afterwards (see
-@ref{MakeCatalog}. To remove these redundant extensions from the output
-(for example when designing a pipeline), you can use
-@option{--rawoutput}.
-
-The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into
-@ref{MakeCatalog} to generate a catalog for higher-level analysis. If you
-want to treat each clump separately, you can give a very large value (or
-even a NaN, which will always fail) to the @option{--gthresh} option (for
-example @code{--gthresh=1e10} or @code{--gthresh=nan}), see
-@ref{Segmentation options}.
-
-For a complete definition of clumps and objects, please see Section 3.2 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and
-@ref{Segmentation options}. The clumps are ``true'' local maxima (minima if
-@option{--minima} is called) and their surrounding pixels until a local
-minimum/maximum (caused by noise fluctuations, or another ``true''
-clump). Therefore it may happen that some of the input detections aren't
-covered by clumps at all (very diffuse objects without any strong peak),
-while some objects may contain many clumps. Even in those that have clumps,
-there will be regions that are too diffuse. The diffuse regions (within the
-input detected regions) are given a negative label (-1) to help you
-separate them from the undetected regions (with a value of zero).
-
-Each clump is labeled with respect to its host object. Therefore, if an
-object has three clumps for example, the clumps within it have labels 1, 2
-and 3. As a result, if an initial detected region has multiple objects,
-each with a single clump, all the clumps will have a label of 1. The total
-number of clumps in the dataset is stored in the @code{NCLUMPS} keyword of
-the @code{CLUMPS} extension and printed in the verbose output of Segment
-(when @option{--quiet} is not called).
-
-The @code{OBJECTS} extension of the output will give a positive
-counter/label to every detected pixel in the input. As described in
-Akhlaghi and Ichikawa [2015], the true clumps are grown until a certain
-threshold. If the grown clumps touch other clumps and the connection is
-strong enough, they are considered part of the same @emph{object}. Once
-objects (grown clumps) are identified, they are grown to cover the whole
-detected area.
+By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's 
output will also contain the (technically redundant) input dataset and the sky 
standard deviation dataset (if it wasn't a constant number).
+This can help in visually inspecting the result when viewing the images as a 
``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing 
multiextension FITS images}).
+You can simply flip through the extensions and see the same region of the 
image and its corresponding clumps/object labels.
+It also makes it easy to feed the output (as one file) into MakeCatalog when 
you intend to make a catalog afterwards (see @ref{MakeCatalog}).
+To remove these redundant extensions from the output (for example when 
designing a pipeline), you can use @option{--rawoutput}.
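+
+For example, with SAO DS9 you can flip through the extensions as a cube:
+
+@example
+$ ds9 -mecube image_segmented.fits -zscale -zoom to fit
+@end example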
+
+The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into 
@ref{MakeCatalog} to generate a catalog for higher-level analysis.
+If you want to treat each clump separately, you can give a very large value 
(or even a NaN, which will always fail) to the @option{--gthresh} option (for 
example @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation 
options}.
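+
+For example, since the default output also contains the values and Sky 
standard deviation datasets, a hypothetical MakeCatalog call (the requested 
columns are only for illustration) could simply be:
+
+@example
+$ astmkcatalog image_segmented.fits --ids --ra --dec \
+               --magnitude --clumpscat
+@end example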
+
+For a complete definition of clumps and objects, please see Section 3.2 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and 
@ref{Segmentation options}.
+The clumps are ``true'' local maxima (minima if @option{--minima} is called) 
and their surrounding pixels until a local minimum/maximum (caused by noise 
fluctuations, or another ``true'' clump).
+Therefore it may happen that some of the input detections aren't covered by 
clumps at all (very diffuse objects without any strong peak), while some 
objects may contain many clumps.
+Even in those that have clumps, there will be regions that are too diffuse.
+The diffuse regions (within the input detected regions) are given a negative 
label (-1) to help you separate them from the undetected regions (with a value 
of zero).
+
+Each clump is labeled with respect to its host object.
+Therefore, if an object has three clumps for example, the clumps within it 
have labels 1, 2 and 3.
+As a result, if an initial detected region has multiple objects, each with a 
single clump, all the clumps will have a label of 1.
+The total number of clumps in the dataset is stored in the @code{NCLUMPS} 
keyword of the @code{CLUMPS} extension and printed in the verbose output of 
Segment (when @option{--quiet} is not called).
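+
+Similar to the keyword example above, you can inspect it with:
+
+@example
+$ astfits image_segmented.fits -hCLUMPS | grep -i nclumps
+@end example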
+
+The @code{OBJECTS} extension of the output will give a positive counter/label 
to every detected pixel in the input.
+As described in Akhlaghi and Ichikawa [2015], the true clumps are grown until 
a certain threshold.
+If the grown clumps touch other clumps and the connection is strong enough, 
they are considered part of the same @emph{object}.
+Once objects (grown clumps) are identified, they are grown to cover the whole 
detected area.
 
 The options to configure the output of Segment are listed below:
 
 @table @option
 @item --continueaftercheck
-Don't abort Segment after producing the check image(s). The usage of this
-option is identical to NoiseChisel's @option{--continueaftercheck} option
-(@ref{NoiseChisel input}). Please see the descriptions there for more.
+Don't abort Segment after producing the check image(s).
+The usage of this option is identical to NoiseChisel's 
@option{--continueaftercheck} option (@ref{NoiseChisel input}).
+Please see the descriptions there for more.
 
 @item --onlyclumps
-Abort Segment after finding true clumps and don't continue with finding
-options. Therefore, no @code{OBJECTS} extension will be present in the
-output. Each true clump in @code{CLUMPS} will get a unique label, but
-diffuse regions will still have a negative value.
+Abort Segment after finding true clumps and don't continue with finding 
objects.
+Therefore, no @code{OBJECTS} extension will be present in the output.
+Each true clump in @code{CLUMPS} will get a unique label, but diffuse regions 
will still have a negative value.
 
-To make a catalog of the clumps, the input detection map (where all the
-labels are one) can be fed into @ref{MakeCatalog} along with the input
-detection map to Segment (that only had a value of @code{1} for all
-detected pixels) with @option{--clumpscat}. In this way, MakeCatalog will
-assume all the clumps belong to a single ``object''.
+To make a catalog of the clumps, the input detection map to Segment (where all the labels are one, since it only had a value of @code{1} for all detected pixels) can be fed into @ref{MakeCatalog} along with this @code{CLUMPS} output, using @option{--clumpscat}.
+In this way, MakeCatalog will assume all the clumps belong to a single 
``object''.
 
 @item --grownclumps
-In the output @code{CLUMPS} extension, store the grown clumps. If a
-detected region contains no clumps or only one clump, then it will be fully
-given a label of @code{1} (no negative valued pixels).
+In the output @code{CLUMPS} extension, store the grown clumps.
+If a detected region contains no clumps or only one clump, then it will be 
fully given a label of @code{1} (no negative valued pixels).
 
 @item --rawoutput
-Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output
-file. Without this option (by default), the first and last extensions of
-the output will the Sky-subtracted input dataset and the Sky standard
-deviation dataset (if it wasn't a number). When the datasets are small,
-these redundant extensions can make it convenient to inspect the results
-visually or feed the output to @ref{MakeCatalog} for
-measurements. Ultimately both the input and Sky standard deviation datasets
-are redundant (you had them before running Segment). When the inputs are
-large/numerous, these extra dataset can be a burden.
+Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output file.
+Without this option (by default), the first and last extensions of the output will be the Sky-subtracted input dataset and the Sky standard deviation dataset (if it wasn't a number).
+When the datasets are small, these redundant extensions can make it convenient 
to inspect the results visually or feed the output to @ref{MakeCatalog} for 
measurements.
+Ultimately both the input and Sky standard deviation datasets are redundant 
(you had them before running Segment).
+When the inputs are large/numerous, these extra datasets can be a burden.
 @end table
 
 @cartouche
 @noindent
 @cindex Compression
-@strong{Save space:} with the @option{--rawoutput}, Segment's output will
-only be two labeled datasets (only containing integers). Since they have no
-noise, such datasets can be compressed very effectively (without any loss
-of data) with exceptionally high compression ratios. You can use the
-following command to compress it with the best ratio:
+@strong{Save space:} with @option{--rawoutput}, Segment's output will only be two labeled datasets (only containing integers).
+Since they have no noise, such datasets can be compressed very effectively 
(without any loss of data) with exceptionally high compression ratios.
+You can use the following command to compress it with the best ratio:
 
 @cindex GNU Gzip
 @example
@@ -14870,10 +13918,7 @@ $ gzip --best segment_output.fits
 @end example
 
 @noindent
-The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's
-programs directly, without having to decompress it separately (it will just
-take them a little longer, because they have to decompress it internally
-before use).
+The resulting @file{.fits.gz} file can then be fed into any of Gnuastro's 
programs directly, without having to decompress it separately (it will just 
take them a little longer, because they have to decompress it internally before 
use).
 @end cartouche
 
 
@@ -14884,41 +13929,22 @@ before use).
 @node MakeCatalog, Match, Segment, Data analysis
 @section MakeCatalog
 
-At the lowest level, a dataset (for example an image) is just a collection
-of values, placed after each other in any number of dimensions (for example
-an image is a 2D dataset). Each data-element (pixel) just has two
-properties: its position (relative to the rest) and its value. In
-higher-level analysis, an entire dataset (an image for example) is rarely
-treated as a singular entity@footnote{You can derive the over-all
-properties of a complete dataset (1D table column, 2D image, or 3D
-data-cube) treated as a single entity with Gnuastro's Statistics program
-(see @ref{Statistics}).}. You usually want to know/measure the properties
-of the (separate) scientifically interesting targets that are embedded in
-it. For example the magnitudes, positions and elliptical properties of the
-galaxies that are in the image.
-
-MakeCatalog is Gnuastro's program for localized measurements over a
-dataset. In other words, MakeCatalog is Gnuastro's program to convert
-low-level datasets (like images), to high level catalogs. The role of
-MakeCatalog in a scientific analysis and the benefits of its model (where
-detection/segmentation is separated from measurement) is discussed in
-@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A
-published paper cannot undergo any more change, so this manual is the
-definitive guide.} and summarized in @ref{Detection and catalog
-production}. We strongly recommend reading this short paper for a better
-understanding of this methodology. Understanding the effective usage of
-MakeCatalog, will thus also help effective use of other (lower-level)
-Gnuastro's programs like @ref{NoiseChisel} or @ref{Segment}.
-
-It is important to define your regions of interest for measurements
-@emph{before} running MakeCatalog. MakeCatalog is specialized in doing
-measurements accurately and efficiently. Therefore MakeCatalog will not do
-detection, segmentation, or defining apertures on requested positions in
-your dataset. Following Gnuastro's modularity principle, there are separate
-and highly specialized and customizable programs in Gnuastro for these
-other jobs as shown below (for a usage example in a real-world analysis,
-see @ref{General program usage tutorial} and @ref{Detecting large extended
-targets}).
+At the lowest level, a dataset (for example an image) is just a collection of 
values, placed after each other in any number of dimensions (for example an 
image is a 2D dataset).
+Each data-element (pixel) just has two properties: its position (relative to 
the rest) and its value.
+In higher-level analysis, an entire dataset (an image for example) is rarely 
treated as a singular entity@footnote{You can derive the over-all properties of 
a complete dataset (1D table column, 2D image, or 3D data-cube) treated as a 
single entity with Gnuastro's Statistics program (see @ref{Statistics}).}.
+You usually want to know/measure the properties of the (separate) 
scientifically interesting targets that are embedded in it.
+For example the magnitudes, positions and elliptical properties of the 
galaxies that are in the image.
+
+MakeCatalog is Gnuastro's program for localized measurements over a dataset.
+In other words, MakeCatalog is Gnuastro's program to convert low-level datasets (like images) to high-level catalogs.
+The role of MakeCatalog in a scientific analysis and the benefits of its model (where detection/segmentation is separated from measurement) are discussed in @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}@footnote{A published paper cannot undergo any more change, so this manual is the definitive guide.} and summarized in @ref{Detection and catalog production}.
+We strongly recommend reading this short paper for a better understanding of 
this methodology.
+Understanding the effective usage of MakeCatalog will thus also help the effective use of Gnuastro's other (lower-level) programs like @ref{NoiseChisel} or @ref{Segment}.
+
+It is important to define your regions of interest for measurements 
@emph{before} running MakeCatalog.
+MakeCatalog is specialized in doing measurements accurately and efficiently.
+Therefore MakeCatalog will not do detection, segmentation, or defining 
apertures on requested positions in your dataset.
+Following Gnuastro's modularity principle, there are separate and highly 
specialized and customizable programs in Gnuastro for these other jobs as shown 
below (for a usage example in a real-world analysis, see @ref{General program 
usage tutorial} and @ref{Detecting large extended targets}).
 
 @itemize
 @item
@@ -14934,37 +13960,22 @@ targets}).
 @ref{MakeProfiles}: Aperture creation for known positions.
 @end itemize
 
-These programs will/can return labeled dataset(s) to be fed into
-MakeCatalog. A labeled dataset for measurement has the same size/dimensions
-as the input, but with integer valued pixels that have the label/counter
-for each sub-set of pixels that must be measured together. For example all
-the pixels covering one galaxy in an image, get the same label.
-
-The requested measurements are then done on similarly labeled pixels. The
-final result is a catalog where each row corresponds to the measurements on
-pixels with a specific label. For example the flux weighted average position
-of all the pixels with a label of 42 will be written into the 42nd row of
-the output catalog/table's central position column@footnote{See
-@ref{Measuring elliptical parameters} for a discussion on this and the
-derivation of positional parameters, which includes the
-center.}. Similarly, the sum of all these pixels will be the 42nd row in
-the brightness column, etc. Pixels with labels equal to, or smaller
-than, zero will be ignored by MakeCatalog. In other words, the number of
-rows in MakeCatalog's output is already known before running it (the
-maximum value of the labeled dataset).
-
-Before getting into the details of running MakeCatalog (in @ref{Invoking
-astmkcatalog}, we'll start with a discussion on the basics of its approach
-to separating detection from measurements in @ref{Detection and catalog
-production}. A very important factor in any measurement is understanding
-its validity range, or limits. Therefore in @ref{Quantifying measurement
-limits}, we'll discuss how to estimate the reliability of the detection and
-basic measurements. This section will continue with a derivation of
-elliptical parameters from the labeled datasets in @ref{Measuring
-elliptical parameters}. For those who feel MakeCatalog's existing
-measurements/columns aren't enough and would like to add further
-measurements, in @ref{Adding new columns to MakeCatalog}, a checklist of
-steps is provided for readily adding your own new measurements/columns.
+These programs will/can return labeled dataset(s) to be fed into MakeCatalog.
+A labeled dataset for measurement has the same size/dimensions as the input, 
but with integer valued pixels that have the label/counter for each sub-set of 
pixels that must be measured together.
+For example, all the pixels covering one galaxy in an image get the same label.
+
+The requested measurements are then done on similarly labeled pixels.
+The final result is a catalog where each row corresponds to the measurements 
on pixels with a specific label.
+For example the flux weighted average position of all the pixels with a label 
of 42 will be written into the 42nd row of the output catalog/table's central 
position column@footnote{See @ref{Measuring elliptical parameters} for a 
discussion on this and the derivation of positional parameters, which includes 
the center.}.
+Similarly, the sum of all these pixels will be the 42nd row in the brightness 
column, etc.
+Pixels with labels equal to, or smaller than, zero will be ignored by 
MakeCatalog.
+In other words, the number of rows in MakeCatalog's output is already known 
before running it (the maximum value of the labeled dataset).
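+
+As a minimal sketch of this logic (in Python with NumPy, over a tiny hypothetical one-dimensional dataset, not MakeCatalog's actual implementation), the measurements described above can be written as:
+
+@example
+import numpy as np
+
+# Hypothetical values and their labels (labels <= 0 are ignored).
+values = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
+labels = np.array([0,   1,   1,   2,   2,   2])
+
+nrows = labels.max()     # the number of rows is known in advance
+for lab in range(1, nrows + 1):
+    b = values[labels == lab]          # values of this label
+    x = np.where(labels == lab)[0]     # their positions
+    print(lab, b.sum(), (b * x).sum() / b.sum())
+@end example
+
+Each printed row holds that label's total value (the brightness column) and its flux-weighted position.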
+
+Before getting into the details of running MakeCatalog (in @ref{Invoking astmkcatalog}), we'll start with a discussion on the basics of its approach to separating detection from measurements in @ref{Detection and catalog production}.
+A very important factor in any measurement is understanding its validity 
range, or limits.
+Therefore in @ref{Quantifying measurement limits}, we'll discuss how to 
estimate the reliability of the detection and basic measurements.
+This section will continue with a derivation of elliptical parameters from the 
labeled datasets in @ref{Measuring elliptical parameters}.
+For those who feel MakeCatalog's existing measurements/columns aren't enough 
and would like to add further measurements, in @ref{Adding new columns to 
MakeCatalog}, a checklist of steps is provided for readily adding your own new 
measurements/columns.
 
 @menu
 * Detection and catalog production::  Discussing why/how to treat these 
separately
@@ -14977,68 +13988,42 @@ steps is provided for readily adding your own new 
measurements/columns.
 @node Detection and catalog production, Quantifying measurement limits, 
MakeCatalog, MakeCatalog
 @subsection Detection and catalog production
 
-Most existing common tools in low-level astronomical data-analysis (for
-example
-SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}})
-merge the two processes of detection and measurement (catalog production)
-in one program. However, in light of Gnuastro's modularized approach
-(modeled on the Unix system) detection is separated from measurements and
-catalog production. This modularity is therefore new to many experienced
-astronomers and deserves a short review here. Further discussion on the
-benefits of this methodology can be seen in
-@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
-
-As discussed in the introduction of @ref{MakeCatalog}, detection
-(identifying which pixels to do measurements on) can be done with different
-programs. Their outputs (a labeled dataset) can be directly fed into
-MakeCatalog to do the measurements and write the result as a
-catalog/table. Beyond that, Gnuastro's modular approach has many benefits
-that will become clear as you get more experienced in astronomical data
-analysis and want to be more creative in using your valuable data for the
-exciting scientific project you are working on. In short the reasons for
-this modularity can be classified as below:
+Most existing common tools in low-level astronomical data-analysis (for 
example 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
+However, in light of Gnuastro's modularized approach (modeled on the Unix 
system) detection is separated from measurements and catalog production.
+This modularity is therefore new to many experienced astronomers and deserves 
a short review here.
+Further discussion on the benefits of this methodology can be seen in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
+
+As discussed in the introduction of @ref{MakeCatalog}, detection (identifying 
which pixels to do measurements on) can be done with different programs.
+Their outputs (a labeled dataset) can be directly fed into MakeCatalog to do 
the measurements and write the result as a catalog/table.
+Beyond that, Gnuastro's modular approach has many benefits that will become 
clear as you get more experienced in astronomical data analysis and want to be 
more creative in using your valuable data for the exciting scientific project 
you are working on.
+In short, the reasons for this modularity can be classified as below:
 
 @itemize
 
 @item
-Simplicity/robustness of independent, modular tools: making a catalog is a
-logically separate process from labeling (detection, segmentation, or
-aperture production). A user might want to do certain operations on the
-labeled regions before creating a catalog for them. Another user might want
-the properties of the same pixels/objects in another image (another filter
-for example) to measure the colors or SED fittings.
-
-Here is an example of doing both: suppose you have images in various
-broad band filters at various resolutions and orientations. The image
-of one color will thus not lie exactly on another or even be in the
-same scale. However, it is imperative that the same pixels be used in
-measuring the colors of galaxies.
-
-To solve the problem, NoiseChisel can be run on the reference image to
-generate the labeled detection image. Afterwards, the labeled image can be
-warped into the grid of the other color (using @ref{Warp}). MakeCatalog
-will then generate the same catalog for both colors (with the different
-labeled images). It is currently customary to warp the images to the same
-pixel grid, however, modification of the scientific dataset is very harmful
-for the data and creates correlated noise. It is much more accurate to do
-the transformations on the labeled image.
+Simplicity/robustness of independent, modular tools: making a catalog is a 
logically separate process from labeling (detection, segmentation, or aperture 
production).
+A user might want to do certain operations on the labeled regions before 
creating a catalog for them.
+Another user might want the properties of the same pixels/objects in another image (another filter for example) to measure colors or do SED fitting.
+
+Here is an example of doing both: suppose you have images in various broad 
band filters at various resolutions and orientations.
+The image of one color will thus not lie exactly on another or even be in the 
same scale.
+However, it is imperative that the same pixels be used in measuring the colors 
of galaxies.
+
+To solve the problem, NoiseChisel can be run on the reference image to 
generate the labeled detection image.
+Afterwards, the labeled image can be warped into the grid of the other color 
(using @ref{Warp}).
+MakeCatalog will then generate the same catalog for both colors (with the 
different labeled images).
+It is currently customary to warp the images to the same pixel grid; however, modifying the scientific dataset is very harmful for the data and creates correlated noise.
+It is much more accurate to do the transformations on the labeled image.
 
 @item
-Complexity of a monolith: Adding in a catalog functionality to the detector
-program will add several more steps (and many more options) to its
-processing that can equally well be done outside of it. This makes
-following what the program does harder for the users and developers, it can
-also potentially add many bugs.
-
-As an example, if the parameter you want to measure over one profile is not
-provided by the developers of MakeCatalog. You can simply open this tiny
-little program and add your desired calculation easily. This process is
-discussed in @ref{Adding new columns to MakeCatalog}. However, if making a
-catalog was part of NoiseChisel for example, adding a new
-column/measurement would require a lot of energy to understand all the
-steps and internal structures of that huge program. It might even be so
-intertwined with its processing, that adding new columns might cause
-problems/bugs in its primary job (detection).
+Complexity of a monolith: adding catalog functionality to the detector program will add several more steps (and many more options) to its processing that can equally well be done outside of it.
+This makes following what the program does harder for users and developers, and it can also potentially add many bugs.
+
+As an example, if the parameter you want to measure over one profile is not provided by the developers of MakeCatalog, you can simply open this tiny little program and add your desired calculation easily.
+This process is discussed in @ref{Adding new columns to MakeCatalog}.
+However, if making a catalog was part of NoiseChisel for example, adding a new 
column/measurement would require a lot of energy to understand all the steps 
and internal structures of that huge program.
+It might even be so intertwined with its processing, that adding new columns 
might cause problems/bugs in its primary job (detection).
 
 @end itemize
 
@@ -15056,157 +14041,93 @@ problems/bugs in its primary job (detection).
 @cindex Object magnitude limit
 @cindex Limit, object/clump magnitude
 @cindex Magnitude, object/clump detection limit
-No measurement on a real dataset can be perfect: you can only reach a
-certain level/limit of accuracy. Therefore, a meaningful (scientific)
-analysis requires an understanding of these limits for the dataset and your
-analysis tools: different datasets have different noise properties and
-different detection methods (one method/algorithm/software that is run with
-a different set of parameters is considered as a different detection
-method) will have different abilities to detect or measure certain kinds of
-signal (astronomical objects) and their properties in the dataset. Hence,
-quantifying the detection and measurement limitations with a particular
-dataset and analysis tool is the most crucial/critical aspect of any
-high-level analysis.
-
-Here, we'll review some of the most general limits that are important in
-any astronomical data analysis and how MakeCatalog makes it easy to find
-them. Depending on the higher-level analysis, there are more tests that
-must be done, but these are relatively low-level and usually necessary in
-most cases. In astronomy, it is common to use the magnitude (a unit-less
-scale) and physical units, see @ref{Flux Brightness and
-magnitude}. Therefore the measurements discussed here are commonly used in
-units of magnitudes.
+No measurement on a real dataset can be perfect: you can only reach a certain 
level/limit of accuracy.
+Therefore, a meaningful (scientific) analysis requires an understanding of 
these limits for the dataset and your analysis tools: different datasets have 
different noise properties and different detection methods (one 
method/algorithm/software that is run with a different set of parameters is 
considered as a different detection method) will have different abilities to 
detect or measure certain kinds of signal (astronomical objects) and their 
properties in the dataset.
+Hence, quantifying the detection and measurement limitations with a particular 
dataset and analysis tool is the most crucial/critical aspect of any high-level 
analysis.
+
+Here, we'll review some of the most general limits that are important in any 
astronomical data analysis and how MakeCatalog makes it easy to find them.
+Depending on the higher-level analysis, there are more tests that must be 
done, but these are relatively low-level and usually necessary in most cases.
+In astronomy, it is common to use the magnitude (a unit-less scale) and 
physical units, see @ref{Flux Brightness and magnitude}.
+Therefore the measurements discussed here are commonly used in units of 
magnitudes.
 
 @table @asis
 
 @item Surface brightness limit (of whole dataset)
 @cindex Surface brightness
-As we make more observations on one region of the sky, and add the
-observations into one dataset, the signal and noise both increase. However,
-the signal increase much faster than the noise: assuming you add @mymath{N}
-datasets with equal exposure times, the signal will increases as a multiple
-of @mymath{N}, while noise increases as @mymath{\sqrt{N}}. Thus this
-increases the signal-to-noise ratio. Qualitatively, fainter (per pixel)
-parts of the objects/signal in the image will become more
-visible/detectable. The noise-level is known as the dataset's surface
-brightness limit.
-
-You can think of the noise as muddy water that is completely covering a
-flat ground@footnote{The ground is the sky value in this analogy, see
-@ref{Sky value}. Note that this analogy only holds for a flat sky value
-across the surface of the image or ground.}. The signal (or astronomical
-objects in this analogy) will be summits/hills that start from the flat sky
-level (under the muddy water) and can sometimes reach outside of the muddy
-water. Let's assume that in your first observation the muddy water has just
-been stirred and you can't see anything through it. As you wait and make
-more observations/exposures, the mud settles down and the @emph{depth} of
-the transparent water increases, making the summits visible. As the depth
-of clear water increases, the parts of the hills with lower heights (parts
-with lower surface brightness) can be seen more clearly. In this analogy,
-height (from the ground) is @emph{surface brightness}@footnote{Note that
-this muddy water analogy is not perfect, because while the water-level
-remains the same all over a peak, in data analysis, the Poisson noise
-increases with the level of data.} and the height of the muddy water is
-your surface brightness limit.
+As we make more observations on one region of the sky, and add the 
observations into one dataset, the signal and noise both increase.
+However, the signal increases much faster than the noise: assuming you add @mymath{N} datasets with equal exposure times, the signal will increase as a multiple of @mymath{N}, while the noise increases as @mymath{\sqrt{N}}.
+Thus the signal-to-noise ratio increases.
+Qualitatively, fainter (per pixel) parts of the objects/signal in the image 
will become more visible/detectable.
+The noise-level is known as the dataset's surface brightness limit.
+
+You can think of the noise as muddy water that is completely covering a flat 
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky 
value}.
+Note that this analogy only holds for a flat sky value across the surface of 
the image or ground.}.
+The signal (or astronomical objects in this analogy) will be summits/hills 
that start from the flat sky level (under the muddy water) and can sometimes 
reach outside of the muddy water.
+Let's assume that in your first observation the muddy water has just been 
stirred and you can't see anything through it.
+As you wait and make more observations/exposures, the mud settles down and the 
@emph{depth} of the transparent water increases, making the summits visible.
+As the depth of clear water increases, the parts of the hills with lower 
heights (parts with lower surface brightness) can be seen more clearly.
+In this analogy, height (from the ground) is @emph{surface 
brightness}@footnote{Note that this muddy water analogy is not perfect, because 
while the water-level remains the same all over a peak, in data analysis, the 
Poisson noise increases with the level of data.} and the height of the muddy 
water is your surface brightness limit.
 
 @cindex Data's depth
-The outputs of NoiseChisel include the Sky standard deviation
-(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated
-from the undetected pixels in each tile, see @ref{Tessellation} and
-@ref{NoiseChisel output}. Let's take @mymath{\sigma_m} as the median
-@mymath{\sigma} over the successful meshes in the image (prior to
-interpolation or smoothing).
-
-On different instruments, pixels have different physical sizes (for example
-in micro-meters, or spatial angle over the sky). Nevertheless, a pixel is
-our unit of data collection. In other words, while quantifying the noise,
-the physical or projected size of the pixels is irrelevant. We thus define
-the Surface brightness limit or @emph{depth}, in units of magnitude/pixel,
-of a data-set, with zeropoint magnitude @mymath{z}, with the @mymath{n}th
-multiple of @mymath{\sigma_m} as (see @ref{Flux Brightness and magnitude}):
+The outputs of NoiseChisel include the Sky standard deviation 
(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated from 
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel 
output}.
+Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful 
meshes in the image (prior to interpolation or smoothing).
+
+On different instruments, pixels have different physical sizes (for example in 
micro-meters, or spatial angle over the sky).
+Nevertheless, a pixel is our unit of data collection.
+In other words, while quantifying the noise, the physical or projected size of 
the pixels is irrelevant.
+We thus define the surface brightness limit or @emph{depth}, in units of magnitude/pixel, of a dataset with zeropoint magnitude @mymath{z}, at the @mymath{n}th multiple of @mymath{\sigma_m}, as (see @ref{Flux Brightness and magnitude}):
 
 @dispmath{SB_{\rm Pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z}
 
 @cindex XDF survey
 @cindex CANDELS survey
 @cindex eXtreme Deep Field (XDF) survey
-As an example, the XDF survey covers part of the sky that the Hubble space
-telescope has observed the most (for 85 orbits) and is consequently very
-small (@mymath{\sim4} arcmin@mymath{^2}). On the other hand, the CANDELS
-survey, is one of the widest multi-color surveys covering several fields
-(about 720 arcmin@mymath{^2}) but its deepest fields have only 9 orbits
-observation. The depth of the XDF and CANDELS-deep surveys in the near
-infrared WFC3/F160W filter are respectively 34.40 and 32.45
-magnitudes/pixel. In a single orbit image, this same field has a depth of
-31.32. Recall that a larger magnitude corresponds to less brightness.
-
-The low-level magnitude/pixel measurement above is only useful when all the
-datasets you want to use belong to one instrument (telescope and
-camera). However, you will often find yourself using datasets from various
-instruments with different pixel scales (projected pixel sizes). If we know
-the pixel scale, we can obtain a more easily comparable surface brightness
-limit in units of: magnitude/arcsec@mymath{^2}. Let's assume that the
-dataset has a zeropoint value of @mymath{z}, and every pixel is @mymath{p}
-arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an
-area of @mymath{A} arcsec@mymath{^2}). If the surface brightness is desired
-at the @mymath{n}th multiple of @mymath{\sigma_m}, the following equation
-(in units of magnitudes per A arcsec@mymath{^2}) can be used:
-
-@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over
-p}\right)+z}}
-
-Note that this is just an extrapolation of the per-pixel measurement
-@mymath{\sigma_m}. So it should be used with extreme care: for example the
-dataset must have an approximately flat depth or noise properties
-overall. A more accurate measure for each detection over the dataset is
-known as the @emph{upper-limit magnitude} which actually uses random
-positioning of each detection's area/footprint (see below). It doesn't
-extrapolate and even accounts for correlated noise patterns in relation to
-that detection. Therefore, the upper-limit magnitude is a much better
-measure of your dataset's surface brightness limit for each particular
-object.
-
-MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and
-@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the
-output catalog(s). Just note that @mymath{SB_{\rm Projected}} is only
-calculated if the input has World Coordinate System (WCS).
+As an example, the XDF survey covers part of the sky that the Hubble Space Telescope has observed the most (for 85 orbits) and is consequently very small (@mymath{\sim4} arcmin@mymath{^2}).
+On the other hand, the CANDELS survey is one of the widest multi-color surveys, covering several fields (about 720 arcmin@mymath{^2}), but its deepest fields have only 9 orbits of observation.
+The depths of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel.
+In a single orbit image, this same field has a depth of 31.32.
+Recall that a larger magnitude corresponds to less brightness.
+
+The low-level magnitude/pixel measurement above is only useful when all the 
datasets you want to use belong to one instrument (telescope and camera).
+However, you will often find yourself using datasets from various instruments 
with different pixel scales (projected pixel sizes).
+If we know the pixel scale, we can obtain a more easily comparable surface brightness limit in units of magnitude/arcsec@mymath{^2}.
+Let's assume that the dataset has a zeropoint value of @mymath{z}, and every 
pixel is @mymath{p} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels 
that cover an area of @mymath{A} arcsec@mymath{^2}).
+If the surface brightness is desired at the @mymath{n}th multiple of 
@mymath{\sigma_m}, the following equation (in units of magnitudes per A 
arcsec@mymath{^2}) can be used:
+
+@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over p}\right)}+z}
+
+Note that this is just an extrapolation of the per-pixel measurement 
@mymath{\sigma_m}.
+So it should be used with extreme care: for example the dataset must have an 
approximately flat depth or noise properties overall.
+A more accurate measure for each detection over the dataset is known as the @emph{upper-limit magnitude}, which actually uses random positioning of each detection's area/footprint (see below).
+It doesn't extrapolate and even accounts for correlated noise patterns in 
relation to that detection.
+Therefore, the upper-limit magnitude is a much better measure of your 
dataset's surface brightness limit for each particular object.
+
+MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and 
@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the output 
catalog(s).
+Just note that @mymath{SB_{\rm Projected}} is only calculated if the input has 
World Coordinate System (WCS).
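+
+As a minimal numerical sketch of the two relations above (in plain Python, with purely hypothetical values for the noise, zeropoint and pixel scale):
+
+@example
+import math
+
+sigma_m = 0.02    # hypothetical median sky standard deviation
+z       = 22.5    # hypothetical zeropoint magnitude
+n       = 3       # multiple of sigma_m
+p       = 0.01    # hypothetical pixel area in arcsec^2
+A       = 1.0     # reference area in arcsec^2
+
+sb_pixel     = -2.5 * math.log10(n * sigma_m) + z
+sb_projected = -2.5 * math.log10(n * sigma_m * math.sqrt(A / p)) + z
+print(sb_pixel, sb_projected)
+@end example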
 
 @item Completeness limit (of each detection)
 @cindex Completeness
-As the surface brightness of the objects decreases, the ability to detect
-them will also decrease. An important statistic is thus the fraction of
-objects of similar morphology and brightness that will be identified with
-our detection algorithm/parameters in the given image. This fraction is
-known as completeness. For brighter objects, completeness is 1: all bright
-objects that might exist over the image will be detected. However, as we go
-to objects of lower overall surface brightness, we will fail to detect
-some, and gradually we are not able to detect anything any more. For a
-given profile, the magnitude where the completeness drops below a certain
-level (usually above @mymath{90\%}) is known as the completeness limit.
+As the surface brightness of the objects decreases, the ability to detect them 
will also decrease.
+An important statistic is thus the fraction of objects of similar morphology 
and brightness that will be identified with our detection algorithm/parameters 
in the given image.
+This fraction is known as completeness.
+For brighter objects, completeness is 1: all bright objects that might exist 
over the image will be detected.
+However, as we go to objects of lower overall surface brightness, we will fail 
to detect some, and gradually we are not able to detect anything any more.
+For a given profile, the magnitude where the completeness drops below a 
certain level (usually above @mymath{90\%}) is known as the completeness limit.
 
 @cindex Purity
 @cindex False detections
 @cindex Detections false
-Another important parameter in measuring completeness is purity: the
-fraction of true detections to all true detections. In effect purity is the
-measure of contamination by false detections: the higher the purity, the
-lower the contamination. Completeness and purity are anti-correlated: if we
-can allow a large number of false detections (that we might be able to
-remove by other means), we can significantly increase the completeness
-limit.
-
-One traditional way to measure the completeness and purity of a given
-sample is by embedding mock profiles in regions of the image with no
-detection. However in such a study we must be really careful to choose
-model profiles as similar to the target of interest as possible.
+Another important parameter in measuring completeness is purity: the fraction of true detections to all detections.
+In effect purity is the measure of contamination by false detections: the 
higher the purity, the lower the contamination.
+Completeness and purity are anti-correlated: if we can allow a large number of 
false detections (that we might be able to remove by other means), we can 
significantly increase the completeness limit.
+
+One traditional way to measure the completeness and purity of a given sample 
is by embedding mock profiles in regions of the image with no detection.
+However in such a study we must be really careful to choose model profiles as 
similar to the target of interest as possible.
 
 @item Magnitude measurement error (of each detection)
-Any measurement has an error and this includes the derived magnitude for an
-object. Note that this value is only meaningful when the object's magnitude
-is brighter than the upper-limit magnitude (see the next items in this
-list). As discussed in @ref{Flux Brightness and magnitude}, the magnitude
-(@mymath{M}) of an object with brightness @mymath{B} and Zeropoint
-magnitude @mymath{z} can be written as:
+Any measurement has an error and this includes the derived magnitude for an 
object.
+Note that this value is only meaningful when the object's magnitude is 
brighter than the upper-limit magnitude (see the next items in this list).
+As discussed in @ref{Flux Brightness and magnitude}, the magnitude 
(@mymath{M}) of an object with brightness @mymath{B} and Zeropoint magnitude 
@mymath{z} can be written as:
 
 @dispmath{M=-2.5\log_{10}(B)+z}
 
@@ -15216,83 +14137,49 @@ Calculating the derivative with respect to 
@mymath{B}, we get:
 @dispmath{{dM\over dB} = {-2.5\over {B\times ln(10)}}}
 
 @noindent
-From the Tailor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can
-write:
+From the Taylor series (@mymath{\Delta{M}=dM/dB\times\Delta{B}}), we can write:
 
 @dispmath{\Delta{M} = \left|{-2.5\over ln(10)}\right|\times{\Delta{B}\over{B}}}
 
 @noindent
-But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise
-ratio (@mymath{S/N}), so we can write the error in magnitude in terms of
-the signal-to-noise ratio:
+But, @mymath{\Delta{B}/B} is just the inverse of the Signal-to-noise ratio 
(@mymath{S/N}), so we can write the error in magnitude in terms of the 
signal-to-noise ratio:
 
 @dispmath{ \Delta{M} = {2.5\over{S/N\times ln(10)}} }
 
-MakeCatalog uses this relation to estimate the magnitude errors. The
-signal-to-noise ratio is calculated in different ways for clumps and
-objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa
-[2015]}), but this single equation can be used to estimate the measured
-magnitude error afterwards for any type of target.
+MakeCatalog uses this relation to estimate the magnitude errors.
+The signal-to-noise ratio is calculated in different ways for clumps and 
objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}), but this single equation can be used to estimate the measured 
magnitude error afterwards for any type of target.
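+
+As a quick sketch of this last relation (in plain Python, with arbitrary signal-to-noise ratios):
+
+@example
+import math
+
+def magnitude_error(sn):
+    # Delta(M) = 2.5 / ( (S/N) * ln(10) )
+    return 2.5 / (sn * math.log(10))
+
+print(magnitude_error(5.0))      # roughly 0.217 magnitudes
+print(magnitude_error(100.0))    # roughly 0.011 magnitudes
+@end example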
 
 @item Upper limit magnitude (of each detection)
-Due to the noisy nature of data, it is possible to get arbitrarily low
-values for a faint object's brightness (or arbitrarily high
-@emph{magnitudes}). Given the scatter caused by the dataset's noise, values
-fainter than a certain level are meaningless: another similar depth
-observation will give a radically different value.
-
-For example, while the depth of the image is 32 magnitudes/pixel, a
-measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel
-object is clearly unreliable. In another similar depth image, we might
-measure a magnitude of 30 for it, and yet another might give
-33. Furthermore, due to the noise scatter so close to the depth of the
-data-set, the total brightness might actually get measured as a negative
-value, so no magnitude can be defined (recall that a magnitude is a base-10
-logarithm). This problem usually becomes relevant when the detection labels
-were not derived from the values being measured (for example when you are
-estimating colors, see @ref{MakeCatalog}).
+Due to the noisy nature of data, it is possible to get arbitrarily low values 
for a faint object's brightness (or arbitrarily high @emph{magnitudes}).
+Given the scatter caused by the dataset's noise, values fainter than a certain 
level are meaningless: another similar depth observation will give a radically 
different value.
+
+For example, while the depth of the image is 32 magnitudes/pixel, a 
measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel object is 
clearly unreliable.
+In another similar depth image, we might measure a magnitude of 30 for it, and 
yet another might give 33.
+Furthermore, due to the noise scatter so close to the depth of the data-set, 
the total brightness might actually get measured as a negative value, so no 
magnitude can be defined (recall that a magnitude is a base-10 logarithm).
+This problem usually becomes relevant when the detection labels were not 
derived from the values being measured (for example when you are estimating 
colors, see @ref{MakeCatalog}).
 
 @cindex Upper limit magnitude
 @cindex Magnitude, upper limit
-Using such unreliable measurements will directly affect our analysis, so we
-must not use the raw measurements. But how can we know how reliable a
-measurement on a given dataset is?
-
-When we confront such unreasonably faint magnitudes, there is one thing we
-can deduce: that if something actually exists here (possibly buried deep
-under the noise), it's inherent magnitude is fainter than an @emph{upper
-limit magnitude}. To find this upper limit magnitude, we place the object's
-footprint (segmentation map) over random parts of the image where there are
-no detections, so we only have pure (possibly correlated) noise, along with
-undetected objects. Doing this a large number of times will give us a
-distribution of brightness values. The standard deviation (@mymath{\sigma})
-of that distribution can be used to quantify the upper limit magnitude.
+Using such unreliable measurements will directly affect our analysis, so we 
must not use the raw measurements.
+But how can we know how reliable a measurement on a given dataset is?
+
+When we confront such unreasonably faint magnitudes, there is one thing we can deduce: that if something actually exists here (possibly buried deep under the noise), its inherent magnitude is fainter than an @emph{upper limit magnitude}.
+To find this upper limit magnitude, we place the object's footprint 
(segmentation map) over random parts of the image where there are no 
detections, so we only have pure (possibly correlated) noise, along with 
undetected objects.
+Doing this a large number of times will give us a distribution of brightness 
values.
+The standard deviation (@mymath{\sigma}) of that distribution can be used to 
quantify the upper limit magnitude.
 
 @cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular
-apertures (for example with a diameter of @mymath{N} arc-seconds). Hence,
-the upper limit was like the depth discussed above: one value for the whole
-image. The problem with this simplified approach is that the number of
-pixels in the aperture directly affects the final distribution and thus
-magnitude. Also the image correlated noise might actually create certain
-patters, so the shape of the object can also affect the final result
-result. Fortunately, with the much more advanced hardware and software of
-today, we can make customized segmentation maps for each object.
-
-When requested, MakeCatalog will randomly place each target's footprint
-over the dataset as described above and estimate the resulting
-distribution's properties (like the upper limit magnitude). The procedure
-is fully configurable with the options in @ref{Upper-limit settings}. If
-one value for the whole image is required, you can either use the surface
-brightness limit above or make a circular aperture and feed it into
-MakeCatalog to request an upper-limit magnitude for it@footnote{If you
-intend to make apertures manually and not use a detection map (for example
-from @ref{Segment}), don't forget to use the @option{--upmaskfile} to give
-NoiseChisel's output (or any a binary map, marking detected pixels, see
-@ref{NoiseChisel output}) as a mask. Otherwise, the footprints may randomly
-fall over detections, giving highly skewed distributions, with wrong
-upper-limit distributions. See The description of @option{--upmaskfile} in
-@ref{Upper-limit settings} for more.}.
+Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example with a diameter of @mymath{N} arc-seconds).
+Hence, the upper limit was like the depth discussed above: one value for the 
whole image.
+The problem with this simplified approach is that the number of pixels in the 
aperture directly affects the final distribution and thus magnitude.
+Also, the image's correlated noise might actually create certain patterns, so the shape of the object can also affect the final result.
+Fortunately, with the much more advanced hardware and software of today, we 
can make customized segmentation maps for each object.
+
+When requested, MakeCatalog will randomly place each target's footprint over 
the dataset as described above and estimate the resulting distribution's 
properties (like the upper limit magnitude).
+The procedure is fully configurable with the options in @ref{Upper-limit 
settings}.
+If one value for the whole image is required, you can either use the surface brightness limit above or make a circular aperture and feed it into MakeCatalog to request an upper-limit magnitude for it@footnote{If you intend to make apertures manually and not use a detection map (for example from @ref{Segment}), don't forget to use the @option{--upmaskfile} option to give NoiseChisel's output (or any binary map marking detected pixels, see @ref{NoiseChisel output}) as a mask.
+Otherwise, the footprints may randomly fall over detections, giving highly skewed distributions and wrong upper-limit estimates.
+See the description of @option{--upmaskfile} in @ref{Upper-limit settings} for more.}.
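+
+The core of the random-placement procedure can be sketched as below (in Python with NumPy, over a purely synthetic noise image and a simple square footprint; MakeCatalog itself uses each target's actual footprint and the options of @ref{Upper-limit settings}):
+
+@example
+import numpy as np
+
+rng   = np.random.default_rng(1)
+noise = rng.normal(0.0, 0.02, size=(1000, 1000))   # synthetic noise
+z     = 22.5                                 # hypothetical zeropoint
+
+# Place a 10x10 pixel footprint at many random positions and keep
+# the sum of the pixel values under it each time.
+sums = []
+for i in range(1000):
+    x = rng.integers(0, 990)
+    y = rng.integers(0, 990)
+    sums.append(noise[y:y+10, x:x+10].sum())
+
+# The distribution's standard deviation gives the 1-sigma
+# upper-limit magnitude for this footprint.
+sigma = np.std(sums)
+print(-2.5 * np.log10(sigma) + z)
+@end example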
 
 @end table
 
@@ -15302,33 +14189,25 @@ upper-limit distributions. See The description of 
@option{--upmaskfile} in
 @node Measuring elliptical parameters, Adding new columns to MakeCatalog, 
Quantifying measurement limits, MakeCatalog
 @subsection Measuring elliptical parameters
 
-The shape or morphology of a target is one of the most commonly desired
-parameters of a target. Here, we will review the derivation of the most
-basic/simple morphological parameters: the elliptical parameters for a set
-of labeled pixels. The elliptical parameters are: the (semi-)major axis,
-the (semi-)minor axis and the position angle along with the central
-position of the profile. The derivations below follow the SExtractor manual
-derivations with some added explanations for easier reading.
+The shape or morphology of a target is one of its most commonly desired parameters.
+Here, we will review the derivation of the most basic/simple morphological 
parameters: the elliptical parameters for a set of labeled pixels.
+The elliptical parameters are: the (semi-)major axis, the (semi-)minor axis 
and the position angle along with the central position of the profile.
+The derivations below follow the SExtractor manual derivations with some added 
explanations for easier reading.
 
 @cindex Moments
-Let's begin with one dimension for simplicity: Assume we have a set of
-@mymath{N} values @mymath{B_i} (for example showing the spatial
-distribution of a target's brightness), each at position @mymath{x_i}. The
-simplest parameter we can define is the geometric center of the object
-(@mymath{x_g}) (ignoring the brightness values):
-@mymath{x_g=(\sum_ix_i)/N}. @emph{Moments} are defined to incorporate both
-the value (brightness) and position of the data. The first moment can be
-written as:
+Let's begin with one dimension for simplicity: Assume we have a set of 
@mymath{N} values @mymath{B_i} (for example showing the spatial distribution of 
a target's brightness), each at position @mymath{x_i}.
+The simplest parameter we can define is the geometric center of the object 
(@mymath{x_g}) (ignoring the brightness values): @mymath{x_g=(\sum_ix_i)/N}.
+@emph{Moments} are defined to incorporate both the value (brightness) and 
position of the data.
+The first moment can be written as:
 
 @dispmath{\overline{x}={\sum_iB_ix_i \over \sum_iB_i}}
 
 @cindex Variance
 @cindex Second moment
 @noindent
-This is essentially the weighted (by @mymath{B_i}) mean position. The
-geometric center (@mymath{x_g}, defined above) is a special case of this
-with all @mymath{B_i=1}. The second moment is essentially the variance of
-the distribution:
+This is essentially the weighted (by @mymath{B_i}) mean position.
+The geometric center (@mymath{x_g}, defined above) is a special case of this 
with all @mymath{B_i=1}.
+The second moment is essentially the variance of the distribution:
 
 @dispmath{\overline{x^2}\equiv{\sum_iB_i(x_i-\overline{x})^2 \over
         \sum_iB_i} = {\sum_iB_ix_i^2 \over \sum_iB_i} -
@@ -15337,56 +14216,34 @@ the distribution:
 
 @cindex Standard deviation
 @noindent
-The last step was done from the definition of @mymath{\overline{x}}. Hence,
-the square root of @mymath{\overline{x^2}} is the spatial standard
-deviation (along the one-dimension) of this particular brightness
-distribution (@mymath{B_i}). Crudely (or qualitatively), you can think of
-its square root as the distance (from @mymath{\overline{x}}) which contains
-a specific amount of the flux (depending on the @mymath{B_i}
-distribution). Similar to the first moment, the geometric second moment can
-be found by setting all @mymath{B_i=1}. So while the first moment
-quantified the position of the brightness distribution, the second moment
-quantifies how that brightness is dispersed about the first moment. In
-other words, it quantifies how ``sharp'' the object's image is.
+The last step was done from the definition of @mymath{\overline{x}}.
+Hence, the square root of @mymath{\overline{x^2}} is the spatial standard deviation (along the one dimension) of this particular brightness distribution (@mymath{B_i}).
+Crudely (or qualitatively), you can think of this standard deviation as the distance (from @mymath{\overline{x}}) which contains a specific amount of the flux (depending on the @mymath{B_i} distribution).
+Similar to the first moment, the geometric second moment can be found by 
setting all @mymath{B_i=1}.
+So while the first moment quantified the position of the brightness 
distribution, the second moment quantifies how that brightness is dispersed 
about the first moment.
+In other words, it quantifies how ``sharp'' the object's image is.
 
 @cindex Floating point error
-Before continuing to two dimensions and the derivation of the elliptical
-parameters, let's pause for an important implementation technicality. You
-can ignore this paragraph and the next two if you don't want to implement
-these concepts. The basic definition (first definition of
-@mymath{\overline{x^2}} above) can be used without any major
-problem. However, using this fraction requires two runs over the data: one
-run to find @mymath{\overline{x}} and another run to find
-@mymath{\overline{x^2}} from @mymath{\overline{x}}, this can be slow. The
-advantage of the last fraction above, is that we can estimate both the
-first and second moments in one run (since the @mymath{-\overline{x}^2}
-term can easily be added later).
-
-The logarithmic nature of floating point number digitization creates a
-complication however: suppose the object is located between pixels 10000
-and 10020. Hence the target's pixels are only distributed over 20 pixels
-(with a standard deviation @mymath{<20}), while the mean has a value of
-@mymath{\sim10000}. The @mymath{\sum_iB_i^2x_i^2} will go to very very
-large values while the individual pixel differences will be orders of
-magnitude smaller. This will lower the accuracy of our calculation due to
-the limited accuracy of floating point operations. The variance only
-depends on the distance of each point from the mean, so we can shift all
-position by a constant/arbitrary @mymath{K} which is much closer to the
-mean: @mymath{\overline{x-K}=\overline{x}-K}. Hence we can calculate the
-second order moment using:
+Before continuing to two dimensions and the derivation of the elliptical 
parameters, let's pause for an important implementation technicality.
+You can ignore this paragraph and the next two if you don't want to implement 
these concepts.
+The basic definition (first definition of @mymath{\overline{x^2}} above) can 
be used without any major problem.
+However, using this fraction requires two runs over the data: one run to find @mymath{\overline{x}} and another run to find @mymath{\overline{x^2}} from @mymath{\overline{x}}; this can be slow.
+The advantage of the last fraction above is that we can estimate both the first and second moments in one run (since the @mymath{-\overline{x}^2} term can easily be added later).
+
+The logarithmic nature of floating point number digitization creates a complication, however: suppose the object is located between pixels 10000 and 10020.
+Hence the target's pixels are only distributed over 20 pixels (with a standard 
deviation @mymath{<20}), while the mean has a value of @mymath{\sim10000}.
+The @mymath{\sum_iB_ix_i^2} term will go to very large values, while the individual pixel differences will be orders of magnitude smaller.
+This will lower the accuracy of our calculation due to the limited accuracy of 
floating point operations.
+The variance only depends on the distance of each point from the mean, so we can shift all positions by a constant/arbitrary @mymath{K} which is much closer to the mean: @mymath{\overline{x-K}=\overline{x}-K}.
+Hence we can calculate the second order moment using:
 
 @dispmath{ \overline{x^2}={\sum_iB_i(x_i-K)^2 \over \sum_iB_i} -
            (\overline{x}-K)^2 }
 
 @noindent
-The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of
-squares will involve smaller numbers), as long as @mymath{K} is within the
-object limits (in the example above: @mymath{10000\leq{K}\leq10020}), the
-floating point error induced in our calculation will be negligible. For the
-most simplest implementation, MakeCatalog takes @mymath{K} to be the
-smallest position of the object in each dimension. Since @mymath{K} is
-arbitrary and an implementation/technical detail, we will ignore it for the
-remainder of this discussion.
+The closer @mymath{K} is to @mymath{\overline{x}}, the better (the sums of squares will involve smaller numbers); as long as @mymath{K} is within the object limits (in the example above: @mymath{10000\leq{K}\leq10020}), the floating point error induced in our calculation will be negligible.
+For the simplest implementation, MakeCatalog takes @mymath{K} to be the smallest position of the object in each dimension.
+Since @mymath{K} is arbitrary and an implementation/technical detail, we will 
ignore it for the remainder of this discussion.
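+
+The trick is easy to verify numerically; here is a minimal sketch (in Python with NumPy, for the hypothetical 20-pixel object above):
+
+@example
+import numpy as np
+
+x = np.arange(10000.0, 10020.0)   # positions of the object's pixels
+B = np.ones(x.size)               # a flat brightness distribution
+
+K    = x.min()                 # smallest position, as in MakeCatalog
+xs   = x - K                   # shifted positions
+mean = (B * xs).sum() / B.sum() + K
+var  = (B * xs**2).sum() / B.sum() - (mean - K)**2
+print(mean, var)
+@end example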
 
 In two dimensions, the mean and variances can be written as:
 
@@ -15400,20 +14257,11 @@ In two dimensions, the mean and variances can be 
written as:
           \overline{xy}={\sum_iB_ix_iy_i \over \sum_iB_i} -
           \overline{x}\times\overline{y}}
 
-If an elliptical profile's major axis exactly lies along the @mymath{x}
-axis, then @mymath{\overline{x^2}} will be directly proportional with the
-profile's major axis, @mymath{\overline{y^2}} with its minor axis and
-@mymath{\overline{xy}=0}. However, in reality we are not that lucky and
-(assuming galaxies can be parameterized as an ellipse) the major axis of
-galaxies can be in any direction on the image (in fact this is one of the
-core principles behind weak-lensing by shear estimation). So the purpose of
-the remainder of this section is to define a strategy to measure the
-position angle and axis ratio of some randomly positioned ellipses in an
-image, using the raw second moments that we have calculated above in our
-image coordinates.
-
-Let's assume we have rotated the galaxy by @mymath{\theta}, the new second
-order moments are:
+If an elliptical profile's major axis exactly lies along the @mymath{x} axis, then @mymath{\overline{x^2}} will be directly proportional to the profile's major axis, @mymath{\overline{y^2}} to its minor axis, and @mymath{\overline{xy}=0}.
+However, in reality we are not that lucky and (assuming galaxies can be 
parameterized as an ellipse) the major axis of galaxies can be in any direction 
on the image (in fact this is one of the core principles behind weak-lensing by 
shear estimation).
+So the purpose of the remainder of this section is to define a strategy to 
measure the position angle and axis ratio of some randomly positioned ellipses 
in an image, using the raw second moments that we have calculated above in our 
image coordinates.
+
+Let's assume we have rotated the galaxy by @mymath{\theta}, the new second 
order moments are:
 
 @dispmath{\overline{x_\theta^2} = \overline{x^2}\cos^2\theta +
            \overline{y^2}\sin^2\theta -
@@ -15426,8 +14274,7 @@ order moments are:
            \overline{xy}(\cos^2\theta-\sin^2\theta)}
 
 @noindent
-The best @mymath{\theta} (@mymath{\theta_0}, where major axis lies
-along the @mymath{x_\theta} axis) can be found by:
+The best @mymath{\theta} (@mymath{\theta_0}, where major axis lies along the 
@mymath{x_\theta} axis) can be found by:
 
 @dispmath{\left.{\partial \overline{x_\theta^2} \over \partial 
\theta}\right|_{\theta_0}=0}
 Taking the derivative, we get:
@@ -15439,18 +14286,10 @@ Taking the derivative, we get:
 
 @cindex Position angle
 @noindent
-MakeCatalog uses the standard C math library's @code{atan2} function
-to estimate @mymath{\theta_0}, which we define as the position angle
-of the ellipse. To recall, this is the angle of the major axis of the
-ellipse with the @mymath{x} axis. By definition, when the elliptical
-profile is rotated by @mymath{\theta_0}, then
-@mymath{\overline{xy_{\theta_0}}=0},
-@mymath{\overline{x_{\theta_0}^2}} will be the extent of the maximum
-variance and @mymath{\overline{y_{\theta_0}^2}} the extent of the
-minimum variance (which are perpendicular for an ellipse). Replacing
-@mymath{\theta_0} in the equations above for
-@mymath{\overline{x_\theta}} and @mymath{\overline{y_\theta}}, we can
-get the semi-major (@mymath{A}) and semi-minor (@mymath{B}) lengths:
+MakeCatalog uses the standard C math library's @code{atan2} function to 
estimate @mymath{\theta_0}, which we define as the position angle of the 
ellipse.
+To recall, this is the angle of the major axis of the ellipse with the 
@mymath{x} axis.
+By definition, when the elliptical profile is rotated by @mymath{\theta_0}, 
then @mymath{\overline{xy_{\theta_0}}=0}, @mymath{\overline{x_{\theta_0}^2}} 
will be the extent of the maximum variance and 
@mymath{\overline{y_{\theta_0}^2}} the extent of the minimum variance (which 
are perpendicular for an ellipse).
+Replacing @mymath{\theta_0} in the equations above for 
@mymath{\overline{x_\theta}} and @mymath{\overline{y_\theta}}, we can get the 
semi-major (@mymath{A}) and semi-minor (@mymath{B}) lengths:
 
 @dispmath{A^2\equiv\overline{x_{\theta_0}^2}= {\overline{x^2} +
 \overline{y^2} \over 2} + \sqrt{\left({\overline{x^2}-\overline{y^2} \over 
2}\right)^2 + \overline{xy}^2}}
@@ -15458,14 +14297,8 @@ get the semi-major (@mymath{A}) and semi-minor 
(@mymath{B}) lengths:
 @dispmath{B^2\equiv\overline{y_{\theta_0}^2}= {\overline{x^2} +
 \overline{y^2} \over 2} - \sqrt{\left({\overline{x^2}-\overline{y^2} \over 
2}\right)^2 + \overline{xy}^2}}
 
-As a summary, it is important to remember that the units of @mymath{A} and
-@mymath{B} are in pixels (the standard deviation of a positional
-distribution) and that they represent the spatial light distribution of the
-object in both image dimensions (rotated by @mymath{\theta_0}). When the
-object cannot be represented as an ellipse, this interpretation breaks
-down: @mymath{\overline{xy_{\theta_0}}\neq0} and
-@mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum
-variance.
+In summary, it is important to remember that the units of @mymath{A} and @mymath{B} are pixels (the standard deviation of a positional distribution) and that they represent the spatial light distribution of the object in both image dimensions (rotated by @mymath{\theta_0}).
+When the object cannot be represented as an ellipse, this interpretation 
breaks down: @mymath{\overline{xy_{\theta_0}}\neq0} and 
@mymath{\overline{y_{\theta_0}^2}} will not be the direction of minimum 
variance.
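+
+For illustration (a minimal C sketch with hypothetical names, not Gnuastro's actual implementation), the three raw second moments can be converted to the position angle and semi-axes with the relations above:
+
+@example
+#include <math.h>
+
+/* `xx', `yy' and `xy' are the raw second order moments above. */
+void
+moments_to_ellipse(double xx, double yy, double xy,
+                   double *theta, double *a, double *b)
+@{
+  double half = (xx+yy)/2.0;
+  double root = sqrt( (xx-yy)*(xx-yy)/4.0 + xy*xy );
+  *theta = 0.5 * atan2(2.0*xy, xx-yy);   /* Position angle.      */
+  *a     = sqrt(half+root);              /* Semi-major (pixels). */
+  *b     = sqrt(half-root);              /* Semi-minor (pixels). */
+@}
+@end example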
 
 
 
@@ -15474,91 +14307,58 @@ variance.
 @node Adding new columns to MakeCatalog, Invoking astmkcatalog, Measuring 
elliptical parameters, MakeCatalog
 @subsection Adding new columns to MakeCatalog
 
-MakeCatalog is designed to allow easy addition of different measurements
-over a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi
-[2016]}). A check-list style description of necessary steps to do that is
-described in this section. The common development characteristics of
-MakeCatalog and other Gnuastro programs is explained in
-@ref{Developing}. We strongly encourage you to have a look at that chapter
-to greatly simplify your navigation in the code. After adding and testing
-your column, you are most welcome (and encouraged) to share it with us so
-we can add to the next release of Gnuastro for everyone else to also
-benefit from your efforts.
-
-MakeCatalog will first pass over each label's pixels two times and do
-necessary raw/internal calculations. Once the passes are done, it will use
-the raw information for filling the final catalog's columns. In the first
-pass it will gather mainly object information and in the second run, it
-will mainly focus on the clumps, or any other measurement that needs an
-output from the first pass. These two passes are designed to be raw
-summations: no extra processing. This will allow parallel processing and
-simplicity/clarity. So if your new calculation, needs new raw information
-from the pixels, then you will need to also modify the respective
-@code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions
-(both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns
-in @file{main.h} (hopefully the comments in the code are clear enough).
-
-In all these different places, the final columns are sorted in the same
-order (same order as @ref{Invoking astmkcatalog}). This allows a particular
-column/option to be easily found in all steps. Therefore in adding your new
-option, be sure to keep it in the same relative place in the list in all
-the separate places (it doesn't necessarily have to be in the end), and
-near conceptually similar options.
+MakeCatalog is designed to allow easy addition of different measurements over 
a labeled image (see @url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}).
+A check-list of the necessary steps to do that is given in this section.
+The common development characteristics of MakeCatalog and other Gnuastro programs are explained in @ref{Developing}.
+We strongly encourage you to have a look at that chapter to greatly simplify 
your navigation in the code.
+After adding and testing your column, you are most welcome (and encouraged) to share it with us so we can add it to the next release of Gnuastro for everyone else to also benefit from your efforts.
+
+MakeCatalog will first pass over each label's pixels two times and do 
necessary raw/internal calculations.
+Once the passes are done, it will use the raw information for filling the 
final catalog's columns.
+In the first pass it will mainly gather object information and in the second pass it will mainly focus on the clumps, or any other measurement that needs an output from the first pass.
+These two passes are designed to be raw summations: no extra processing.
+This will allow parallel processing and simplicity/clarity.
+So if your new calculation needs new raw information from the pixels, then you will need to also modify the respective @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass} functions (both in @file{bin/mkcatalog/mkcatalog.c}) and define new raw table columns in @file{main.h} (hopefully the comments in the code are clear enough).
+
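+As a rough illustration of this raw-summation design (a hypothetical sketch, not Gnuastro's actual code), a pass only accumulates per-pixel terms; any derived quantity is left to the column-filling stage:
+
+@example
+/* Hypothetical first-pass accumulators for one pixel of label `lab'. */
+raw[lab][RAW_NUM]    += 1;           /* Pixel counter.                */
+raw[lab][RAW_SUMVAL] += value;       /* Sum of pixel values.          */
+raw[lab][RAW_SUMVX]  += value * x;   /* For the flux-weighted x.      */
+@end example
+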
+In all these different places, the final columns are sorted in the same order 
(same order as @ref{Invoking astmkcatalog}).
+This allows a particular column/option to be easily found in all steps.
+Therefore, when adding your new option, be sure to keep it in the same relative place in the list in all the separate places (it doesn't necessarily have to be at the end), and near conceptually similar options.
 
 @table @file
 
 @item main.h
-The @code{objectcols} and @code{clumpcols} enumerated variables
-(@code{enum}) define the raw/internal calculation columns. If your new
-column requires new raw calculations, add a row to the respective list. If
-your calculation requires any other settings parameters, you should add a
-variable to the @code{mkcatalogparams} structure.
+The @code{objectcols} and @code{clumpcols} enumerated variables (@code{enum}) 
define the raw/internal calculation columns.
+If your new column requires new raw calculations, add a row to the respective 
list.
+If your calculation requires any other settings, you should add a variable to the @code{mkcatalogparams} structure.
 
 @item ui.c
-If the new column needs raw calculations (an entry was added in
-@code{objectcols} and @code{clumpcols}), specify which inputs it needs in
-@code{ui_necessary_inputs}, similar to the other options. Afterwards, if
-your column includes any particular settings (you needed to add a variable
-to the @code{mkcatalogparams} structure in @file{main.h}), you should do
-the sanity checks and preparations for it here.
+If the new column needs raw calculations (an entry was added in 
@code{objectcols} and @code{clumpcols}), specify which inputs it needs in 
@code{ui_necessary_inputs}, similar to the other options.
+Afterwards, if your column includes any particular settings (you needed to add 
a variable to the @code{mkcatalogparams} structure in @file{main.h}), you 
should do the sanity checks and preparations for it here.
 
 @item ui.h
-The @code{option_keys_enum} associates a unique value for each option to
-MakeCatalog. The options that have a short option version, the single
-character short comment is used for the value. Those that don't have a
-short option version, get a large integer automatically. You should add a
-variable here to identify your desired column.
+The @code{option_keys_enum} associates a unique value with each option to MakeCatalog.
+For the options that have a short version, the single-character short option is used for the value.
+Those that don't have a short option version get a large integer automatically.
+You should add a variable here to identify your desired column.
 
 
 @cindex GNU C library
 @item args.h
-This file specifies all the parameters for the GNU C library, Argp
-structure that is in charge of reading the user's options. To define your
-new column, just copy an existing set of parameters and change the first,
-second and 5th values (the only ones that differ between all the columns),
-you should use the macro you defined in @file{ui.h} here.
+This file specifies all the parameters for the GNU C library's Argp structure that is in charge of reading the user's options.
+To define your new column, just copy an existing set of parameters and change the first, second and fifth values (the only ones that differ between all the columns); you should use the macro you defined in @file{ui.h} here.
 
 
 @item columns.c
-This file contains the main definition and high-level calculation of your
-new column through the @code{columns_define_alloc} and @code{columns_fill}
-functions. In the first, you specify the basic information about the
-column: its name, units, comments, type (see @ref{Numeric data types}) and
-how it should be printed if the output is a text file. You should also
-specify the raw/internal columns that are necessary for this column here as
-the many existing examples show. Through the types for objects and rows,
-you can specify if this column is only for clumps, objects or both.
-
-The second main function (@code{columns_fill}) writes the final value into
-the appropriate column for each object and clump. As you can see in the
-many existing examples, you can define your processing on the raw/internal
-calculations here and save them in the output.
+This file contains the main definition and high-level calculation of your new 
column through the @code{columns_define_alloc} and @code{columns_fill} 
functions.
+In the first, you specify the basic information about the column: its name, 
units, comments, type (see @ref{Numeric data types}) and how it should be 
printed if the output is a text file.
+You should also specify the raw/internal columns that are necessary for this 
column here as the many existing examples show.
+Through the types for objects and rows, you can specify whether this column is only for clumps, only for objects, or for both.
+
+The second main function (@code{columns_fill}) writes the final value into the 
appropriate column for each object and clump.
+As you can see in the many existing examples, you can define your processing 
on the raw/internal calculations here and save them in the output.
 
 @item mkcatalog.c
-As described before, this file contains the two main MakeCatalog
-work-horses: @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass},
-their names are descriptive enough and their internals are also clear and
-heavily commented.
+As described before, this file contains the two main MakeCatalog work-horses: @code{mkcatalog_first_pass} and @code{mkcatalog_second_pass}; their names are descriptive enough and their internals are also clear and heavily commented.
 
 @item doc/gnuastro.texi
 Update this manual and add a description for the new column.
@@ -15572,9 +14372,8 @@ Update this manual and add a description for the new 
column.
 @node Invoking astmkcatalog,  , Adding new columns to MakeCatalog, MakeCatalog
 @subsection Invoking MakeCatalog
 
-MakeCatalog will do measurements and produce a catalog from a labeled
-dataset and optional values dataset(s). The executable name is
-@file{astmkcatalog} with the following general template
+MakeCatalog will do measurements and produce a catalog from a labeled dataset 
and optional values dataset(s).
+The executable name is @file{astmkcatalog} with the following general template
 
 @example
 $ astmkcatalog [OPTION ...] InputImage.fits
@@ -15606,23 +14405,16 @@ $ astmkcatalog K_segmented.fits --hdu=DETECTIONS 
--clumpscat     \
 
 @cindex Gaussian
 @noindent
-If MakeCatalog is to do processing (not printing help or option values), an
-input labeled image should be provided. The options described in this
-section are those that are particular to MakeProfiles. For operations that
-MakeProfiles shares with other programs (mainly involving input/output or
-general processing steps), see @ref{Common options}. Also see @ref{Common
-program behavior} for some general characteristics of all Gnuastro programs
-including MakeCatalog.
-
-The various measurements/columns of MakeCatalog are requested as options,
-either on the command-line or in configuration files, see
-@ref{Configuration files}. The full list of available columns is available
-in @ref{MakeCatalog measurements}. Depending on the requested columns,
-MakeCatalog needs more than one input dataset, for more details, please see
-@ref{MakeCatalog inputs and basic settings}. The upper-limit measurements
-in particular need several configuration options which are thoroughly
-discussed in @ref{Upper-limit settings}. Finally, in @ref{MakeCatalog
-output} the output file(s) created by MakeCatalog are discussed.
+If MakeCatalog is to do processing (not printing help or option values), an 
input labeled image should be provided.
+The options described in this section are those that are particular to MakeCatalog.
+For operations that MakeCatalog shares with other programs (mainly involving input/output or general processing steps), see @ref{Common options}.
+Also see @ref{Common program behavior} for some general characteristics of all 
Gnuastro programs including MakeCatalog.
+
+The various measurements/columns of MakeCatalog are requested as options, either on the command-line or in configuration files (see @ref{Configuration files}).
+The full list of available columns is given in @ref{MakeCatalog measurements}.
+Depending on the requested columns, MakeCatalog may need more than one input dataset; for more details, please see @ref{MakeCatalog inputs and basic settings}.
+The upper-limit measurements in particular need several configuration options 
which are thoroughly discussed in @ref{Upper-limit settings}.
+Finally, in @ref{MakeCatalog output} the output file(s) created by MakeCatalog 
are discussed.
 
 @menu
 * MakeCatalog inputs and basic settings::  Input files and basic settings.
@@ -15634,149 +14426,93 @@ output} the output file(s) created by MakeCatalog 
are discussed.
 @node MakeCatalog inputs and basic settings, Upper-limit settings, Invoking 
astmkcatalog, Invoking astmkcatalog
 @subsubsection MakeCatalog inputs and basic settings
 
-MakeCatalog works by using a localized/labeled dataset (see
-@ref{MakeCatalog}). This dataset maps/labels pixels to a specific target
-(row number in the final catalog) and is thus the only necessary input
-dataset to produce a minimal catalog in any situation. Because it only has
-labels/counters, it must have an integer type (see @ref{Numeric data
-types}), see below if your labels are in a floating point container. When
-the requested measurements only need this dataset (for example
-@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't
-read any more datasets.
-
-Low-level measurements that only use the labeled image are rarely
-sufficient for any high-level science case. Therefore necessary input
-datasets depend on the requested columns in each run. For example, let's
-assume you want the brightness/magnitude and signal-to-noise ratio of your
-labeled regions. For these columns, you will also need to provide an extra
-dataset containing values for every pixel of the labeled input (to measure
-brightness) and another for the Sky standard deviation (to measure
-error). All such auxiliary input files have to have the same size (number
-of pixels in each dimension) as the input labeled image. Their numeric data
-type is irrelevant (they will be converted to 32-bit floating point
-internally). For the full list of available measurements, see
-@ref{MakeCatalog measurements}.
-
-The ``values'' dataset is used for measurements like brightness/magnitude,
-or flux-weighted positions. If it is a real image, by default it is assumed
-to be already Sky-subtracted prior to running MakeCatalog. If it isn't, you
-use the @option{--subtractsky} option to, so MakeCatalog reads and
-subtracts the Sky dataset before any processing. To obtain the Sky value,
-you can use the @option{--sky} option of @ref{Statistics}, but the best
-recommended method is @ref{NoiseChisel}, see @ref{Sky value}.
-
-MakeCatalog can also do measurements on sub-structures of detections. In
-other words, it can produce two catalogs. Following the nomenclature of
-Segment (see @ref{Segment}), the main labeled input dataset is known as
-``object'' labels and the (optional) sub-structure input dataset is known
-as ``clumps''. If MakeCatalog is run with the @option{--clumpscat} option,
-it will also need a labeled image containing clumps, similar to what
-Segment produces (see @ref{Segment output}). Since clumps are defined
-within detected regions (they exist over signal, not noise), MakeCatalog
-uses their boundaries to subtract the level of signal under them.
-
-There are separate options to explicitly request a file name and
-HDU/extension for each of the required input datasets as fully described
-below (with the @option{--*file} format). When each dataset is in a
-separate file, these options are necessary. However, one great advantage of
-the FITS file format (that is heavily used in astronomy) is that it allows
-the storage of multiple datasets in one file. So in most situations (for
-example if you are using the outputs of @ref{NoiseChisel} or
-@ref{Segment}), all the necessary input datasets can be in one file.
-
-When none of the @option{--*file} options are given, MakeCatalog will
-assume the necessary input datasets are in the file given as its argument
-(without any option). When the Sky or Sky standard deviation datasets are
-necessary and the only @option{--*file} option called is
-@option{--valuesfile}, MakeCatalog will search for these datasets (with the
-default/given HDUs) in the file given to @option{--valuesfile} (before
-looking into the main argument file).
-
-When the clumps image (necessary with the @option{--clumpscat} option) is
-used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword
-for the total number of clumps in the image (irrespective of how many
-objects there are). If its not present, it will count them and possibly
-re-label the clumps so the clump labels always start with 1 and finish with
-the total number of clumps in each object. The re-labeled clumps image will
-be stored with the @file{-clumps-relab.fits} suffix. This can slightly
-slow-down the run.
-
-Note that @code{NUMLABS} is automatically written by Segment in its
-outputs, so if you are feeding Segment's clump labels, you can benefit from
-the improved speed. Otherwise, if you are creating the clumps label dataset
-manually, it may be good to include the @code{NUMLABS} keyword in its
-header and also be sure that there is no gap in the clump labels. For
-example if an object has three clumps, they are labeled as 1, 2, 3. If they
-are labeled as 1, 3, 4, or any other combination of three positive integers
-that aren't an increment of the previous, you might get unknown behavior.
-
-It may happen that your labeled objects image was created with a program
-that only outputs floating point files. However, you know it only has
-integer valued pixels that are stored in a floating point container. In
-such cases, you can use Gnuastro's Arithmetic program (see
-@ref{Arithmetic}) to change the numerical data type of the image
-(@file{float.fits}) to an integer type image (@file{int.fits}) with a
-command like below:
+MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
+This dataset maps/labels pixels to a specific target (row number in the final 
catalog) and is thus the only necessary input dataset to produce a minimal 
catalog in any situation.
+Because it only has labels/counters, it must have an integer type (see @ref{Numeric data types}); see below if your labels are in a floating point container.
+When the requested measurements only need this dataset (for example 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't 
read any more datasets.
+
+Low-level measurements that only use the labeled image are rarely sufficient 
for any high-level science case.
+Therefore, the necessary input datasets depend on the requested columns in each run.
+For example, let's assume you want the brightness/magnitude and 
signal-to-noise ratio of your labeled regions.
+For these columns, you will also need to provide an extra dataset containing 
values for every pixel of the labeled input (to measure brightness) and another 
for the Sky standard deviation (to measure error).
+All such auxiliary input files have to have the same size (number of pixels in 
each dimension) as the input labeled image.
+Their numeric data type is irrelevant (they will be converted to 32-bit 
floating point internally).
+For the full list of available measurements, see @ref{MakeCatalog 
measurements}.
+
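+For example (an illustrative command, with hypothetical file names), asking for the magnitude and signal-to-noise ratio columns means that both a values dataset and a Sky standard deviation dataset must be given:
+
+@example
+$ astmkcatalog seg.fits --valuesfile=image.fits --instd=std.fits \
+               --zeropoint=25.0 --ids --magnitude --sn
+@end example
+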
+The ``values'' dataset is used for measurements like brightness/magnitude, or 
flux-weighted positions.
+If it is a real image, by default it is assumed to be already Sky-subtracted 
prior to running MakeCatalog.
+If it isn't, you can use the @option{--subtractsky} option so that MakeCatalog reads and subtracts the Sky dataset before any processing.
+To obtain the Sky value, you can use the @option{--sky} option of @ref{Statistics}, but the recommended method is @ref{NoiseChisel} (see @ref{Sky value}).
+
+MakeCatalog can also do measurements on sub-structures of detections.
+In other words, it can produce two catalogs.
+Following the nomenclature of Segment (see @ref{Segment}), the main labeled 
input dataset is known as ``object'' labels and the (optional) sub-structure 
input dataset is known as ``clumps''.
+If MakeCatalog is run with the @option{--clumpscat} option, it will also need 
a labeled image containing clumps, similar to what Segment produces (see 
@ref{Segment output}).
+Since clumps are defined within detected regions (they exist over signal, not 
noise), MakeCatalog uses their boundaries to subtract the level of signal under 
them.
+
+There are separate options to explicitly request a file name and HDU/extension 
for each of the required input datasets as fully described below (with the 
@option{--*file} format).
+When each dataset is in a separate file, these options are necessary.
+However, one great advantage of the FITS file format (that is heavily used in 
astronomy) is that it allows the storage of multiple datasets in one file.
+So in most situations (for example if you are using the outputs of 
@ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in 
one file.
+
+When none of the @option{--*file} options are given, MakeCatalog will assume 
the necessary input datasets are in the file given as its argument (without any 
option).
+When the Sky or Sky standard deviation datasets are necessary and the only 
@option{--*file} option called is @option{--valuesfile}, MakeCatalog will 
search for these datasets (with the default/given HDUs) in the file given to 
@option{--valuesfile} (before looking into the main argument file).
+
+When the clumps image (necessary with the @option{--clumpscat} option) is 
used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword for 
the total number of clumps in the image (irrespective of how many objects there 
are).
+If it's not present, MakeCatalog will count them and possibly re-label the clumps so the clump labels always start with 1 and finish with the total number of clumps in each object.
+The re-labeled clumps image will be stored with the @file{-clumps-relab.fits} 
suffix.
+This can slightly slow down the run.
+
+Note that @code{NUMLABS} is automatically written by Segment in its outputs, 
so if you are feeding Segment's clump labels, you can benefit from the improved 
speed.
+Otherwise, if you are creating the clumps label dataset manually, it may be 
good to include the @code{NUMLABS} keyword in its header and also be sure that 
there is no gap in the clump labels.
+For example, if an object has three clumps, they must be labeled as 1, 2, 3.
+If they are labeled as 1, 3, 4, or any other combination of three positive integers that aren't consecutive, you might get unexpected behavior.
+
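+For example (an illustrative command; the number depends on your dataset), you can write this keyword with Gnuastro's Fits program:
+
+@example
+$ astfits clumps.fits --update=NUMLABS,10,"Total number of clumps"
+@end example
+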
+It may happen that your labeled objects image was created with a program that 
only outputs floating point files.
+However, you know it only has integer valued pixels that are stored in a 
floating point container.
+In such cases, you can use Gnuastro's Arithmetic program (see 
@ref{Arithmetic}) to change the numerical data type of the image 
(@file{float.fits}) to an integer type image (@file{int.fits}) with a command 
like below:
 
 @example
 @command{$ astarithmetic float.fits int32 --output=int.fits}
 @end example
 
-To summarize: if the input file to MakeCatalog is the default/full output
-of Segment (see @ref{Segment output}) you don't have to worry about any of
-the @option{--*file} options below. You can just give Segment's output file
-to MakeCatalog as described in @ref{Invoking astmkcatalog}. To feed
-NoiseChisel's output into MakeCatalog, just change the labeled dataset's
-header (with @option{--hdu=DETECTIONS}). The full list of input dataset
-options and general setting options are described below.
+To summarize: if the input file to MakeCatalog is the default/full output of 
Segment (see @ref{Segment output}) you don't have to worry about any of the 
@option{--*file} options below.
+You can just give Segment's output file to MakeCatalog as described in 
@ref{Invoking astmkcatalog}.
+To feed NoiseChisel's output into MakeCatalog, just change the labeled 
dataset's header (with @option{--hdu=DETECTIONS}).
+The full list of input dataset options and general setting options are 
described below.
 
 @table @option
 
 @item -l STR
 @itemx --clumpsfile=STR
-The file containing the labeled clumps dataset when @option{--clumpscat} is
-called (see @ref{MakeCatalog output}). When @option{--clumpscat} is called,
-but this option isn't, MakeCatalog will look into the main input file
-(given as an argument) for the required extension/HDU (value to
-@option{--clumpshdu}).
+The file containing the labeled clumps dataset when @option{--clumpscat} is 
called (see @ref{MakeCatalog output}).
+When @option{--clumpscat} is called, but this option isn't, MakeCatalog will 
look into the main input file (given as an argument) for the required 
extension/HDU (value to @option{--clumpshdu}).
 
 @item --clumpshdu=STR
-The HDU/extension of the clump labels dataset. Only pixels with values
-above zero will be considered. The clump labels dataset has to be an
-integer data type (see @ref{Numeric data types}) and only pixels with a
-value larger than zero will be used. See @ref{Segment output} for a
-description of the expected format.
+The HDU/extension of the clump labels dataset.
+The clump labels dataset has to be an integer data type (see @ref{Numeric data types}) and only pixels with a value larger than zero will be used.
+See @ref{Segment output} for a description of the expected format.
 
 @item -v STR
 @itemx --valuesfile=STR
-The file name of the (sky-subtracted) values dataset. When any of the
-columns need values to associate with the input labels (for example to
-measure the brightness/magnitude of a galaxy), MakeCatalog will look into a
-``values'' for the respective pixel values. In most common processing, this
-is the actual astronomical image that the labels were defined, or detected,
-over. The HDU/extension of this dataset in the given file can be specified
-with @option{--valueshdu}. If this option is not called, MakeCatalog will
-look for the given extension in the main input file.
+The file name of the (sky-subtracted) values dataset.
+When any of the columns need values to associate with the input labels (for example to measure the brightness/magnitude of a galaxy), MakeCatalog will look into a ``values'' dataset for the respective pixel values.
+In most common processing, this is the actual astronomical image that the 
labels were defined, or detected, over.
+The HDU/extension of this dataset in the given file can be specified with 
@option{--valueshdu}.
+If this option is not called, MakeCatalog will look for the given extension in 
the main input file.
 
 @item --valueshdu=STR/INT
-The name or number (counting from zero) of the extension containing the
-``values'' dataset, see the descriptions above and those in
-@option{--valuesfile} for more.
+The name or number (counting from zero) of the extension containing the 
``values'' dataset, see the descriptions above and those in 
@option{--valuesfile} for more.
 
 @item -s STR/FLT
 @itemx --insky=STR/FLT
-Sky value as a single number, or the file name containing a dataset
-(different values per pixel or tile). The Sky dataset is only necessary
-when @option{--subtractsky} is called or when a column directly related to
-the Sky value is requested (currently @option{--sky}). This dataset may be
-a tessellation, with one element per tile (see @option{--oneelempertile} of
-NoiseChisel's @ref{Processing options}).
-
-When the Sky dataset is necessary but this option is not called,
-MakeCatalog will assume it is an HDU/extension (specified by
-@option{--skyhdu}) in one of the already given files. First it will look
-for it in the @option{--valuesfile} (if it is given) and then the main
-input file (given as an argument).
+Sky value as a single number, or the file name containing a dataset (different 
values per pixel or tile).
+The Sky dataset is only necessary when @option{--subtractsky} is called or 
when a column directly related to the Sky value is requested (currently 
@option{--sky}).
+This dataset may be a tessellation, with one element per tile (see 
@option{--oneelempertile} of NoiseChisel's @ref{Processing options}).
+
+When the Sky dataset is necessary but this option is not called, MakeCatalog 
will assume it is an HDU/extension (specified by @option{--skyhdu}) in one of 
the already given files.
+First it will look for it in the @option{--valuesfile} (if it is given) and 
then the main input file (given as an argument).
 
 By default the values dataset is assumed to be already Sky subtracted, so
 this dataset is not necessary for many of the columns.
@@ -15790,32 +14526,23 @@ processing.
 
 @item -t STR/FLT
 @itemx --instd=STR/FLT
-Sky standard deviation value as a single number, or the file name
-containing a dataset (different values per pixel or tile). With the
-@option{--variance} option you can tell MakeCatalog to interpret this
-value/dataset as a variance image, not standard deviation.
-
-@strong{Important note:} This must only be the SKY standard deviation or
-variance (not including the signal's contribution to the error). In other
-words, the final standard deviation of a pixel depends on how much signal
-there is in it. MakeCatalog will find the amount of signal within each
-pixel (while subtracting the Sky, if @option{--subtractsky} is called) and
-account for the extra error due to it's value (signal). Therefore if the
-input standard deviation (or variance) image also contains the contribution
-of signal to the error, then the final error measurements will be
-over-estimated.
+Sky standard deviation value as a single number, or the file name containing a 
dataset (different values per pixel or tile).
+With the @option{--variance} option you can tell MakeCatalog to interpret this 
value/dataset as a variance image, not standard deviation.
+
+@strong{Important note:} This must only be the SKY standard deviation or 
variance (not including the signal's contribution to the error).
+In other words, the final standard deviation of a pixel depends on how much 
signal there is in it.
+MakeCatalog will find the amount of signal within each pixel (while subtracting the Sky, if @option{--subtractsky} is called) and account for the extra error due to its value (signal).
+Therefore, if the input standard deviation (or variance) image also contains the contribution of signal to the error, then the final error measurements will be over-estimated.
 
 @item --stdhdu=STR
 The HDU of the Sky value standard deviation image.
 
 @item --variance
-The dataset given to @option{--stdfile} (and @option{--stdhdu} has the Sky
-variance of every pixel, not the Sky standard deviation.
+The dataset given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.
 
 @item -z FLT
 @itemx --zeropoint=FLT
-The zero point magnitude for the input image, see @ref{Flux Brightness and
-magnitude}.
+The zero point magnitude for the input image, see @ref{Flux Brightness and 
magnitude}.
 
 @end table
 
@@ -15823,139 +14550,86 @@ magnitude}.
 @node Upper-limit settings, MakeCatalog measurements, MakeCatalog inputs and 
basic settings, Invoking astmkcatalog
 @subsubsection Upper-limit settings
 
-The upper-limit magnitude was discussed in @ref{Quantifying measurement
-limits}. Unlike other measured values/columns in MakeCatalog, the upper
-limit magnitude needs several extra parameters which are discussed
-here. All the options specific to the upper-limit measurements start with
-@option{up} for ``upper-limit''. The only exception is @option{--envseed}
-that is also present in other programs and is general for any job requiring
-random number generation in Gnuastro (see @ref{Generating random numbers}).
+The upper-limit magnitude was discussed in @ref{Quantifying measurement 
limits}.
+Unlike other measured values/columns in MakeCatalog, the upper limit magnitude 
needs several extra parameters which are discussed here.
+All the options specific to the upper-limit measurements start with 
@option{up} for ``upper-limit''.
+The only exception is @option{--envseed} that is also present in other 
programs and is general for any job requiring random number generation in 
Gnuastro (see @ref{Generating random numbers}).
 
 @cindex Reproducibility
-One very important consideration in Gnuastro is reproducibility. Therefore,
-the values to all of these parameters along with others (like the random
-number generator type and seed) are also reported in the comments of the
-final catalog when the upper limit magnitude column is desired. The random
-seed that is used to define the random positions for each object or clump
-is unique and set based on the (optionally) given seed, the total number of
-objects and clumps and also the labels of the clumps and objects. So with
-identical inputs, an identical upper-limit magnitude will be
-found. However, even if the seed is identical, when the ordering of the
-object/clump labels differs between different runs, the result of
-upper-limit measurements will not be identical.
-
-MakeCatalog will randomly place the object/clump footprint over the
-dataset. When the randomly placed footprint doesn't fall on any object or
-masked region (see @option{--upmaskfile}) it will be used in the final
-distribution. Otherwise that particular random position will be ignored and
-another random position will be generated. Finally, when the distribution
-has the desired number of successfully measured random samples
-(@option{--upnum}) the distribution's properties will be measured and
-placed in the catalog.
-
-When the profile is very large or the image is significantly covered by
-detections, it might not be possible to find the desired number of
-samplings in a reasonable time. MakeProfiles will continue searching until
-it is unable to find a successful position (since the last successful
-measurement@footnote{The counting of failed positions restarts on every
-successful measurement.}), for a large multiple of @option{--upnum}
-(currently@footnote{In Gnuastro's source, this constant number is defined
-as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in
-@file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is
-10). If @option{--upnum} successful samples cannot be found until this
-limit is reached, MakeCatalog will set the upper-limit magnitude for that
-object to NaN (blank).
-
-MakeCatalog will also print a warning if the range of positions available
-for the labeled region is smaller than double the size of the region. In
-such cases, the limited range of random positions can artificially decrease
-the standard deviation of the final distribution. If your dataset can allow
-it (it is large enough), it is recommended to use a larger range if you see
-such warnings.
+One very important consideration in Gnuastro is reproducibility.
+Therefore, the values to all of these parameters along with others (like the 
random number generator type and seed) are also reported in the comments of the 
final catalog when the upper limit magnitude column is desired.
+The random seed that is used to define the random positions for each object or 
clump is unique and set based on the (optionally) given seed, the total number 
of objects and clumps and also the labels of the clumps and objects.
+So with identical inputs, an identical upper-limit magnitude will be found.
+However, even if the seed is identical, when the ordering of the object/clump 
labels differs between different runs, the result of upper-limit measurements 
will not be identical.
+
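+For example (illustrative values), the generator type and seed can be fixed through the environment as described in @ref{Generating random numbers}:
+
+@example
+$ export GSL_RNG_TYPE=ranlxs1
+$ export GSL_RNG_SEED=1
+$ astmkcatalog seg.fits --envseed --upnum=1000 --upperlimitmag
+@end example
+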
+MakeCatalog will randomly place the object/clump footprint over the dataset.
+When the randomly placed footprint doesn't fall on any object or masked region (see @option{--upmaskfile}), it will be used in the final distribution.
+Otherwise that particular random position will be ignored and another random 
position will be generated.
+Finally, when the distribution has the desired number of successfully measured 
random samples (@option{--upnum}) the distribution's properties will be 
measured and placed in the catalog.
+
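+Schematically (a hypothetical sketch with made-up helper names, not Gnuastro's source), the sampling loop behaves like:
+
+@example
+while(nsuccess < upnum && nfails < maxfails)
+  @{
+    random_position(&x, &y);          /* Within --uprange.    */
+    if( footprint_is_clean(x, y) )    /* No labels, no mask.  */
+      @{
+        measured[nsuccess++] = sum_over_footprint(x, y);
+        nfails=0;                     /* Restarts on success. */
+      @}
+    else ++nfails;
+  @}
+@end example
+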
+When the profile is very large or the image is significantly covered by 
detections, it might not be possible to find the desired number of samplings in 
a reasonable time.
+MakeCatalog will continue searching until it is unable to find a successful position (since the last successful measurement@footnote{The counting of failed positions restarts on every successful measurement.}), for a large multiple of @option{--upnum} (currently@footnote{In Gnuastro's source, this constant number is defined as the @code{MKCATALOG_UPPERLIMIT_MAXFAILS_MULTIP} macro in @file{bin/mkcatalog/main.h}, see @ref{Downloading the source}.} this is 10).
+If @option{--upnum} successful samples cannot be found until this limit is 
reached, MakeCatalog will set the upper-limit magnitude for that object to NaN 
(blank).
+
+MakeCatalog will also print a warning if the range of positions available for 
the labeled region is smaller than double the size of the region.
+In such cases, the limited range of random positions can artificially decrease 
the standard deviation of the final distribution.
+If your dataset allows it (i.e., it is large enough), it is recommended to use a larger range if you see such warnings.
 
 @table @option
 
 @item --upmaskfile=STR
-File name of mask image to use for upper-limit calculation. In some cases
-(especially when doing matched photometry), the object labels specified in
-the main input and mask image might not be adequate. In other words they do
-not necessarily have to cover @emph{all} detected objects: the user might
-have selected only a few of the objects in their labeled image. This option
-can be used to ignore regions in the image in these situations when
-estimating the upper-limit magnitude. All the non-zero pixels of the image
-specified by this option (in the @option{--upmaskhdu} extension) will be
-ignored in the upper-limit magnitude measurements.
-
-For example, when you are using labels from another image, you can give
-NoiseChisel's objects image output for this image as the value to this
-option. In this way, you can be sure that regions with data do not harm
-your distribution. See @ref{Quantifying measurement limits} for more on the
-upper limit magnitude.
+File name of mask image to use for upper-limit calculation.
+In some cases (especially when doing matched photometry), the object labels 
specified in the main input and mask image might not be adequate.
+In other words they do not necessarily have to cover @emph{all} detected 
objects: the user might have selected only a few of the objects in their 
labeled image.
+This option can be used to ignore regions in the image in these situations 
when estimating the upper-limit magnitude.
+All the non-zero pixels of the image specified by this option (in the 
@option{--upmaskhdu} extension) will be ignored in the upper-limit magnitude 
measurements.
+
+For example, when you are using labels from another image, you can give 
NoiseChisel's objects image output for this image as the value to this option.
+In this way, you can be sure that regions with data do not harm your 
distribution.
+See @ref{Quantifying measurement limits} for more on the upper limit magnitude.
 
 @item --upmaskhdu=STR
 The extension in the file specified by @option{--upmask}.
 
 @item --upnum=INT
-The number of random samples to take for all the objects. A larger value to
-this option will give a more accurate result (asymptotically), but it will
-also slow down the process. When a randomly positioned sample overlaps with
-a detected/masked pixel it is not counted and another random position is
-found until the object completely lies over an undetected region. So you
-can be sure that for each object, this many samples over undetected objects
-are made. See the upper limit magnitude discussion in @ref{Quantifying
-measurement limits} for more.
+The number of random samples to take for all the objects.
+A larger value for this option will give a more accurate result (asymptotically), but it will also slow down the process.
+When a randomly positioned sample overlaps with a detected/masked pixel, it is not counted and another random position is found until the object completely lies over an undetected region.
+So you can be sure that for each object, this many samples over undetected regions are made.
+See the upper limit magnitude discussion in @ref{Quantifying measurement 
limits} for more.
 
 @item --uprange=INT,INT
-The range/width of the region (in pixels) to do random sampling along each
-dimension of the input image around each object's position. This is not a
-mandatory option and if not given (or given a value of zero in a
-dimension), the full possible range of the dataset along that dimension
-will be used. This is useful when the noise properties of the dataset vary
-gradually. In such cases, using the full range of the input dataset is
-going to bias the result. However, note that decreasing the range of
-available positions too much will also artificially decrease the standard
-deviation of the final distribution (and thus bias the upper-limit
-measurement).
+The range/width of the region (in pixels) to do random sampling along each 
dimension of the input image around each object's position.
+This is not a mandatory option and if not given (or given a value of zero in a 
dimension), the full possible range of the dataset along that dimension will be 
used.
+This is useful when the noise properties of the dataset vary gradually.
+In such cases, using the full range of the input dataset is going to bias the 
result.
+However, note that decreasing the range of available positions too much will 
also artificially decrease the standard deviation of the final distribution 
(and thus bias the upper-limit measurement).
 
 @item --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Read the random number generator type and seed value from the environment
-(see @ref{Generating random numbers}). Random numbers are used in
-calculating the random positions of different samples of each object.
+Read the random number generator type and seed value from the environment (see 
@ref{Generating random numbers}).
+Random numbers are used in calculating the random positions of different 
samples of each object.
 
 @item --upsigmaclip=FLT,FLT
-The raw distribution of random values will not be used to find the
-upper-limit magnitude, it will first be @mymath{\sigma}-clipped (see
-@ref{Sigma clipping}) to avoid outliers in the distribution (mainly the
-faint undetected wings of bright/large objects in the image). This option
-takes two values: the first is the multiple of @mymath{\sigma}, and the
-second is the termination criteria. If the latter is larger than 1, it is
-read as an integer number and will be the number of times to clip. If it is
-smaller than 1, it is interpreted as the tolerance level to stop
-clipping. See @ref{Sigma clipping} for a complete explanation.
+The raw distribution of random values will not be used to find the upper-limit 
magnitude, it will first be @mymath{\sigma}-clipped (see @ref{Sigma clipping}) 
to avoid outliers in the distribution (mainly the faint undetected wings of 
bright/large objects in the image).
+This option takes two values: the first is the multiple of @mymath{\sigma}, 
and the second is the termination criteria.
+If the latter is larger than 1, it is read as an integer number and will be 
the number of times to clip.
+If it is smaller than 1, it is interpreted as the tolerance level to stop clipping.
+See @ref{Sigma clipping} for a complete explanation.
 
 @item --upnsigma=FLT
-The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or
-@mymath{\sigma}) used to measure the upper-limit brightness or
-magnitude.
+The multiple of the final (@mymath{\sigma}-clipped) standard deviation (or 
@mymath{\sigma}) used to measure the upper-limit brightness or magnitude.
 
 @item --checkuplim=INT[,INT]
-Print a table of positions and measured values for all the full random
-distribution used for one particular object or clump. If only one integer
-is given to this option, it is interpreted to be an object's label. If two
-values are given, the first is the object label and the second is the ID of
-requested clump within it.
-
-The output is a table with three columns (its type is determined with the
-@option{--tableformat} option, see @ref{Input output options}). The first
-two columns are the position of the first pixel in each random sampling of
-this particular object/clump. The the third column is the measured flux
-over that region. If the region overlapped with a detection or masked
-pixel, then its measured value will be a NaN (not-a-number). The total
-number of rows is thus unknown, but you can be sure that the number of rows
-with non-NaN measurements is the number given to the @option{--upnum}
-option.
+Print a table of positions and measured values for the full random distribution used for one particular object or clump.
+If only one integer is given to this option, it is interpreted to be an 
object's label.
+If two values are given, the first is the object label and the second is the ID of the requested clump within it.
+
+The output is a table with three columns (its type is determined with the 
@option{--tableformat} option, see @ref{Input output options}).
+The first two columns are the position of the first pixel in each random 
sampling of this particular object/clump.
+The third column is the measured flux over that region.
+If the region overlapped with a detection or masked pixel, then its measured 
value will be a NaN (not-a-number).
+The total number of rows is thus unknown, but you can be sure that the number 
of rows with non-NaN measurements is the number given to the @option{--upnum} 
option.
 
 @end table
 
@@ -15963,36 +14637,23 @@ option.
 @node MakeCatalog measurements, MakeCatalog output, Upper-limit settings, 
Invoking astmkcatalog
 @subsubsection MakeCatalog measurements
 
-The final group of options particular to MakeCatalog are those that specify
-which measurements/columns should be written into the final output
-table. The current measurements in MakeCatalog are those which only produce
-one final value for each label (for example its total brightness: a single
-number). All the different label's measurements can be written as one
-column in a final table/catalog that contains other columns for other
-similar single-number measurements.
-
-Command-line options are used to identify which measurements you want in
-the final catalog(s) and in what order. If any of the options below is
-called on the command line or in any of the configuration files, it will be
-included as a column in the output catalog. The order of the columns is in
-the same order as the options were seen by MakeCatalog (see
-@ref{Configuration file precedence}). Some of the columns apply to both
-``objects'' and ``clumps'' and some are particular to only one of them (for
-the definition of ``objects'' and ``clumps'', see
-@ref{Segment}). Columns/options that are unique to one catalog (only
-objects, or only clumps), are explicitly marked with [Objects] or [Clumps]
-to specify the catalog they will be placed in.
+The final group of options particular to MakeCatalog are those that specify 
which measurements/columns should be written into the final output table.
+The current measurements in MakeCatalog are those which only produce one final 
value for each label (for example its total brightness: a single number).
+All the different labels' measurements can be written as one column in a final table/catalog that contains other columns for other similar single-number measurements.
+
+Command-line options are used to identify which measurements you want in the 
final catalog(s) and in what order.
+If any of the options below is called on the command line or in any of the 
configuration files, it will be included as a column in the output catalog.
+The order of the columns is the same as the order in which the options were seen by MakeCatalog (see @ref{Configuration file precedence}).
+Some of the columns apply to both ``objects'' and ``clumps'' and some are 
particular to only one of them (for the definition of ``objects'' and 
``clumps'', see @ref{Segment}).
+Columns/options that are unique to one catalog (only objects, or only clumps), 
are explicitly marked with [Objects] or [Clumps] to specify the catalog they 
will be placed in.
 
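+For example (an illustrative command), the following requests five columns, which will appear in the output in this exact order:
+
+@example
+$ astmkcatalog seg.fits --ids --x --y --magnitude --sn
+@end example
+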
 @table @option
 
 @item --i
 @itemx --ids
-This is a unique option which can add multiple columns to the final
-catalog(s). Calling this option will put the object IDs (@option{--objid})
-in the objects catalog and host-object-ID (@option{--hostobjid}) and
-ID-in-host-object (@option{--idinhostobj}) into the clumps catalog. Hence
-if only object catalogs are required, it has the same effect as
-@option{--objid}.
+This is a unique option which can add multiple columns to the final catalog(s).
+Calling this option will put the object IDs (@option{--objid}) in the objects 
catalog and host-object-ID (@option{--hostobjid}) and ID-in-host-object 
(@option{--idinhostobj}) into the clumps catalog.
+Hence if only object catalogs are required, it has the same effect as 
@option{--objid}.
 
 @item --objid
 [Objects] ID of this object.
@@ -16006,28 +14667,22 @@ if only object catalogs are required, it has the same 
effect as
 
 @item -x
 @itemx --x
-The flux weighted center of all objects and clumps along the first FITS
-axis (horizontal when viewed in SAO ds9), see @mymath{\overline{x}} in
-@ref{Measuring elliptical parameters}. The weight has to have a positive
-value (pixel value larger than the Sky value) to be meaningful! Specially
-when doing matched photometry, this might not happen: no pixel value might
-be above the Sky value. For such detections, the geometric center will be
-reported in this column (see @option{--geox}). You can use
-@option{--weightarea} to see which was used.
+The flux weighted center of all objects and clumps along the first FITS axis 
(horizontal when viewed in SAO ds9), see @mymath{\overline{x}} in 
@ref{Measuring elliptical parameters}.
+The weight has to have a positive value (pixel value larger than the Sky value) to be meaningful!
+Especially when doing matched photometry, this might not happen: no pixel value might be above the Sky value.
+For such detections, the geometric center will be reported in this column (see 
@option{--geox}).
+You can use @option{--weightarea} to see which was used.
 
 @item -y
 @itemx --y
-The flux weighted center of all objects and clumps along the second FITS
-axis (vertical when viewed in SAO ds9). See @option{--x}.
+The flux weighted center of all objects and clumps along the second FITS axis 
(vertical when viewed in SAO ds9).
+See @option{--x}.
 
 @item --geox
-The geometric center of all objects and clumps along the first FITS
-axis axis. The geometric center is the average pixel positions
-irrespective of their pixel values.
+The geometric center of all objects and clumps along the first FITS axis.
+The geometric center is the average of the pixel positions, irrespective of their pixel values.
 
 @item --geoy
-The geometric center of all objects and clumps along the second FITS
-axis axis, see @option{--geox}.
+The geometric center of all objects and clumps along the second FITS axis, see @option{--geox}.
 
 @item --minx
 The minimum position of all objects and clumps along the first FITS axis.
@@ -16042,123 +14697,97 @@ The minimum position of all objects and clumps along 
the second FITS axis.
 The maximum position of all objects and clumps along the second FITS axis.
 
 @item --clumpsx
-[Objects] The flux weighted center of all the clumps in this
-object along the first FITS axis. See @option{--x}.
+[Objects] The flux weighted center of all the clumps in this object along the 
first FITS axis.
+See @option{--x}.
 
 @item --clumpsy
-[Objects] The flux weighted center of all the clumps in this
-object along the second FITS axis. See @option{--x}.
+[Objects] The flux weighted center of all the clumps in this object along the 
second FITS axis.
+See @option{--x}.
 
 @item --clumpsgeox
-[Objects] The geometric center of all the clumps in this object along
-the first FITS axis. See @option{--geox}.
+[Objects] The geometric center of all the clumps in this object along the 
first FITS axis.
+See @option{--geox}.
 
 @item --clumpsgeoy
-[Objects] The geometric center of all the clumps in this object along
-the second FITS axis. See @option{--geox}.
+[Objects] The geometric center of all the clumps in this object along the 
second FITS axis.
+See @option{--geox}.
 
 @item -r
 @itemx --ra
-Flux weighted right ascension of all objects or clumps, see
-@option{--x}. This is just an alias for one of the lower-level
-@option{--w1} or @option{--w2} options. Using the FITS WCS keywords
-(@code{CTYPE}), MakeCatalog will determine which axis corresponds to the
-right ascension. If no @code{CTYPE} keywords start with @code{RA}, an error
-will be printed when requesting this column and MakeCatalog will abort.
+Flux weighted right ascension of all objects or clumps, see @option{--x}.
+This is just an alias for one of the lower-level @option{--w1} or 
@option{--w2} options.
+Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which 
axis corresponds to the right ascension.
+If no @code{CTYPE} keywords start with @code{RA}, an error will be printed 
when requesting this column and MakeCatalog will abort.
 
 @item -d
 @itemx --dec
-Flux weighted declination of all objects or clumps, see @option{--x}. This
-is just an alias for one of the lower-level @option{--w1} or @option{--w2}
-options. Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will
-determine which axis corresponds to the declination. If no @code{CTYPE}
-keywords start with @code{DEC}, an error will be printed when requesting
-this column and MakeCatalog will abort.
+Flux weighted declination of all objects or clumps, see @option{--x}.
+This is just an alias for one of the lower-level @option{--w1} or 
@option{--w2} options.
+Using the FITS WCS keywords (@code{CTYPE}), MakeCatalog will determine which 
axis corresponds to the declination.
+If no @code{CTYPE} keywords start with @code{DEC}, an error will be printed 
when requesting this column and MakeCatalog will abort.
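+
+For example, the minimal sketch below would request these world-coordinate columns along with the IDs and magnitudes (the file name @file{seg.fits} is only hypothetical, standing in for any labeled input):
+
+@example
+$ astmkcatalog seg.fits --ids --ra --dec --magnitude
+@end example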
 
 @item --w1
-Flux weighted first WCS axis of all objects or clumps, see
-@option{--x}. The first WCS axis is commonly used as right ascension in
-images.
+Flux weighted first WCS axis of all objects or clumps, see @option{--x}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --w2
-Flux weighted second WCS axis of all objects or clumps, see
-@option{--x}. The second WCS axis is commonly used as declination in
-images.
+Flux weighted second WCS axis of all objects or clumps, see @option{--x}.
+The second WCS axis is commonly used as declination in images.
 
 @item --geow1
-Geometric center in first WCS axis of all objects or clumps, see
-@option{--geox}. The first WCS axis is commonly used as right ascension in
-images.
+Geometric center in first WCS axis of all objects or clumps, see 
@option{--geox}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --geow2
-Geometric center in second WCS axis of all objects or clumps, see
-@option{--geox}. The second WCS axis is commonly used as declination in
-images.
+Geometric center in second WCS axis of all objects or clumps, see 
@option{--geox}.
+The second WCS axis is commonly used as declination in images.
 
 @item --clumpsw1
-[Objects] Flux weighted center in first WCS axis of all clumps in this
-object, see @option{--x}. The first WCS axis is commonly used as right
-ascension in images.
+[Objects] Flux weighted center in first WCS axis of all clumps in this object, 
see @option{--x}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --clumpsw2
-[Objects] Flux weighted declination of all clumps in this object, see
-@option{--x}. The second WCS axis is commonly used as declination in
-images.
+[Objects] Flux weighted center in second WCS axis of all clumps in this object, see @option{--x}.
+The second WCS axis is commonly used as declination in images.
 
 @item --clumpsgeow1
-[Objects] Geometric center right ascension of all clumps in this object,
-see @option{--geox}. The first WCS axis is commonly used as right ascension
-in images.
+[Objects] Geometric center in first WCS axis of all clumps in this object, see @option{--geox}.
+The first WCS axis is commonly used as right ascension in images.
 
 @item --clumpsgeow2
-[Objects] Geometric center declination of all clumps in this object, see
-@option{--geox}. The second WCS axis is commonly used as declination in
-images.
+[Objects] Geometric center in second WCS axis of all clumps in this object, see @option{--geox}.
+The second WCS axis is commonly used as declination in images.
 
 @item -b
 @itemx --brightness
-The brightness (sum of all pixel values), see @ref{Flux Brightness and
-magnitude}. For clumps, the ambient brightness (flux of river pixels around
-the clump multiplied by the area of the clump) is removed, see
-@option{--riverflux}. So the sum of all the clumps brightness in the clump
-catalog will be smaller than the total clump brightness in the
-@option{--clumpbrightness} column of the objects catalog.
+The brightness (sum of all pixel values), see @ref{Flux Brightness and 
magnitude}.
+For clumps, the ambient brightness (flux of river pixels around the clump 
multiplied by the area of the clump) is removed, see @option{--riverflux}.
+So the sum of the brightness of all the clumps in the clump catalog will be smaller than the total clump brightness in the @option{--clumpbrightness} column of the objects catalog.
 
-If no usable pixels are present over the clump or object (for example they
-are all blank), the returned value will be NaN (note that zero is
-meaningful).
+If no usable pixels are present over the clump or object (for example they are 
all blank), the returned value will be NaN (note that zero is meaningful).
 
 @item --brightnesserr
-The (@mymath{1\sigma}) error in measuring the brightness of objects or
-clumps.
+The (@mymath{1\sigma}) error in measuring the brightness of objects or clumps.
 
 @item --clumpbrightness
-[Objects] The total brightness of the clumps within an object. This is
-simply the sum of the pixels associated with clumps in the object. If no
-usable pixels are present over the clump or object (for example they are
-all blank), the stored value will be NaN (note that zero is meaningful).
+[Objects] The total brightness of the clumps within an object.
+This is simply the sum of the pixels associated with clumps in the object.
+If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --brightnessnoriver
-[Clumps] The Sky (not river) subtracted clump brightness. By definition,
-for the clumps, the average brightness of the rivers surrounding it are
-subtracted from it for a first order accounting for contamination by
-neighbors. In cases where you will be calculating the flux brightness
-difference later (one example below) the contamination will be (mostly)
-removed at that stage, which is why this column was added.
+[Clumps] The Sky (not river) subtracted clump brightness.
+By definition, for the clumps, the average brightness of the rivers surrounding each clump is subtracted from it as a first-order accounting for contamination by neighbors.
+In cases where you will be calculating the flux brightness difference later (one example below), the contamination will be (mostly) removed at that stage, which is why this column was added.
 
-If no usable pixels are present over the clump or object (for example they
-are all blank), the stored value will be NaN (note that zero is
-meaningful).
+If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --mean
-The mean sky subtracted value of pixels within the object or clump. For
-clumps, the average river flux is subtracted from the sky subtracted
-mean.
+The mean sky subtracted value of pixels within the object or clump.
+For clumps, the average river flux is subtracted from the sky subtracted mean.
 
 @item --median
-The median sky subtracted value of pixels within the object or clump. For
-clumps, the average river flux is subtracted from the sky subtracted
-median.
+The median sky subtracted value of pixels within the object or clump.
+For clumps, the average river flux is subtracted from the sky subtracted 
median.
 
 @item -m
 @itemx --magnitude
@@ -16166,79 +14795,56 @@ The magnitude of clumps or objects, see 
@option{--brightness}.
 
 @item -e
 @itemx --magnitudeerr
-The magnitude error of clumps or objects. The magnitude error is calculated
-from the signal-to-noise ratio (see @option{--sn} and @ref{Quantifying
-measurement limits}). Note that until now this error assumes uncorrelated
-pixel values and also does not include the error in estimating the aperture
-(or error in generating the labeled image).
+The magnitude error of clumps or objects.
+The magnitude error is calculated from the signal-to-noise ratio (see 
@option{--sn} and @ref{Quantifying measurement limits}).
+Note that this error currently assumes uncorrelated pixel values and also does not include the error in estimating the aperture (or the error in generating the labeled image).
 
 For now these factors have to be found by other means.
-@url{https://savannah.gnu.org/task/index.php?14124, Task 14124} has been
-defined for work on adding these sources of error too.
+@url{https://savannah.gnu.org/task/index.php?14124, Task 14124} has been 
defined for work on adding these sources of error too.
 
 @item --clumpsmagnitude
-[Objects] The magnitude of all clumps in this object, see
-@option{--clumpbrightness}.
+[Objects] The magnitude of all clumps in this object, see 
@option{--clumpbrightness}.
 
 @item --upperlimit
-The upper limit value (in units of the input image) for this object or
-clump. See @ref{Quantifying measurement limits} and @ref{Upper-limit
-settings} for a complete explanation. This is very important for the
-fainter and smaller objects in the image where the measured magnitudes are
-not reliable.
+The upper limit value (in units of the input image) for this object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
 @item --upperlimitmag
-The upper limit magnitude for this object or clump. See @ref{Quantifying
-measurement limits} and @ref{Upper-limit settings} for a complete
-explanation. This is very important for the fainter and smaller objects in
-the image where the measured magnitudes are not reliable.
+The upper limit magnitude for this object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
 @item --upperlimitonesigma
-The @mymath{1\sigma} upper limit value (in units of the input image) for
-this object or clump. See @ref{Quantifying measurement limits} and
-@ref{Upper-limit settings} for a complete explanation. When
-@option{--upnsigma=1}, this column's values will be the same as
-@option{--upperlimit}.
+The @mymath{1\sigma} upper limit value (in units of the input image) for this 
object or clump.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+When @option{--upnsigma=1}, this column's values will be the same as 
@option{--upperlimit}.
 
 @item --upperlimitsigma
-The position of the total brightness measured within the distribution of
-randomly placed upperlimit measurements in units of the distribution's
-@mymath{\sigma} or standard deviation. See @ref{Quantifying measurement
-limits} and @ref{Upper-limit settings} for a complete explanation.
+The position of the total brightness measured within the distribution of 
randomly placed upperlimit measurements in units of the distribution's 
@mymath{\sigma} or standard deviation.
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
 
 @item --upperlimitquantile
-The position of the total brightness measured within the distribution of
-randomly placed upperlimit measurements as a quantile (value between 0 or
-1). See @ref{Quantifying measurement limits} and @ref{Upper-limit settings}
-for a complete explanation. If the object is brighter than the brightest
-randomly placed profile, a value of @code{inf} is returned. If it is less
-than the minimum, a value of @code{-inf} is reported.
+The position of the total brightness measured within the distribution of randomly placed upperlimit measurements as a quantile (a value between 0 and 1).
+See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
+If the object is brighter than the brightest randomly placed profile, a value 
of @code{inf} is returned.
+If it is less than the minimum, a value of @code{-inf} is reported.
 
 @item --upperlimitskew
 @cindex Skewness
-This column contains the non-parametric skew of the @mymath{\sigma}-clipped
-random distribution that was used to estimate the upper-limit
-magnitude. Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and
-@mymath{\sigma} as the standard deviation, the traditional definition of
-skewness is defined as: @mymath{(\mu-\nu)/\sigma}.
-
-This can be a good measure to see how much you can trust the random
-measurements, or in other words, how accurately the regions with signal
-have been masked/detected. If the skewness is strong (and to the positive),
-then you can tell that you have a lot of undetected signal in the dataset,
-and therefore that the upper-limit measurement (and other measurements) are
-not reliable.
+This column contains the non-parametric skew of the @mymath{\sigma}-clipped 
random distribution that was used to estimate the upper-limit magnitude.
+Taking @mymath{\mu} as the mean, @mymath{\nu} as the median and 
@mymath{\sigma} as the standard deviation, the traditional definition of 
skewness is defined as: @mymath{(\mu-\nu)/\sigma}.
+
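+As a simple illustration with made-up numbers: if the clipped distribution has @mymath{\mu=0.05}, @mymath{\nu=0.03} and @mymath{\sigma=0.04} (in units of the input), the reported skew will be @mymath{(0.05-0.03)/0.04=0.5}.
+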
+This can be a good measure to see how much you can trust the random measurements, or in other words, how accurately the regions with signal have been masked/detected.
+If the skewness is strong (and positive), then you can tell that you have a lot of undetected signal in the dataset, and therefore that the upper-limit measurement (and other measurements) are not reliable.
 
 @item --riverave
-[Clumps] The average brightness of the river pixels around this
-clump. River pixels were defined in Akhlaghi and Ichikawa 2015. In
-short they are the pixels immediately outside of the clumps. This
-value is used internally to find the brightness (or magnitude) and
-signal to noise ratio of the clumps. It can generally also be used as
-a scale to gauge the base (ambient) flux surrounding the clump. In
-case there was no river pixels, then this column will have the value
-of the Sky under the clump. So note that this value is @emph{not} sky
-subtracted.
+[Clumps] The average brightness of the river pixels around this clump.
+River pixels were defined in Akhlaghi and Ichikawa 2015.
+In short they are the pixels immediately outside of the clumps.
+This value is used internally to find the brightness (or magnitude) and signal 
to noise ratio of the clumps.
+It can generally also be used as a scale to gauge the base (ambient) flux 
surrounding the clump.
+If there were no river pixels, then this column will have the value of the Sky under the clump.
+So note that this value is @emph{not} sky subtracted.
 
 @item --rivernum
 [Clumps] The number of river pixels around this clump, see
@@ -16246,18 +14852,16 @@ subtracted.
 
 @item -n
 @itemx --sn
-The Signal to noise ratio (S/N) of all clumps or objects. See Akhlaghi
-and Ichikawa (2015) for the exact equations used.
+The signal-to-noise ratio (S/N) of all clumps or objects.
+See Akhlaghi and Ichikawa (2015) for the exact equations used.
 
 @item --sky
-The sky flux (per pixel) value under this object or clump. This is
-actually the mean value of all the pixels in the sky image that lie on
-the same position as the object or clump.
+The sky flux (per pixel) value under this object or clump.
+This is actually the mean value of all the pixels in the sky image that lie on 
the same position as the object or clump.
 
 @item --std
-The sky value standard deviation (per pixel) for this clump or object. This
-is the square root of the mean variance under the object, or the root mean
-square.
+The sky value standard deviation (per pixel) for this clump or object.
+This is the square root of the mean variance under the object, or the root 
mean square.
 
 @item -C
 @itemx --numclumps
@@ -16265,60 +14869,50 @@ square.
 
 @item -a
 @itemx --area
-The raw area (number of pixels) in any clump or object independent of what
-pixel it lies over (if it is NaN/blank or unused for example).
+The raw area (number of pixels) in any clump or object independent of what 
pixel it lies over (if it is NaN/blank or unused for example).
 
 @item --clumpsarea
 [Objects] The total area of all the clumps in this object.
 
 @item --weightarea
-The area (number of pixels) used in the flux weighted position
-calculations.
+The area (number of pixels) used in the flux weighted position calculations.
 
 @item --geoarea
-The area of all the pixels labeled with an object or clump. Note that
-unlike @option{--area}, pixel values are completely ignored in this
-column. For example, if a pixel value is blank, it won't be counted in
-@option{--area}, but will be counted here.
+The area of all the pixels labeled with an object or clump.
+Note that unlike @option{--area}, pixel values are completely ignored in this 
column.
+For example, if a pixel value is blank, it won't be counted in 
@option{--area}, but will be counted here.
 
 @item -A
 @itemx --semimajor
-The pixel-value weighted root mean square (RMS) along the semi-major axis
-of the profile (assuming it is an ellipse) in units of pixels. See
-@ref{Measuring elliptical parameters}.
+The pixel-value weighted root mean square (RMS) along the semi-major axis of 
the profile (assuming it is an ellipse) in units of pixels.
+See @ref{Measuring elliptical parameters}.
 
 @item -B
 @itemx --semiminor
-The pixel-value weighted root mean square (RMS) along the semi-minor axis
-of the profile (assuming it is an ellipse) in units of pixels. See
-@ref{Measuring elliptical parameters}.
+The pixel-value weighted root mean square (RMS) along the semi-minor axis of 
the profile (assuming it is an ellipse) in units of pixels.
+See @ref{Measuring elliptical parameters}.
 
 @item --axisratio
-The pixel-value weighted axis ratio (semi-minor/semi-major) of the object
-or clump.
+The pixel-value weighted axis ratio (semi-minor/semi-major) of the object or 
clump.
 
 @item -p
 @itemx --positionangle
 The pixel-value weighted angle of the semi-major axis with the first FITS
-axis in degrees. See @ref{Measuring elliptical parameters}.
+axis in degrees.
+See @ref{Measuring elliptical parameters}.
 
 @item --geosemimajor
-The geometric (ignoring pixel values) root mean square (RMS) along the
-semi-major axis of the profile, assuming it is an ellipse, in units of
-pixels.
+The geometric (ignoring pixel values) root mean square (RMS) along the 
semi-major axis of the profile, assuming it is an ellipse, in units of pixels.
 
 @item --geosemiminor
-The geometric (ignoring pixel values) root mean square (RMS) along the
-semi-minor axis of the profile, assuming it is an ellipse, in units of
-pixels.
+The geometric (ignoring pixel values) root mean square (RMS) along the 
semi-minor axis of the profile, assuming it is an ellipse, in units of pixels.
 
 @item --geoaxisratio
 The geometric (ignoring pixel values) axis ratio of the profile, assuming
 it is an ellipse.
 
 @item --geopositionangle
-The geometric (ignoring pixel values) angle of the semi-major axis
-with the first FITS axis in degrees.
+The geometric (ignoring pixel values) angle of the semi-major axis with the 
first FITS axis in degrees.
 
 @end table
 
@@ -16328,87 +14922,54 @@ with the first FITS axis in degrees.
 @node MakeCatalog output,  , MakeCatalog measurements, Invoking astmkcatalog
 @subsubsection MakeCatalog output
 
-After it has completed all the requested measurements (see @ref{MakeCatalog
-measurements}), MakeCatalog will store its measurements in table(s). If an
-output filename is given (see @option{--output} in @ref{Input output
-options}), the format of the table will be deduced from the name. When it
-isn't given, the input name will be appended with a @file{_cat} suffix (see
-@ref{Automatic output}) and its format will be determined from the
-@option{--tableformat} option, which is also discussed in @ref{Input output
-options}. @option{--tableformat} is also necessary when the requested
-output name is a FITS table (recall that FITS can accept ASCII and binary
-tables, see @ref{Table}).
-
-By default only a single catalog/table will be created for ``objects'',
-however, if @option{--clumpscat} is called, a secondary catalog/table will
-also be created. For more on ``objects'' and ``clumps'', see
-@ref{Segment}. In short, if you only have one set of labeled images, you
-don't have to worry about clumps (they are deactivated by default).
+After it has completed all the requested measurements (see @ref{MakeCatalog 
measurements}), MakeCatalog will store its measurements in table(s).
+If an output filename is given (see @option{--output} in @ref{Input output 
options}), the format of the table will be deduced from the name.
+When it isn't given, the input name will be appended with a @file{_cat} suffix 
(see @ref{Automatic output}) and its format will be determined from the 
@option{--tableformat} option, which is also discussed in @ref{Input output 
options}.
+@option{--tableformat} is also necessary when the requested output name is a 
FITS table (recall that FITS can accept ASCII and binary tables, see 
@ref{Table}).
+
+By default only a single catalog/table will be created for ``objects'', 
however, if @option{--clumpscat} is called, a secondary catalog/table will also 
be created.
+For more on ``objects'' and ``clumps'', see @ref{Segment}.
+In short, if you only have one set of labeled images, you don't have to worry 
about clumps (they are deactivated by default).
 
 The full list of MakeCatalog's output options are elaborated below.
 
 @table @option
 @item -C
 @itemx --clumpscat
-Do measurements on clumps and produce a second catalog (only devoted to
-clumps). When this option is given, MakeCatalog will also look for a
-secondary labeled dataset (identifying substructure) and produce a catalog
-from that. For more on the definition on ``clumps'', see @ref{Segment}.
-
-When the output is a FITS file, the objects and clumps catalogs/tables will
-be stored as multiple extensions of one FITS file. You can use @ref{Table}
-to inspect the column meta-data and contents in this case. However, in
-plain text format (see @ref{Gnuastro text table format}), it is only
-possible to keep one table per file. Therefore, if the output is a text
-file, two output files will be created, ending in @file{_o.txt} (for
-objects) and @file{_c.txt} (for clumps).
+Do measurements on clumps and produce a second catalog (only devoted to 
clumps).
+When this option is given, MakeCatalog will also look for a secondary labeled 
dataset (identifying substructure) and produce a catalog from that.
+For more on the definition on ``clumps'', see @ref{Segment}.
+
+When the output is a FITS file, the objects and clumps catalogs/tables will be 
stored as multiple extensions of one FITS file.
+You can use @ref{Table} to inspect the column meta-data and contents in this 
case.
+However, in plain text format (see @ref{Gnuastro text table format}), it is 
only possible to keep one table per file.
+Therefore, if the output is a text file, two output files will be created, 
ending in @file{_o.txt} (for objects) and @file{_c.txt} (for clumps).
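+
+For example, the sketch below (with a hypothetical input name and illustrative measurement columns) would produce the two plain-text catalogs described above:
+
+@example
+$ astmkcatalog seg.fits --clumpscat --ids --magnitude \
+               --tableformat=txt
+@end example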
 
 @item --noclumpsort
-Don't sort the clumps catalog based on object ID (only relevant with
-@option{--clumpscat}). This option will benefit the
-performance@footnote{The performance boost due to @option{--noclumpsort}
-can only be felt when there are a huge number of objects. Therefore, by
-default the output is sorted to avoid miss-understandings or bugs in the
-user's scripts when the user forgets to sort the outputs.} of MakeCatalog
-when it is run on multiple threads @emph{and} the position of the rows in
-the clumps catalog is irrelevant (for example you just want the
-number-counts).
-
-MakeCatalog does all its measurements on each @emph{object} independently
-and in parallel. As a result, while it is writing the measurements on each
-object's clumps, it doesn't know how many clumps there were in previous
-objects. Each thread will just fetch the first available row and write the
-information of clumps (in order) starting from that row. After all the
-measurements are done, by default (when this option isn't called),
-MakeCatalog will reorder/permute the clumps catalog to have both the object
-and clump ID in an ascending order.
-
-If you would like to order the catalog later (when its a plain text file),
-you can run the following command to sort the rows by object ID (and clump
-ID within each object), assuming they are respectively the first and second
-columns:
+Don't sort the clumps catalog based on object ID (only relevant with 
@option{--clumpscat}).
+This option will benefit the performance@footnote{The performance boost due to @option{--noclumpsort} can only be felt when there are a huge number of objects.
+Therefore, by default the output is sorted to avoid misunderstandings or bugs in the user's scripts when the user forgets to sort the outputs.} of MakeCatalog when it is run on multiple threads @emph{and} the position of the rows in the clumps catalog is irrelevant (for example you just want the number-counts).
+
+MakeCatalog does all its measurements on each @emph{object} independently and 
in parallel.
+As a result, while it is writing the measurements on each object's clumps, it 
doesn't know how many clumps there were in previous objects.
+Each thread will just fetch the first available row and write the information 
of clumps (in order) starting from that row.
+After all the measurements are done, by default (when this option isn't 
called), MakeCatalog will reorder/permute the clumps catalog to have both the 
object and clump ID in an ascending order.
+
+If you would like to order the catalog later (when it's a plain text file), you can run the following command to sort the rows by object ID (and clump ID within each object), assuming they are respectively the first and second columns:
 
 @example
 $ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
 @end example
 
 @item --sfmagnsigma=FLT
-The median standard deviation (from a @command{MEDSTD} keyword in the Sky
-standard deviation image) will be multiplied by the value to this option
-and its magnitude will be reported in the comments of the output
-catalog. This value is a per-pixel value, not per object/clump and is not
-found over an area or aperture, like the common @mymath{5\sigma} values
-that are commonly reported as a measure of depth or the upper-limit
-measurements (see @ref{Quantifying measurement limits}).
+The median standard deviation (from a @command{MEDSTD} keyword in the Sky standard deviation image) will be multiplied by the value given to this option and its magnitude will be reported in the comments of the output catalog.
+This value is per-pixel, not per object/clump, and is not measured over an area or aperture like the @mymath{5\sigma} values that are commonly reported as a measure of depth or the upper-limit measurements (see @ref{Quantifying measurement limits}).
 
 @item --sfmagarea=FLT
-Area (in arcseconds squared) to convert the per-pixel estimation of
-@option{--sfmagnsigma} in the comments section of the output tables. Note
-that this is just a unit conversion using the World Coordinate System (WCS)
-information in the input's header. It does not actually do any measurements
-on this area. For random measurements on any area, please use the
-upper-limit columns of MakeCatalog (see the discussion on upper-limit
-measurements in @ref{Quantifying measurement limits}).
+Area (in arcseconds squared) used to convert the per-pixel estimate of @option{--sfmagnsigma} into a value reported in the comments section of the output tables.
+Note that this is just a unit conversion using the World Coordinate System 
(WCS) information in the input's header.
+It does not actually do any measurements on this area.
+For random measurements on any area, please use the upper-limit columns of 
MakeCatalog (see the discussion on upper-limit measurements in @ref{Quantifying 
measurement limits}).
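+
+For example, the sketch below (the file name and measurement columns are only illustrative) would report the @mymath{5\sigma} magnitude over an area of 100 arcseconds squared in the output's comments:
+
+@example
+$ astmkcatalog seg.fits --ids --magnitude \
+               --sfmagnsigma=5 --sfmagarea=100
+@end example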
 @end table
 
 
@@ -16417,17 +14978,13 @@ measurements in @ref{Quantifying measurement limits}).
 @node Match,  , MakeCatalog, Data analysis
 @section Match
 
-Data can come come from different telescopes, filters, software and even
-different configurations for a single software. As a result, one of the
-primary things to do after generating catalogs from each of these sources
-(for example with @ref{MakeCatalog}), is to find which sources in one
-catalog correspond to which in the other(s). In other words, to `match' the
-two catalogs with each other.
+Data can come from different telescopes, filters, software and even different configurations for a single software.
+As a result, one of the primary things to do after generating catalogs from 
each of these sources (for example with @ref{MakeCatalog}), is to find which 
sources in one catalog correspond to which in the other(s).
+In other words, to `match' the two catalogs with each other.
 
-Gnuastro's Match program is in charge of such operations. The nearest
-objects in the two catalogs, within the given aperture, will be found and
-given as output. The aperture can be a circle or an ellipse with any
-orientation.
+Gnuastro's Match program is in charge of such operations.
+The nearest objects in the two catalogs, within the given aperture, will be 
found and given as output.
+The aperture can be a circle or an ellipse with any orientation.
 
 @menu
 * Invoking astmatch::           Inputs, outputs and options of Match
@@ -16436,9 +14993,8 @@ orientation.
 @node Invoking astmatch,  , Match, Match
 @subsection Invoking Match
 
-When given two catalogs, Match finds the rows that are nearest to each
-other within an input aperture. The executable name is @file{astmatch} with
-the following general template
+When given two catalogs, Match finds the rows that are nearest to each other 
within an input aperture.
+The executable name is @file{astmatch} with the following general template
 
 @example
 $ astmatch [OPTION ...] input-1 input-2
@@ -16475,205 +15031,149 @@ $ astmatch --ccol1=RA,DEC --ccol2=RA_D,DEC_D 
--aperture=0.5/3600  \
            in1.fits in2.fits
 @end example
 
-Match will find the rows that are nearest to each other in two catalogs
-(given some coordinate columns). Therefore two catalogs are necessary for
-input. However, they don't necessarily have to be files: 1) the first
-catalog can also come from the standard input (for example a pipe, see
-@ref{Standard input}); 2) when only one point is needed, you can use the
-@option{--coord} option to avoid creating a file for the second
-catalog. When the inputs are files, they can be plain text tables or FITS
-tables, for more see @ref{Tables}.
-
-Match follows the same basic behavior of all Gnuastro programs as fully
-described in @ref{Common program behavior}. If the first input is a FITS
-file, the common @option{--hdu} option (see @ref{Input output options})
-should be used to identify the extension. When the second input is FITS,
-the extension must be specified with @option{--hdu2}.
-
-When @option{--quiet} is not called, Match will print the number of matches
-found in standard output (on the command-line). When matches are found, by
-default, the output file(s) will be the re-arranged input tables such that
-the rows match each other: both output tables will have the same number of
-rows which are matched with each other. If @option{--outcols} is called,
-the output is a single table with rows chosen from either of the two inputs
-in any order. If the @option{--logasoutput} option is called, the output
-will be a single table with the contents of the log file, see below. If no
-matches are found, the columns of the output table(s) will have zero rows
-(with proper meta-data).
-
-If no output file name is given with the @option{--output} option, then
-automatic output @ref{Automatic output} will be used to determine the
-output name(s). Depending on @option{--tableformat} (see @ref{Input output
-options}), the output will then be a (possibly multi-extension) FITS file
-or (possibly two) plain text file(s). When the output is a FITS file, the
-default re-arranged inputs will be two extensions of the output FITS
-file. With @option{--outcols} and @option{--logasoutput}, the FITS output
-will be a single table (in one extension).
-
-When the @option{--log} option is called (see @ref{Operating mode
-options}), and there was a match, Match will also create a file named
-@file{astmatch.fits} (or @file{astmatch.txt}, depending on
-@option{--tableformat}, see @ref{Input output options}) in the directory it
-is run in. This log table will have three columns. The first and second
-columns show the matching row/record number (counting from 1) of the first
-and second input catalogs respectively. The third column is the distance
-between the two matched positions. The units of the distance are the same
-as the given coordinates (given the possible ellipticity, see description
-of @option{--aperture} below). When @option{--logasoutput} is called, no
-log file (with a fixed name) will be created. In this case, the output file
-(possibly given by the @option{--output} option) will have the contents of
-this log file.
+Match will find the rows that are nearest to each other in two catalogs (given 
some coordinate columns).
+Therefore two catalogs are necessary for input.
+However, they don't necessarily have to be files:
+1) the first catalog can also come from the standard input (for example a 
pipe, see @ref{Standard input});
+2) when only one point is needed, you can use the @option{--coord} option to 
avoid creating a file for the second catalog.
+When the inputs are files, they can be plain text tables or FITS tables, for 
more see @ref{Tables}.
+
+Match follows the same basic behavior of all Gnuastro programs as fully 
described in @ref{Common program behavior}.
+If the first input is a FITS file, the common @option{--hdu} option (see 
@ref{Input output options}) should be used to identify the extension.
+When the second input is FITS, the extension must be specified with 
@option{--hdu2}.
+
+When @option{--quiet} is not called, Match will print the number of matches 
found in standard output (on the command-line).
+When matches are found, by default, the output file(s) will be the re-arranged 
input tables such that the rows match each other: both output tables will have 
the same number of rows which are matched with each other.
+If @option{--outcols} is called, the output is a single table with rows chosen 
from either of the two inputs in any order.
+If the @option{--logasoutput} option is called, the output will be a single 
table with the contents of the log file, see below.
+If no matches are found, the columns of the output table(s) will have zero 
rows (with proper meta-data).
+
+If no output file name is given with the @option{--output} option, then automatic output (see @ref{Automatic output}) will be used to determine the output name(s).
+Depending on @option{--tableformat} (see @ref{Input output options}), the 
output will then be a (possibly multi-extension) FITS file or (possibly two) 
plain text file(s).
+When the output is a FITS file, the default re-arranged inputs will be two 
extensions of the output FITS file.
+With @option{--outcols} and @option{--logasoutput}, the FITS output will be a 
single table (in one extension).
+
+When the @option{--log} option is called (see @ref{Operating mode options}), 
and there was a match, Match will also create a file named @file{astmatch.fits} 
(or @file{astmatch.txt}, depending on @option{--tableformat}, see @ref{Input 
output options}) in the directory it is run in.
+This log table will have three columns.
+The first and second columns show the matching row/record number (counting 
from 1) of the first and second input catalogs respectively.
+The third column is the distance between the two matched positions.
+The units of the distance are the same as the given coordinates (given the 
possible ellipticity, see description of @option{--aperture} below).
+When @option{--logasoutput} is called, no log file (with a fixed name) will be 
created.
+In this case, the output file (possibly given by the @option{--output} option) 
will have the contents of this log file.
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: As described above, when
-@option{--logasoutput} is not called, the Log file has a fixed name for all
-calls to Match. Therefore if a separate log is requested in two
-simultaneous calls to Match in the same directory, Match will try to write
-to the same file. This will cause problems like unreasonable log file,
-undefined behavior, or a crash.
+@strong{@option{--log} isn't thread-safe}: As described above, when 
@option{--logasoutput} is not called, the Log file has a fixed name for all 
calls to Match.
+Therefore if a separate log is requested in two simultaneous calls to Match in 
the same directory, Match will try to write to the same file.
+This will cause problems like an unreasonable log file, undefined behavior, or a crash.
 @end cartouche
 
 @table @option
 @item -H STR
 @itemx --hdu2=STR
-The extension/HDU of the second input if it is a FITS file. When it isn't a
-FITS file, this option's value is ignored. For the first input, the common
-option @option{--hdu} must be used.
+The extension/HDU of the second input if it is a FITS file.
+When it isn't a FITS file, this option's value is ignored.
+For the first input, the common option @option{--hdu} must be used.
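+
+For example, if both inputs are multi-extension FITS files, a call might look like the sketch below (the file names and extension numbers are only hypothetical):
+
+@example
+$ astmatch in1.fits in2.fits --hdu=1 --hdu2=2 \
+           --ccol1=RA,DEC --ccol2=RA,DEC --aperture=1/3600
+@end example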
 
 @item --outcols=STR
-Columns (from both inputs) to write into a single matched table output. The
-value to @code{--outcols} must be a comma-separated list of strings. The
-first character of each string specifies the input catalog: @option{a} for
-the first and @option{b} for the second. The rest of the characters of the
-string will be directly used to identify the proper column(s) in the
-respective table. See @ref{Selecting table columns} for how columns can be
-specified in Gnuastro.
+Columns (from both inputs) to write into a single matched table output.
+The value to @code{--outcols} must be a comma-separated list of strings.
+The first character of each string specifies the input catalog: @option{a} for 
the first and @option{b} for the second.
+The rest of the characters of the string will be directly used to identify the 
proper column(s) in the respective table.
+See @ref{Selecting table columns} for how columns can be specified in Gnuastro.
 
-For example the output of @option{--outcols=a1,bRA,bDEC} will have three
-columns: the first column of the first input, along with the @option{RA}
-and @option{DEC} columns of the second input.
+For example the output of @option{--outcols=a1,bRA,bDEC} will have three 
columns: the first column of the first input, along with the @option{RA} and 
@option{DEC} columns of the second input.
 
-If the string after @option{a} or @option{b} is @option{_all}, then all the
-columns of the respective input file will be written in the output. For
-example the command below will print all the input columns from the first
-catalog along with the 5th column from the second:
+If the string after @option{a} or @option{b} is @option{_all}, then all the 
columns of the respective input file will be written in the output.
+For example the command below will print all the input columns from the first 
catalog along with the 5th column from the second:
 
 @example
 $ astmatch a.fits b.fits --outcols=a_all,b5
 @end example
 
-@code{_all} can be used multiple times, possibly on both inputs. Tip: if an
-input's column is called @code{_all} (an unlikely name!) and you don't want
-all the columns from that table the output, use its column number to avoid
-confusion.
+@code{_all} can be used multiple times, possibly on both inputs.
+Tip: if an input's column is called @code{_all} (an unlikely name!) and you don't want all the columns from that table in the output, use its column number to avoid confusion.
 
-Another example is given in the one-line examples above. Compared to the
-default case (where two tables with all their columns) are saved
-separately, using this option is much faster: it will only read and
-re-arrange the necessary columns and it will write a single output
-table. Combined with regular expressions in large tables, this can be a
-very powerful and convenient way to merge various tables into one.
+Another example is given in the one-line examples above.
+Compared to the default case (where two tables with all their columns are saved separately), using this option is much faster: it will only read and re-arrange the necessary columns and it will write a single output table.
+Combined with regular expressions in large tables, this can be a very powerful 
and convenient way to merge various tables into one.
 
-When @option{--coord} is given, no second catalog will be read. The second
-catalog will be created internally based on the values given to
-@option{--coord}. So column names aren't defined and you can only request
-integer column numbers that are less than the number of coordinates given
-to @option{--coord}. For example if you want to find the row matching RA of
-1.2345 and Dec of 6.7890, then you should use
-@option{--coord=1.2345,6.7890}. But when using @option{--outcols}, you
-can't give @code{bRA}, or @code{b25}.
+When @option{--coord} is given, no second catalog will be read.
+The second catalog will be created internally based on the values given to 
@option{--coord}.
+So column names aren't defined and you can only request integer column numbers 
that are less than the number of coordinates given to @option{--coord}.
+For example if you want to find the row matching RA of 1.2345 and Dec of 
6.7890, then you should use @option{--coord=1.2345,6.7890}.
+But when using @option{--outcols}, you can't give @code{bRA}, or @code{b25}.
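+
+For example, the sketch below (with a hypothetical catalog name) would print all the columns of the matching row of the first catalog, along with the first coordinate given to @option{--coord}:
+
+@example
+$ astmatch catalog.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
+           --aperture=1/3600 --outcols=a_all,b1
+@end example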
 
 @item -l
 @itemx --logasoutput
-The output file will have the contents of the log file: indexes in the two
-catalogs that match with each other along with their distance. See
-description above. When this option is called, a log file called
-@file{astmatch.txt} will not be created. With this option, the default
-output behavior (two tables containing the re-arranged inputs) will be
+The output file will have the contents of the log file: indexes in the two 
catalogs that match with each other along with their distance.
+See description above.
+When this option is called, a log file called @file{astmatch.txt} will not be 
created.
+With this option, the default output behavior (two tables containing the re-arranged inputs) will be disabled.
 
 @item --notmatched
-Write the non-matching rows into the outputs, not the matched ones. Note
-that with this option, the two output tables will not necessarily have the
-same number of rows. Therefore, this option cannot be called with
-@option{--outcols}. @option{--outcols} prints mixed columns from both
-inputs, so they must all have the same number of elements and must
-correspond to each other.
+Write the non-matching rows into the outputs, not the matched ones.
+Note that with this option, the two output tables will not necessarily have 
the same number of rows.
+Therefore, this option cannot be called with @option{--outcols}.
+@option{--outcols} prints mixed columns from both inputs, so they must all 
have the same number of elements and must correspond to each other.
 
 @item -c INT/STR[,INT/STR]
 @itemx --ccol1=INT/STR[,INT/STR]
-The coordinate columns of the first input. The number of dimensions for the
-match is determined by the number of comma-separated values given to this
-option. The values can be the column number (counting from 1), exact column
-name or a regular expression. For more, see @ref{Selecting table
-columns}. See the one-line examples above for some usages of this option.
+The coordinate columns of the first input.
+The number of dimensions for the match is determined by the number of 
comma-separated values given to this option.
+The values can be the column number (counting from 1), exact column name or a 
regular expression.
+For more, see @ref{Selecting table columns}.
+See the one-line examples above for some usages of this option.
 
 @item -C INT/STR[,INT/STR]
 @itemx --ccol2=INT/STR[,INT/STR]
-The coordinate columns of the second input. See the example in
-@option{--ccol1} for more.
+The coordinate columns of the second input.
+See the example in @option{--ccol1} for more.
 
 @item -d FLT[,FLT]
 @itemx --coord=FLT[,FLT]
-Manually specify the coordinates to match against the given catalog. With
-this option, Match will not look for a second input file/table and will
-directly use the coordinates given to this option.
-
-When this option is called, the output changes in the following ways: 1)
-when @option{--outcols} is specified, for the second input, it can only
-accept integer numbers that are less than the number of values given to
-this option, see description of that option for more. 2) By default (when
-@option{--outcols} isn't used), only the matching row of the first table
-will be output (a single file), not two separate files (one for each
-table).
-
-This option is good when you have a (large) catalog and only want to match
-a single coordinate to it (for example to find the nearest catalog entry to
-your desired point). With this option, you can write the coordinates on the
-command-line and thus avoid the need to make a single-row file.
+Manually specify the coordinates to match against the given catalog.
+With this option, Match will not look for a second input file/table and will 
directly use the coordinates given to this option.
+
+When this option is called, the output changes in the following ways:
+1) when @option{--outcols} is specified, for the second input, it can only 
accept integer numbers that are less than the number of values given to this 
option, see description of that option for more.
+2) By default (when @option{--outcols} isn't used), only the matching row of 
the first table will be output (a single file), not two separate files (one for 
each table).
+
+This option is good when you have a (large) catalog and only want to match a 
single coordinate to it (for example to find the nearest catalog entry to your 
desired point).
+With this option, you can write the coordinates on the command-line and thus 
avoid the need to make a single-row file.
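+
+For example, with a hypothetical catalog name (and re-using the coordinates above), the sketch below would write the single catalog row nearest to the given point:
+
+@example
+$ astmatch catalog.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
+           --aperture=1/3600
+@end example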
 
 @item -a FLT[,FLT[,FLT]]
 @itemx --aperture=FLT[,FLT[,FLT]]
-Parameters of the aperture for matching. The values given to this option
-can be fractions, for example when the position columns are in units of
-degrees, @option{1/3600} can be used to ask for one arcsecond. The
-interpretation of the values depends on the requested dimensions
-(determined from @option{--ccol1} and @code{--ccol2}) and how many values
-are given to this option.
+Parameters of the aperture for matching.
+The values given to this option can be fractions.
+For example, when the position columns are in units of degrees, @option{1/3600} can be used to ask for one arcsecond.
+The interpretation of the values depends on the requested dimensions (determined from @option{--ccol1} and @option{--ccol2}) and how many values are given to this option.
 
 @table @asis
 @item 1D match
-The aperture/interval can only take one value: half of the interval around
-each point (maximum distance from each point).
+The aperture/interval can only take one value: half of the interval around 
each point (maximum distance from each point).
 
 @item 2D match
-In a 2D match, the aperture can be a circle, an ellipse aligned in the axes
-or an ellipse with a rotated major axis. To simply the usage, you can
-determine the shape based on the number of free parameters for each.
+In a 2D match, the aperture can be a circle, an ellipse aligned with the axes, or an ellipse with a rotated major axis.
+To simplify the usage, you can determine the shape based on the number of free parameters for each.
 @table @asis
 @item 1 number
-For example @option{--aperture=2}. The aperture will be a circle of the
-given radius. The value will be in the same units as the columns in
-@option{--ccol1} and @option{--ccol2}).
+For example @option{--aperture=2}.
+The aperture will be a circle of the given radius.
+The value will be in the same units as the columns in @option{--ccol1} and @option{--ccol2}.
 
 @item 2 numbers
-For example @option{--aperture=3,4e-10}. The aperture will be an ellipse
-(if the two numbers are different) with the respective value along each
-dimension. The numbers are in units of the first and second axis. In the
-example above, the semi-axis value along the first axis will be 3 (in units
-of the first coordinate) and along the second axis will be
-@mymath{4\times10^{-10}} (in units of the second coordinate). Such values
-can happen if you are comparing catalogs of a spectra for example. If more
-than one object exists in the aperture, the nearest will be found along the
-major axis as described in @ref{Defining an ellipse}.
+For example @option{--aperture=3,4e-10}.
+The aperture will be an ellipse (if the two numbers are different) with the 
respective value along each dimension.
+The numbers are in units of the first and second axis.
+In the example above, the semi-axis value along the first axis will be 3 (in 
units of the first coordinate) and along the second axis will be 
@mymath{4\times10^{-10}} (in units of the second coordinate).
+Such values can happen if you are comparing catalogs of spectra, for example.
+If more than one object exists in the aperture, the nearest will be found 
along the major axis as described in @ref{Defining an ellipse}.
 
 @item 3 numbers
-For example @option{--aperture=2,0.6,30}. The aperture will be an ellipse
-(if the second value is not 1). The first number is the semi-major axis,
-the second is the axis ratio and the third is the position angle (in
-degrees). If multiple matches are found within the ellipse, the distance
-(to find the nearest) is calculated along the major axis in the elliptical
-space, see @ref{Defining an ellipse}.
+For example @option{--aperture=2,0.6,30}.
+The aperture will be an ellipse (if the second value is not 1).
+The first number is the semi-major axis, the second is the axis ratio and the 
third is the position angle (in degrees).
+If multiple matches are found within the ellipse, the distance (to find the 
nearest) is calculated along the major axis in the elliptical space, see 
@ref{Defining an ellipse}.
 @end table
 @end table
 @end table
@@ -16695,11 +15195,8 @@ space, see @ref{Defining an ellipse}.
 
 @cindex Fitting
 @cindex Modeling
-In order to fully understand observations after initial analysis on
-the image, it is very important to compare them with the existing
-models to be able to further understand both the models and the
-data. The tools in this chapter create model galaxies and will provide
-2D fittings to be able to understand the detections.
+In order to fully understand observations after the initial analysis of the image, it is very important to compare them with existing models; this furthers our understanding of both the models and the data.
+The tools in this chapter create model galaxies and provide 2D fits to help understand the detections.
 
 @menu
 * MakeProfiles::                Making mock galaxies and stars.
@@ -16714,59 +15211,40 @@ data. The tools in this chapter create model galaxies 
and will provide
 
 @cindex Checking detection algorithms
 @pindex @r{MakeProfiles (}astmkprof@r{)}
-MakeProfiles will create mock astronomical profiles from a catalog,
-either individually or together in one output image. In data analysis,
-making a mock image can act like a calibration tool, through which you
-can test how successfully your detection technique is able to detect a
-known set of objects. There are commonly two aspects to detecting: the
-detection of the fainter parts of bright objects (which in the case of
-galaxies fade into the noise very slowly) or the complete detection of
-an over-all faint object. Making mock galaxies is the most accurate
-(and idealistic) way these two aspects of a detection algorithm can be
-tested. You also need mock profiles in fitting known functional
-profiles with observations.
-
-MakeProfiles was initially built for extra galactic studies, so
-currently the only astronomical objects it can produce are stars and
-galaxies. We welcome the simulation of any other astronomical
-object. The general outline of the steps that MakeProfiles takes are
-the following:
+MakeProfiles will create mock astronomical profiles from a catalog, either 
individually or together in one output image.
+In data analysis, making a mock image can act like a calibration tool, through 
which you can test how successfully your detection technique is able to detect 
a known set of objects.
+There are commonly two aspects to detection: the detection of the fainter parts of bright objects (which in the case of galaxies fade into the noise very slowly) or the complete detection of an overall faint object.
+Making mock galaxies is the most accurate (and idealistic) way these two 
aspects of a detection algorithm can be tested.
+You also need mock profiles when fitting known functional profiles to observations.
+
+MakeProfiles was initially built for extragalactic studies, so currently the only astronomical objects it can produce are stars and galaxies.
+We welcome the simulation of any other astronomical object.
+The general outline of the steps that MakeProfiles takes is the following (a short usage sketch follows this list):
 
 @enumerate
 
 @item
-Build the full profile out to its truncation radius in a possibly
-over-sampled array.
+Build the full profile out to its truncation radius in a possibly over-sampled 
array.
 
 @item
-Multiply all the elements by a fixed constant so its total magnitude
-equals the desired total magnitude.
+Multiply all the elements by a fixed constant so its total magnitude equals 
the desired total magnitude.
 
 @item
-If @option{--individual} is called, save the array for each profile to
-a FITS file.
+If @option{--individual} is called, save the array for each profile to a FITS 
file.
 
 @item
-If @option{--nomerged} is not called, add the overlapping pixels of
-all the created profiles to the output image and abort.
+If @option{--nomerged} is not called, add the overlapping pixels of all the 
created profiles to the output image and abort.
 
 @end enumerate
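+
+As a short usage sketch of the steps above, the call below would also save each built profile into its own FITS file (step 3) besides the merged output; the catalog name is hypothetical and other required options (for example the merged image size) are omitted for brevity:
+
+@example
+$ astmkprof --individual cat.txt
+@end example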
 
-Using input values, MakeProfiles adds the World Coordinate System
-(WCS) headers of the FITS standard to all its outputs (except PSF
-images!). For a simple test on a set of mock galaxies in one image,
-there is no need for the third step or the WCS information.
+Using input values, MakeProfiles adds the World Coordinate System (WCS) 
headers of the FITS standard to all its outputs (except PSF images!).
+For a simple test on a set of mock galaxies in one image, there is no need for 
the third step or the WCS information.
 
 @cindex Transform image
 @cindex Lensing simulations
 @cindex Image transformations
-However in complicated simulations like weak lensing simulations,
-where each galaxy undergoes various types of individual
-transformations based on their position, those transformations can be
-applied to the different individual images with other programs. After
-all the transformations are applied, using the WCS information in each
-individual profile image, they can be merged into one output image for
-convolution and adding noise.
+However, in complicated simulations like weak lensing, where each galaxy undergoes various types of individual transformations based on its position, those transformations can be applied to the different individual images with other programs.
+After all the transformations are applied, using the WCS information in each 
individual profile image, they can be merged into one output image for 
convolution and adding noise.
 
 @menu
 * Modeling basics::             Astronomical modeling basics.
@@ -16781,10 +15259,8 @@ convolution and adding noise.
 @node Modeling basics, If convolving afterwards, MakeProfiles, MakeProfiles
 @subsection Modeling basics
 
-In the subsections below, first a review of some very basic
-information and concepts behind modeling a real astronomical image is
-given. You can skip this subsection if you are already sufficiently
-familiar with these concepts.
+The subsections below first review some very basic information and concepts behind modeling a real astronomical image.
+You can skip this subsection if you are already sufficiently familiar with 
these concepts.
 
 @menu
 * Defining an ellipse::         An ellipse on a pixelated grid.
@@ -16803,64 +15279,41 @@ familiar with these concepts.
 @cindex Ellipse
 @cindex Axis ratio
 @cindex Position angle
-The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on
-an ellipse. Therefore, in this section we'll start defining an ellipse on a
-pixelated 2D surface. Labeling the major axis of an ellipse @mymath{a}, and
-its minor axis with @mymath{b}, the @emph{axis ratio} is defined as:
-@mymath{q\equiv b/a}. The major axis of an ellipse can be aligned in any
-direction, therefore the angle of the major axis with respect to the
-horizontal axis of the image is defined to be the @emph{position angle} of
-the ellipse and in this book, we show it with @mymath{\theta}.
+The PSF (see @ref{PSF}) and galaxy radial profiles are generally defined on an ellipse.
+Therefore, in this section we'll start by defining an ellipse on a pixelated 2D surface.
+Labeling the major axis of an ellipse with @mymath{a}, and its minor axis with @mymath{b}, the @emph{axis ratio} is defined as @mymath{q\equiv b/a}.
+The major axis of an ellipse can be aligned in any direction, therefore the angle of the major axis with respect to the horizontal axis of the image is defined to be the @emph{position angle} of the ellipse; in this book, we show it with @mymath{\theta}.
 
 @cindex Radial profile on ellipse
-Our aim is to put a radial profile of any functional form @mymath{f(r)}
-over an ellipse. Hence we need to associate a radius/distance to every
-point in space. Let's define the radial distance @mymath{r_{el}} as the
-distance on the major axis to the center of an ellipse which is located at
-@mymath{i_c} and @mymath{j_c} (in other words @mymath{r_{el}\equiv{a}}). We
-want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the
-image coordinate system) from the center of the ellipse with axis ratio
-@mymath{q} and position angle @mymath{\theta}. First the coordinate system
-is rotated@footnote{Do not confuse the signs of @mymath{sin} with the
-rotation matrix defined in @ref{Warping basics}. In that equation, the
-point is rotated, here the coordinates are rotated and the point is fixed.}
-by @mymath{\theta} to get the new rotated coordinates of that point
-@mymath{(i_r,j_r)}:
+Our aim is to put a radial profile of any functional form @mymath{f(r)} over 
an ellipse.
+Hence we need to associate a radius/distance to every point in space.
+Let's define the radial distance @mymath{r_{el}} as the distance on the major axis to the center of an ellipse which is located at @mymath{i_c} and @mymath{j_c} (in other words, @mymath{r_{el}\equiv{a}}).
+We want to find @mymath{r_{el}} of a point located at @mymath{(i,j)} (in the image coordinate system) from the center of the ellipse with axis ratio @mymath{q} and position angle @mymath{\theta}.
+First the coordinate system is rotated@footnote{Do not confuse the signs of @mymath{\sin} with the rotation matrix defined in @ref{Warping basics}.
+In that equation the point is rotated; here the coordinates are rotated and the point is fixed.} by @mymath{\theta} to get the new rotated coordinates of that point, @mymath{(i_r,j_r)}:
 
 @dispmath{i_r(i,j)=+(i_c-i)\cos\theta+(j_c-j)\sin\theta}
 @dispmath{j_r(i,j)=-(i_c-i)\sin\theta+(j_c-j)\cos\theta}
 
 @cindex Elliptical distance
-@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1}
-and that we defined @mymath{r_{el}\equiv{a}}. Hence, multiplying all
-elements of the ellipse definition with @mymath{r_{el}^2} we get the
-elliptical distance at this point point located:
-@mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}. To place the radial profiles
-explained below over an ellipse, @mymath{f(r_{el})} is calculated based on
-the functional radial profile desired.
+@noindent Recall that an ellipse is defined by @mymath{(i_r/a)^2+(j_r/b)^2=1} 
and that we defined @mymath{r_{el}\equiv{a}}.
+Hence, multiplying all elements of the ellipse definition by @mymath{r_{el}^2}, we get the elliptical distance at this point: @mymath{r_{el}=\sqrt{i_r^2+(j_r/q)^2}}.
+To place the radial profiles explained below over an ellipse, 
@mymath{f(r_{el})} is calculated based on the functional radial profile desired.
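+
+As a rough illustration of these relations in C (a simplified sketch under the definitions above, not Gnuastro's internal function; @code{theta} is assumed to be in radians):
+
+@example
+#include <math.h>
+
+/* Elliptical radius of pixel (i,j) for a profile centered on
+   (ic,jc), with axis ratio q and position angle theta.        */
+double
+elliptical_radius(double i, double j, double ic, double jc,
+                  double q, double theta)
+@{
+  double ir = +(ic-i)*cos(theta) + (jc-j)*sin(theta);
+  double jr = -(ic-i)*sin(theta) + (jc-j)*cos(theta);
+  return sqrt( ir*ir + (jr/q)*(jr/q) );
+@}
+@end example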
 
 @cindex Breadth first search
 @cindex Inside-out construction
 @cindex Making profiles pixel by pixel
 @cindex Pixel by pixel making of profiles
-MakeProfiles builds the profile starting from the nearest element (pixel in
-an image) in the dataset to the profile center. The profile value is
-calculated for that central pixel using monte carlo integration, see
-@ref{Sampling from a function}. The next pixel is the next nearest neighbor
-to the central pixel as defined by @mymath{r_{el}}. This process goes on
-until the profile is fully built upto the truncation radius. This is done
-fairly efficiently using a breadth first parsing
-strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}}
-which is implemented through an ordered linked list.
-
-Using this approach, we build the profile by expanding the
-circumference. Not one more extra pixel has to be checked (the calculation
-of @mymath{r_{el}} from above is not cheap in CPU terms). Another
-consequence of this strategy is that extending MakeProfiles to three
-dimensions becomes very simple: only the neighbors of each pixel have to be
-changed. Everything else after that (when the pixel index and its radial
-profile have entered the linked list) is the same, no matter the number of
-dimensions we are dealing with.
+MakeProfiles builds the profile starting from the nearest element (pixel in an image) in the dataset to the profile center.
+The profile value is calculated for that central pixel using Monte Carlo integration, see @ref{Sampling from a function}.
+The next pixel is the next nearest neighbor to the central pixel as defined by 
@mymath{r_{el}}.
+This process goes on until the profile is fully built up to the truncation radius.
+This is done fairly efficiently using a breadth first parsing 
strategy@footnote{@url{http://en.wikipedia.org/wiki/Breadth-first_search}} 
which is implemented through an ordered linked list.
+
+Using this approach, we build the profile by expanding the circumference.
+Not a single extra pixel has to be checked (the calculation of @mymath{r_{el}} above is not cheap in CPU terms).
+Another consequence of this strategy is that extending MakeProfiles to three 
dimensions becomes very simple: only the neighbors of each pixel have to be 
changed.
+Everything else after that (when the pixel index and its radial profile have 
entered the linked list) is the same, no matter the number of dimensions we are 
dealing with.
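+
+The following is a much-simplified sketch of this inside-out ordering (Gnuastro's real implementation expands over pixel neighbors through an ordered linked list; here, for illustration only, we sort every pixel of a hypothetical box by its spatial distance to the center, so the profile can be filled from the center outwards):
+
+@example
+#include <math.h>
+#include <stdlib.h>
+
+struct pix @{ size_t i, j; double r; @};
+
+static int
+pix_cmp(const void *a, const void *b)
+@{
+  double ra=((const struct pix *)a)->r, rb=((const struct pix *)b)->r;
+  return (ra>rb) - (ra<rb);
+@}
+
+/* Fill `pixs' (an array of w*h elements) with all pixel positions
+   of a w by h box, ordered by increasing distance to (ic,jc).     */
+void
+inside_out_order(struct pix *pixs, size_t w, size_t h,
+                 double ic, double jc)
+@{
+  size_t i, j, n=0;
+  for(i=0; i<w; ++i)
+    for(j=0; j<h; ++j)
+      @{
+        pixs[n].i=i;  pixs[n].j=j;
+        pixs[n].r=sqrt( (ic-i)*(ic-i) + (jc-j)*(jc-j) );
+        ++n;
+      @}
+  qsort(pixs, w*h, sizeof *pixs, pix_cmp);
+@}
+@end example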
 
 
 
@@ -16872,29 +15325,24 @@ dimensions we are dealing with.
 @cindex Diffraction limited
 @cindex Point spread function
 @cindex Spread of a point source
-Assume we have a `point' source, or a source that is far smaller
-than the maximum resolution (a pixel). When we take an image of it, it
-will `spread' over an area. To quantify that spread, we can define a
-`function'. This is how the point spread function or the PSF of an
-image is defined. This `spread' can have various causes, for example
-in ground based astronomy, due to the atmosphere. In practice we can
-never surpass the `spread' due to the diffraction of the lens
-aperture. Various other effects can also be quantified through a PSF.
-For example, the simple fact that we are sampling in a discrete space,
-namely the pixels, also produces a very small `spread' in the image.
+Assume we have a `point' source, or a source that is far smaller than the maximum resolution (a pixel).
+When we take an image of it, it will `spread' over an area.
+To quantify that spread, we can define a `function'.
+This is how the point spread function, or PSF, of an image is defined.
+This `spread' can have various causes, for example the atmosphere in ground-based astronomy.
+In practice we can never do better than the `spread' due to the diffraction of the lens aperture.
+Various other effects can also be quantified through a PSF.
+For example, the simple fact that we are sampling in a discrete space, namely the pixels, also produces a very small `spread' in the image.
 
 @cindex Blur image
 @cindex Convolution
 @cindex Image blurring
 @cindex PSF image size
-Convolution is the mathematical process by which we can apply a `spread' to
-an image, or in other words blur the image, see @ref{Convolution
-process}. The Brightness of an object should remain unchanged after
-convolution, see @ref{Flux Brightness and magnitude}. Therefore, it is
-important that the sum of all the pixels of the PSF be unity. The PSF image
-also has to have an odd number of pixels on its sides so one pixel can be
-defined as the center. In MakeProfiles, the PSF can be set by the two
-methods explained below.
+Convolution is the mathematical process by which we can apply a `spread' to an image, or in other words blur the image, see @ref{Convolution process}.
+The brightness of an object should remain unchanged after convolution, see @ref{Flux Brightness and magnitude}.
+Therefore, it is important that the sum of all the pixels of the PSF be unity.
+The PSF image also has to have an odd number of pixels on its sides so one 
pixel can be defined as the center.
+In MakeProfiles, the PSF can be set by the two methods explained below.
 
 @table @asis
 
@@ -16903,47 +15351,37 @@ methods explained below.
 @cindex PSF width
 @cindex Parametric PSFs
 @cindex Full Width at Half Maximum
-A known mathematical function is used to make the PSF. In this case,
-only the parameters to define the functions are necessary and
-MakeProfiles will make a PSF based on the given parameters for each
-function. In both cases, the center of the profile has to be exactly
-in the middle of the central pixel of the PSF (which is automatically
-done by MakeProfiles). When talking about the PSF, usually, the full
-width at half maximum or FWHM is used as a scale of the width of the
-PSF.
+A known mathematical function is used to make the PSF.
+In this case, only the parameters defining the function are necessary, and MakeProfiles will make a PSF based on the given parameters for each function.
+In both of the functions below, the center of the profile has to be exactly in the middle of the central pixel of the PSF (which is automatically done by MakeProfiles).
+When talking about the PSF, usually the full width at half maximum, or FWHM, is used as a scale of the width of the PSF.
 
 @table @cite
 @item Gaussian
 @cindex Gaussian distribution
-In the older papers, and to a lesser extent even today, some
-researchers use the 2D Gaussian function to approximate the PSF of
-ground based images. In its most general form, a Gaussian function can
-be written as:
+In older papers, and to a lesser extent even today, some researchers use the 2D Gaussian function to approximate the PSF of ground-based images.
+In its most general form, a Gaussian function can be written as:
 
 @dispmath{f(r)=a \exp \left( -(r-\mu)^2 \over 2\sigma^2 \right)+d}
 
-Since the center of the profile is pre-defined, @mymath{\mu} and
-@mymath{d} are constrained. @mymath{a} can also be found because the
-function has to be normalized. So the only important parameter for
-MakeProfiles is the @mymath{\sigma}. In the Gaussian function we have
-this relation between the FWHM and @mymath{\sigma}:
+Since the center of the profile is pre-defined, @mymath{\mu} and @mymath{d} are constrained.
+@mymath{a} can also be found because the function has to be normalized.
+So the only important parameter for MakeProfiles is @mymath{\sigma}.
+In the Gaussian function we have this relation between the FWHM and @mymath{\sigma}:
 
 @cindex Gaussian FWHM
 @dispmath{\rm{FWHM}_g=2\sqrt{2\ln{2}}\sigma \approx 2.35482\sigma}
 
 @item Moffat
 @cindex Moffat function
-The Gaussian profile is much sharper than the images taken from stars
-on photographic plates or CCDs. Therefore in 1969, Moffat proposed
-this functional form for the image of stars:
+The Gaussian profile is much sharper than the actual images of stars taken on photographic plates or CCDs.
+Therefore, in 1969, Moffat proposed this functional form for the image of stars:
 
 @dispmath{f(r)=a \left[ 1+\left( r\over \alpha \right)^2 \right]^{-\beta}}
 
 @cindex Moffat beta
-Again, @mymath{a} is constrained by the normalization, therefore two
-parameters define the shape of the Moffat function: @mymath{\alpha} and
-@mymath{\beta}. The radial parameter is @mymath{\alpha} which is related
-to the FWHM by
+Again, @mymath{a} is constrained by the normalization, therefore two parameters define the shape of the Moffat function: @mymath{\alpha} and @mymath{\beta}.
+The radial parameter is @mymath{\alpha}, which is related to the FWHM by
 
 @cindex Moffat FWHM
 @dispmath{\rm{FWHM}_m=2\alpha\sqrt{2^{1/\beta}-1}}
@@ -16951,36 +15389,29 @@ to the FWHM by
 @cindex Compare Moffat and Gaussian
 @cindex PSF, Moffat compared Gaussian
 @noindent
-Comparing with the PSF predicted from atmospheric turbulence theory
-with a Moffat function, Trujillo et al.@footnote{Trujillo, I.,
-J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects
-of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328,
-pp. 977---985.} claim that @mymath{\beta} should be 4.765. They also
-show how the Moffat PSF contains the Gaussian PSF as a limiting case
-when @mymath{\beta\to\infty}.
+Comparing a Moffat function with the PSF predicted from atmospheric turbulence theory, Trujillo et al.@footnote{
+Trujillo, I., J. A. L. Aguerri, J. Cepa, and C. M. Gutierrez (2001). ``The effects of seeing on S@'ersic profiles - II. The Moffat PSF''. In: MNRAS 328, pp. 977--985.}
+claim that @mymath{\beta} should be 4.765.
+They also show how the Moffat PSF contains the Gaussian PSF as a limiting case when @mymath{\beta\to\infty}.
 
 @end table
 
 @item An input FITS image
-An input image file can also be specified to be used as a PSF. If the
-sum of its pixels are not equal to 1, the pixels will be multiplied
-by a fraction so the sum does become 1.
+An input image file can also be specified to be used as a PSF.
+If the sum of its pixels is not equal to 1, the pixels will be multiplied by a constant so the sum becomes 1.
 @end table
 
 
-While the Gaussian is only dependent on the FWHM, the Moffat function
-is also dependent on @mymath{\beta}. Comparing these two functions
-with a fixed FWHM gives the following results:
+While the Gaussian is only dependent on the FWHM, the Moffat function also depends on @mymath{\beta}.
+Comparing these two functions with a fixed FWHM gives the following results (also see the sketch after this list):
 
 @itemize
 @item
 Within the FWHM, the functions don't have significant differences.
 @item
-For a fixed FWHM, as @mymath{\beta} increases, the Moffat function
-becomes sharper.
+For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes 
sharper.
 @item
-The Gaussian function is much sharper than the Moffat functions, even
-when @mymath{\beta} is large.
+The Gaussian function is much sharper than the Moffat functions, even when 
@mymath{\beta} is large.
 @end itemize
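+
+As a rough check of these points, the short C program below uses the two FWHM relations above to give both functions the same FWHM, and prints their peak-normalized values at a few radii (the FWHM and @mymath{\beta} values are sample assumptions; this is an illustration, not Gnuastro's implementation):
+
+@example
+#include <math.h>
+#include <stdio.h>
+
+int
+main(void)
+@{
+  double r, fwhm=5.0, beta=4.765;                  /* Assumed values. */
+  double sigma = fwhm / ( 2*sqrt(2*log(2)) );      /* Gaussian width. */
+  double alpha = fwhm / ( 2*sqrt(pow(2,1/beta)-1) ); /* Moffat width. */
+  for(r=0.0; r<=fwhm; r+=1.0)
+    printf("r=%-4.1f  gaussian=%-8.5f  moffat=%-8.5f\n", r,
+           exp( -r*r/(2*sigma*sigma) ),
+           pow( 1+(r/alpha)*(r/alpha), -beta ) );
+  return 0;
+@}
+@end example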
 
 
@@ -16991,15 +15422,10 @@ when @mymath{\beta} is large.
 
 @cindex Modeling stars
 @cindex Stars, modeling
-In MakeProfiles, stars are generally considered to be a point
-source. This is usually the case for extra galactic studies, were
-nearby stars are also in the field. Since a star is only a point
-source, we assume that it only fills one pixel prior to
-convolution. In fact, exactly for this reason, in astronomical images
-the light profiles of stars are one of the best methods to understand
-the shape of the PSF and a very large fraction of scientific research
-is preformed by assuming the shapes of stars to be the PSF of the
-image.
+In MakeProfiles, stars are generally considered to be point sources.
+This is usually the case for extragalactic studies, where nearby stars are also in the field.
+Since a star is only a point source, we assume that it only fills one pixel prior to convolution.
+In fact, exactly for this reason, in astronomical images the light profiles of stars are one of the best methods to understand the shape of the PSF, and a very large fraction of scientific research is performed by assuming the shapes of stars to be the PSF of the image.
 
 
 
@@ -17012,9 +15438,7 @@ image.
 @cindex S@'ersic profile
 @cindex Profiles, galaxies
 @cindex Generalized de Vaucouleur profile
-Today, most practitioners agree that the flux of galaxies can be
-modeled with one or a few generalized de Vaucouleur's (or S@'ersic)
-profiles.
+Today, most practitioners agree that the flux of galaxies can be modeled with 
one or a few generalized de Vaucouleur's (or S@'ersic) profiles.
 
 @dispmath{I(r) = I_e \exp \left ( -b_n \left[ \left( r \over r_e \right)^{1/n} -1 \right] \right )}
 
@@ -17025,22 +15449,12 @@ profiles.
 @cindex Radius, effective
 @cindex de Vaucouleur profile
 @cindex G@'erard de Vaucouleurs
-G@'erard de Vaucouleurs (1918-1995) was first to show in 1948 that
-this function best fits the galaxy light profiles, with the only
-difference that he held @mymath{n} fixed to a value of 4. 20 years
-later in 1968, J. L. S@'ersic showed that @mymath{n} can have a
-variety of values and does not necessarily need to be 4. This profile
-depends on the effective radius (@mymath{r_e}) which is defined as the
-radius which contains half of the profile brightness (see @ref{Profile
-magnitude}). @mymath{I_e} is the flux at the effective radius.  The
-S@'ersic index @mymath{n} is used to define the concentration of the
-profile within @mymath{r_e} and @mymath{b_n} is a constant dependent
-on @mymath{n}. MacArthur et al.@footnote{MacArthur, L. A.,
-S. Courteau, and J. A. Holtzman (2003). ``Structure of Disk-dominated
-Galaxies. I. Bulge/Disk Parameters, Simulations, and Secular
-Evolution''. In: ApJ 582, pp. 689---722.} show that for
-@mymath{n>0.35}, @mymath{b_n} can be accurately approximated using
-this equation:
+G@'erard de Vaucouleurs (1918--1995) was the first to show, in 1948, that this function best fits the galaxy light profiles, with the only difference that he held @mymath{n} fixed to a value of 4.
+Twenty years later, in 1968, J. L. S@'ersic showed that @mymath{n} can have a variety of values and does not necessarily need to be 4.
+This profile depends on the effective radius (@mymath{r_e}), which is defined as the radius containing half of the profile's brightness (see @ref{Profile magnitude}).
+@mymath{I_e} is the flux at the effective radius.
+The S@'ersic index @mymath{n} is used to define the concentration of the profile within @mymath{r_e}, and @mymath{b_n} is a constant dependent on @mymath{n}.
+MacArthur et al.@footnote{MacArthur, L. A., S. Courteau, and J. A. Holtzman (2003). ``Structure of Disk-dominated Galaxies. I. Bulge/Disk Parameters, Simulations, and Secular Evolution''. In: ApJ 582, pp. 689--722.} show that for @mymath{n>0.35}, @mymath{b_n} can be accurately approximated using this equation:
 
 @dispmath{b_n=2n - {1\over 3} + {4\over 405n} + {46\over 25515n^2} + {131\over 1148175n^3}-{2194697\over 30690717750n^4}}
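+
+For a quick numerical feel, the sketch below evaluates the S@'ersic function with the @mymath{b_n} approximation above (valid for @mymath{n>0.35}); the parameter values in @code{main} are arbitrary assumptions for illustration, not Gnuastro code:
+
+@example
+#include <math.h>
+#include <stdio.h>
+
+static double
+sersic_bn(double n)
+@{
+  return 2*n - 1.0/3 + 4/(405*n) + 46/(25515*n*n)
+         + 131/(1148175*n*n*n) - 2194697/(30690717750.0*n*n*n*n);
+@}
+
+static double                     /* I(r) for given r_e, n and I_e. */
+sersic(double r, double re, double n, double Ie)
+@{
+  return Ie * exp( -sersic_bn(n) * ( pow(r/re, 1/n) - 1 ) );
+@}
+
+int
+main(void)
+@{
+  printf("I(r=1)=%f\n", sersic(1.0, 3.0, 4.0, 1.0));
+  return 0;
+@}
+@end example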
 
@@ -17052,115 +15466,75 @@ this equation:
 @subsubsection Sampling from a function
 
 @cindex Sampling
-A pixel is the ultimate level of accuracy to gather data, we can't get
-any more accurate in one image, this is known as sampling in signal
-processing. However, the mathematical profiles which describe our
-models have infinite accuracy. Over a large fraction of the area of
-astrophysically interesting profiles (for example galaxies or PSFs),
-the variation of the profile over the area of one pixel is not too
-significant. In such cases, the elliptical radius (@mymath{r_{el}} of
-the center of the pixel can be assigned as the final value of the
-pixel, see @ref{Defining an ellipse}).
+A pixel is the ultimate level of accuracy to gather data; we can't get any more accurate in one image.
+This is known as sampling in signal processing.
+However, the mathematical profiles which describe our models have infinite 
accuracy.
+Over a large fraction of the area of astrophysically interesting profiles (for 
example galaxies or PSFs), the variation of the profile over the area of one 
pixel is not too significant.
+In such cases, the elliptical radius (@mymath{r_{el}}, see @ref{Defining an ellipse}) of the center of the pixel can be assigned as the final value of the pixel.
 
 @cindex Integration over pixel
 @cindex Gradient over pixel area
 @cindex Function gradient over pixel area
-As you approach their center, some galaxies become very sharp (their
-value significantly changes over one pixel's area). This sharpness
-increases with smaller effective radius and larger S@'ersic values.
-Thus rendering the central value extremely inaccurate. The first
-method that comes to mind for solving this problem is integration. The
-functional form of the profile can be integrated over the pixel area
-in a 2D integration process. However, unfortunately numerical
-integration techniques also have their limitations and when such sharp
-profiles are needed they can become extremely inaccurate.
+As you approach their center, some galaxies become very sharp (their value significantly changes over one pixel's area).
+This sharpness increases with smaller effective radius and larger S@'ersic values, thus rendering the central value extremely inaccurate.
+The first method that comes to mind for solving this problem is integration.
+The functional form of the profile can be integrated over the pixel area in a 
2D integration process.
+Unfortunately, numerical integration techniques also have their limitations; when such sharp profiles are needed, they can become extremely inaccurate.
 
 @cindex Monte carlo integration
-The most accurate method of sampling a continuous profile on a
-discrete space is by choosing a large number of random points within
-the boundaries of the pixel and taking their average value (or Monte
-Carlo integration). This is also, generally speaking, what happens in
-practice with the photons on the pixel. The number of random points
-can be set with @option{--numrandom}.
-
-Unfortunately, repeating this Monte Carlo process would be extremely time
-and CPU consuming if it is to be applied to every pixel. In order to not
-loose too much accuracy, in MakeProfiles, the profile is built using both
-methods explained below. The building of the profile begins from its
-central pixel and continues (radially) outwards. Monte Carlo integration is
-first applied (which yields @mymath{F_r}), then the central pixel value
-(@mymath{F_c}) is calculated on the same pixel. If the fractional
-difference (@mymath{|F_r-F_c|/F_r}) is lower than a given tolerance level
-(specified with @option{--tolerance}) MakeProfiles will stop using Monte
-Carlo integration and only use the central pixel value.
+The most accurate method of sampling a continuous profile on a discrete space 
is by choosing a large number of random points within the boundaries of the 
pixel and taking their average value (or Monte Carlo integration).
+This is also, generally speaking, what happens in practice with the photons on 
the pixel.
+The number of random points can be set with @option{--numrandom}.
+
+Unfortunately, repeating this Monte Carlo process would be extremely time- and CPU-consuming if it were to be applied to every pixel.
+In order not to lose too much accuracy, in MakeProfiles the profile is built using both methods explained below.
+The building of the profile begins from its central pixel and continues 
(radially) outwards.
+Monte Carlo integration is first applied (which yields @mymath{F_r}), then the 
central pixel value (@mymath{F_c}) is calculated on the same pixel.
+If the fractional difference (@mymath{|F_r-F_c|/F_r}) is lower than a given tolerance level (specified with @option{--tolerance}), MakeProfiles will stop using Monte Carlo integration and only use the central pixel value.
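+
+The fragment below is a bare-bones sketch of the Monte Carlo step for one pixel, reusing the hypothetical @code{elliptical_radius} sketch from @ref{Defining an ellipse}; note that it is only an illustration (for example, Gnuastro actually uses the GNU Scientific Library's random number generators, not @code{rand}):
+
+@example
+#include <stdlib.h>
+
+/* From the sketch in `Defining an ellipse' above. */
+double elliptical_radius(double, double, double, double,
+                         double, double);
+
+/* Average of `profile' over the unit area of pixel (i,j), using
+   `numrandom' uniform random points (like --numrandom).         */
+double
+pixel_montecarlo(double (*profile)(double), double i, double j,
+                 double ic, double jc, double q, double theta,
+                 size_t numrandom)
+@{
+  size_t k;
+  double ri, rj, sum=0.0;
+  for(k=0; k<numrandom; ++k)
+    @{
+      ri = i - 0.5 + (double)rand()/RAND_MAX;
+      rj = j - 0.5 + (double)rand()/RAND_MAX;
+      sum += profile( elliptical_radius(ri, rj, ic, jc, q, theta) );
+    @}
+  return sum/numrandom;
+@}
+@end example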
 
 @cindex Inside-out construction
-The ordering of the pixels in this inside-out construction is based on
-@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see
-@ref{Defining an ellipse}. When the axis ratios are large (near one) this
-is fine. But when they are small and the object is highly elliptical, it
-might seem more reasonable to follow @mymath{r_{el}} not @mymath{r}. The
-problem is that the gradient is stronger in pixels with smaller @mymath{r}
-(and larger @mymath{r_{el}}) than those with smaller @mymath{r_{el}}. In
-other words, the gradient is strongest along the minor axis. So if the next
-pixel is chosen based on @mymath{r_{el}}, the tolerance level will be
-reached sooner and lots of pixels with large fractional differences will be
-missed.
-
-Monte Carlo integration uses a random number of points. Thus,
-every time you run it, by default, you will get a different
-distribution of points to sample within the pixel. In the case of
-large profiles, this will result in a slight difference of the pixels
-which use Monte Carlo integration each time MakeProfiles is run. To
-have a deterministic result, you have to fix the random number
-generator properties which is used to build the random
-distribution. This can be done by setting the @code{GSL_RNG_TYPE} and
-@code{GSL_RNG_SEED} environment variables and calling MakeProfiles
-with the @option{--envseed} option. To learn more about the process of
-generating random numbers, see @ref{Generating random numbers}.
+The ordering of the pixels in this inside-out construction is based on 
@mymath{r=\sqrt{(i_c-i)^2+(j_c-j)^2}}, not @mymath{r_{el}}, see @ref{Defining 
an ellipse}.
+When the axis ratios are large (near one) this is fine.
+But when they are small and the object is highly elliptical, it might seem 
more reasonable to follow @mymath{r_{el}} not @mymath{r}.
+The problem is that the gradient is stronger in pixels with smaller @mymath{r} (and larger @mymath{r_{el}}) than in those with smaller @mymath{r_{el}} (and larger @mymath{r}).
+In other words, the gradient is strongest along the minor axis.
+So if the next pixel is chosen based on @mymath{r_{el}}, the tolerance level 
will be reached sooner and lots of pixels with large fractional differences 
will be missed.
+
+Monte Carlo integration uses a set of randomly positioned points.
+Thus, every time you run it, by default, you will get a different distribution 
of points to sample within the pixel.
+In the case of large profiles, this will result in a slight difference of the 
pixels which use Monte Carlo integration each time MakeProfiles is run.
+To have a deterministic result, you have to fix the properties of the random number generator that is used to build the random distribution.
+This can be done by setting the @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} 
environment variables and calling MakeProfiles with the @option{--envseed} 
option.
+To learn more about the process of generating random numbers, see 
@ref{Generating random numbers}.
 
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-The seed values are fixed for every profile: with @option{--envseed},
-all the profiles have the same seed and without it, each will get a
-different seed using the system clock (which is accurate to within one
-microsecond). The same seed will be used to generate a random number
-for all the sub-pixel positions of all the profiles. So in the former,
-the sub-pixel points checked for all the pixels undergoing Monte carlo
-integration in all profiles will be identical. In other words, the
-sub-pixel points in the first (closest to the center) pixel of all the
-profiles will be identical with each other. All the second pixels
-studied for all the profiles will also receive an identical (different
-from the first pixel) set of sub-pixel points and so on. As long as
-the number of random points used is large enough or the profiles are
-not identical, this should not cause any systematic bias.
+The seed values are fixed for every profile: with @option{--envseed}, all the 
profiles have the same seed and without it, each will get a different seed 
using the system clock (which is accurate to within one microsecond).
+The same seed will be used to generate a random number for all the sub-pixel 
positions of all the profiles.
+So in the former, the sub-pixel points checked for all the pixels undergoing Monte Carlo integration in all profiles will be identical.
+In other words, the sub-pixel points in the first (closest to the center) 
pixel of all the profiles will be identical with each other.
+All the second pixels studied for all the profiles will also receive an 
identical (different from the first pixel) set of sub-pixel points and so on.
+As long as the number of random points used is large enough or the profiles 
are not identical, this should not cause any systematic bias.
 
 
 @node Oversampling,  , Sampling from a function, Modeling basics
 @subsubsection Oversampling
 
 @cindex Oversampling
-The steps explained in @ref{Sampling from a function} do give an
-accurate representation of a profile prior to convolution. However, in
-an actual observation, the image is first convolved with or blurred by
-the atmospheric and instrument PSF in a continuous space and then it
-is sampled on the discrete pixels of the camera.
+The steps explained in @ref{Sampling from a function} do give an accurate 
representation of a profile prior to convolution.
+However, in an actual observation, the image is first convolved with or 
blurred by the atmospheric and instrument PSF in a continuous space and then it 
is sampled on the discrete pixels of the camera.
 
 @cindex PSF over-sample
-In order to more accurately simulate this process, the unconvolved image
-and the PSF are created on a finer pixel grid. In other words, the output
-image is a certain odd-integer multiple of the desired size, we can call
-this `oversampling'. The user can specify this multiple as a command-line
-option. The reason this has to be an odd number is that the PSF has to be
-centered on the center of its image. An image with an even number of pixels
-on each side does not have a central pixel.
+In order to more accurately simulate this process, the unconvolved image and 
the PSF are created on a finer pixel grid.
+In other words, the output image is a certain odd-integer multiple of the desired size; we can call this `oversampling'.
+The user can specify this multiple as a command-line option.
+The reason this has to be an odd number is that the PSF has to be centered on 
the center of its image.
+An image with an even number of pixels on each side does not have a central 
pixel.
 
-The image can then be convolved with the PSF (which should also be
-oversampled on the same scale). Finally, image can be sub-sampled to
-get to the initial desired pixel size of the output image. After this,
-mock noise can be added as explained in the next section. This is
-because unlike the PSF, the noise occurs in each output pixel, not on
-a continuous space like all the prior steps.
+The image can then be convolved with the PSF (which should also be oversampled 
on the same scale).
+Finally, the image can be sub-sampled to get to the initially desired pixel size of the output image.
+After this, mock noise can be added as explained in the next section.
+This is because unlike the PSF, the noise occurs in each output pixel, not on 
a continuous space like all the prior steps.
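+
+To make these steps concrete with assumed numbers: with an oversampling of 3 and a desired output of @mymath{200\times200} pixels, the profiles and the PSF are built on a @mymath{600\times600} grid; after convolution on that grid, each @mymath{3\times3} block of fine pixels is combined back into one pixel of the final @mymath{200\times200} image, and only then is noise added.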
 
 
 
@@ -17168,24 +15542,15 @@ a continuous space like all the prior steps.
 @node If convolving afterwards, Flux Brightness and magnitude, Modeling basics, MakeProfiles
 @subsection If convolving afterwards
 
-In case you want to convolve the image later with a given point spread
-function, make sure to use a larger image size. After convolution, the
-profiles become larger and a profile that is normally completely
-outside of the image might fall within it.
+In case you want to convolve the image later with a given point spread 
function, make sure to use a larger image size.
+After convolution, the profiles become larger and a profile that is normally 
completely outside of the image might fall within it.
 
-On one axis, if you want your final (convolved) image to be @mymath{m}
-pixels and your PSF is @mymath{2n+1} pixels wide, then when calling
-MakeProfiles, set the axis size to @mymath{m+2n}, not @mymath{m}. You
-also have to shift all the pixel positions of the profile centers on
-the that axis by @mymath{n} pixels to the positive.
+On one axis, if you want your final (convolved) image to be @mymath{m} pixels 
and your PSF is @mymath{2n+1} pixels wide, then when calling MakeProfiles, set 
the axis size to @mymath{m+2n}, not @mymath{m}.
+You also have to shift all the pixel positions of the profile centers on that axis by @mymath{n} pixels in the positive direction.
 
-After convolution, you can crop the outer @mymath{n} pixels with the
-section crop box specification of Crop: @option{--section=n:*-n,n:*-n}
-assuming your PSF is a square, see @ref{Crop section syntax}. This will
-also remove all discrete Fourier transform artifacts (blurred sides) from
-the final image. To facilitate this shift, MakeProfiles has the options
-@option{--xshift}, @option{--yshift} and @option{--prepforconv}, see
-@ref{Invoking astmkprof}.
+After convolution, you can crop the outer @mymath{n} pixels with the section 
crop box specification of Crop: @option{--section=n:*-n,n:*-n} assuming your 
PSF is a square, see @ref{Crop section syntax}.
+This will also remove all discrete Fourier transform artifacts (blurred sides) 
from the final image.
+To facilitate this shift, MakeProfiles has the options @option{--xshift}, 
@option{--yshift} and @option{--prepforconv}, see @ref{Invoking astmkprof}.
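+
+As a worked example (with assumed numbers): if the final convolved image should be @mymath{m=200} pixels wide on one axis and the PSF is 11 pixels wide (@mymath{n=5}), call MakeProfiles with an axis size of @mymath{200+2\times5=210}, shift the profile centers on that axis by 5 pixels in the positive direction, and crop the convolved image with @option{--section=5:*-5,5:*-5}.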
 
 
 
@@ -17196,75 +15561,44 @@ the final image. To facilitate this shift, MakeProfiles has the options
 @cindex ADU
 @cindex Gain
 @cindex Counts
-Astronomical data pixels are usually in units of
-counts@footnote{Counts are also known as analog to digital units
-(ADU).} or electrons or either one divided by seconds. To convert from
-the counts to electrons, you will need to know the instrument gain. In
-any case, they can be directly converted to energy or energy/time
-using the basic hardware (telescope, camera and filter)
-information. We will continue the discussion assuming the pixels are
-in units of energy/time.
+Astronomical data pixels are usually in units of counts@footnote{Counts are also known as analog to digital units (ADU).}, electrons, or either one divided by seconds.
+To convert from the counts to electrons, you will need to know the instrument 
gain.
+In any case, they can be directly converted to energy or energy/time using the 
basic hardware (telescope, camera and filter) information.
+We will continue the discussion assuming the pixels are in units of 
energy/time.
 
 @cindex Flux
 @cindex Luminosity
 @cindex Brightness
-The @emph{brightness} of an object is defined as its total detected
-energy per time. This is simply the sum of the pixels that are
-associated with that detection by our detection tool for example
-@ref{NoiseChisel}@footnote{If further processing is done, for example
-the Kron or Petrosian radii are calculated, then the detected area is
-not sufficient and the total area that was within the respective
-radius must be used.}. The @emph{flux} of an object is in units of
-energy/time/area and for a detected object, it is defined as its
-brightness divided by the area used to collect the light from the
-source or the telescope aperture (for example in
-@mymath{cm^2})@footnote{For a full object that spans over several
-pixels, the telescope area should be used to find the flux. However,
-sometimes, only the brightness per pixel is desired. In such cases
-this book also @emph{loosely} uses the term flux. This is only
-approximately accurate however, since while all the pixels have a
-fixed area, the pixel size can vary with camera on the
-telescope.}. Knowing the flux (@mymath{f}) and distance to the object
-(@mymath{r}), we can calculate its @emph{luminosity}:
-@mymath{L=4{\pi}r^2f}. Therefore, flux and luminosity are intrinsic
-properties of the object, while brightness depends on our detecting
-tools (hardware and software). Here we will not be discussing
-luminosity, but brightness. However, since luminosity is the
-astrophysically interesting quantity, we also defined it here to avoid
-possible confusion between these two terms because they both have the
-same units.
+The @emph{brightness} of an object is defined as its total detected energy per 
time.
+This is simply the sum of the pixels that are associated with that detection by our detection tool, for example @ref{NoiseChisel}@footnote{If further processing is done, for example if the Kron or Petrosian radii are calculated, then the detected area is not sufficient and the total area that was within the respective radius must be used.}.
+The @emph{flux} of an object is in units of energy/time/area; for a detected object, it is defined as its brightness divided by the area used to collect the light from the source, that is, the telescope aperture (for example in @mymath{cm^2})@footnote{For a full object that spans over several pixels, the telescope area should be used to find the flux.
+However, sometimes only the brightness per pixel is desired.
+In such cases this book also @emph{loosely} uses the term flux.
+This is only approximately accurate, however, since while all the pixels have a fixed area, the pixel size can vary with the camera on the telescope.}.
+Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can 
calculate its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
+Therefore, flux and luminosity are intrinsic properties of the object, while 
brightness depends on our detecting tools (hardware and software).
+Here we will not be discussing luminosity, but brightness.
+However, since luminosity is the astrophysically interesting quantity, we also 
defined it here to avoid possible confusion between these two terms because 
they both have the same units.
 
 @cindex Magnitude zero-point
 @cindex Magnitudes from flux
 @cindex Flux to magnitude conversion
 @cindex Astronomical Magnitude system
-Images of astronomical objects span over a very large range of
-brightness. With the Sun (as the brightest object) being roughly
-@mymath{2.5^{60}=10^{24}} times brighter than the faintest galaxies we
-can currently detect. Therefore discussing brightness will be very
-hard, and astronomers have chosen to use a logarithmic scale to talk
-about the brightness of astronomical objects. But the logarithm can
-only be usable with a unit-less and always positive value. Fortunately
-brightness is always positive and to remove the units we divide the
-brightness of the object (@mymath{B}) by a reference brightness
-(@mymath{B_r}). We then define the resulting logarithmic scale as
-@mymath{magnitude} through the following relation@footnote{The
-@mymath{-2.5} factor in the definition of magnitudes is a legacy of
-the our ancient colleagues and in particular Hipparchus of Nicaea
-(190-120 BC).}
+Images of astronomical objects span over a very large range of brightness, with the Sun (as the brightest object) being roughly @mymath{2.5^{60}=10^{24}} times brighter than the faintest galaxies we can currently detect.
+Therefore discussing brightness will be very hard, and astronomers have chosen to use a logarithmic scale to talk about the brightness of astronomical objects.
+But the logarithm is only usable with a unit-less and always positive value.
+Fortunately brightness is always positive, and to remove the units we divide the brightness of the object (@mymath{B}) by a reference brightness (@mymath{B_r}).
+We then define the resulting logarithmic scale as @mymath{magnitude} through the following relation@footnote{The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our ancient colleagues, in particular Hipparchus of Nicaea (190--120 BC).}
 
 @dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
 
 @cindex Zero-point magnitude
 @noindent
-@mymath{m} is defined as the magnitude of the object and @mymath{m_r}
-is the pre-defined magnitude of the reference brightness. One
-particularly easy condition is when @mymath{B_r=1}. This will allow us
-to summarize all the hardware specific parameters discussed above into
-one number as the reference magnitude which is commonly known as the
-Zero-point@footnote{When @mymath{B=Br=1}, the right side of the
-magnitude definition will be zero. Hence the name, ``zero-point''.}
-magnitude.
+@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the pre-defined magnitude of the reference brightness.
+One particularly easy condition is when @mymath{B_r=1}.
+This will allow us to summarize all the hardware-specific parameters discussed above into one number as the reference magnitude, which is commonly known as the Zero-point@footnote{When @mymath{B=B_r=1}, the right side of the magnitude definition will be zero.
+Hence the name, ``zero-point''.} magnitude.
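+
+As a small numerical sketch of this relation (the zero-point and pixel-sum values below are assumptions for illustration only):
+
+@example
+#include <math.h>
+#include <stdio.h>
+
+int
+main(void)
+@{
+  double zeropoint=25.0;        /* Assumed reference magnitude.     */
+  double brightness=1e-3;       /* Assumed sum of detection pixels. */
+  printf("magnitude: %.3f\n", -2.5*log10(brightness)+zeropoint);
+  return 0;
+@}
+@end example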
 
 
 @node Profile magnitude, Invoking astmkprof, Flux Brightness and magnitude, MakeProfiles
@@ -17273,35 +15607,20 @@ magnitude.
 @cindex Brightness
 @cindex Truncation radius
 @cindex Sum for total flux
-To find the profile brightness or its magnitude, (see @ref{Flux
-Brightness and magnitude}), it is customary to use the 2D integration
-of the flux to infinity. However, in MakeProfiles we do not follow
-this idealistic approach and apply a more realistic method to find the
-total brightness or magnitude: the sum of all the pixels belonging to
-a profile within its predefined truncation radius. Note that if the
-truncation radius is not large enough, this can be significantly
-different from the total integrated light to infinity.
+To find the profile brightness or its magnitude (see @ref{Flux Brightness and magnitude}), it is customary to use the 2D integration of the flux to infinity.
+However, in MakeProfiles we do not follow this idealistic approach and apply a 
more realistic method to find the total brightness or magnitude: the sum of all 
the pixels belonging to a profile within its predefined truncation radius.
+Note that if the truncation radius is not large enough, this can be 
significantly different from the total integrated light to infinity.
 
 @cindex Integration to infinity
-An integration to infinity is not a realistic condition because no
-galaxy extends indefinitely (important for high S@'ersic index
-profiles), pixelation can also cause a significant difference between
-the actual total pixel sum value of the profile and that of
-integration to infinity, especially in small and high S@'ersic index
-profiles. To be safe, you can specify a large enough truncation radius
-for such compact high S@'ersic index profiles.
+An integration to infinity is not a realistic condition because no galaxy extends indefinitely (important for high S@'ersic index profiles); pixelation can also cause a significant difference between the actual total pixel sum value of the profile and that of integration to infinity, especially in small and high S@'ersic index profiles.
+To be safe, you can specify a large enough truncation radius for such compact 
high S@'ersic index profiles.
 
-If oversampling is used then the brightness is calculated using the
-over-sampled image, see @ref{Oversampling} which is much more
-accurate. The profile is first built in an array completely bounding
-it with a normalization constant of unity (see @ref{Galaxies}). Taking
-@mymath{B} to be the desired brightness and @mymath{S} to be the sum
-of the pixels in the created profile, every pixel is then multiplied
-by @mymath{B/S} so the sum is exactly @mymath{B}.
+If oversampling is used, then the brightness is calculated using the over-sampled image (see @ref{Oversampling}), which is much more accurate.
+The profile is first built in an array completely bounding it with a 
normalization constant of unity (see @ref{Galaxies}).
+Taking @mymath{B} to be the desired brightness and @mymath{S} to be the sum of 
the pixels in the created profile, every pixel is then multiplied by 
@mymath{B/S} so the sum is exactly @mymath{B}.
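+
+For example (with assumed numbers), if the desired brightness is @mymath{B=100} and the initially built profile pixels sum to @mymath{S=0.8}, every pixel is multiplied by @mymath{100/0.8=125}.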
 
-If the @option{--individual} option is called, this same array is
-written to a FITS file. If not, only the overlapping pixels of this
-array and the output image are kept and added to the output array.
+If the @option{--individual} option is called, this same array is written to a 
FITS file.
+If not, only the overlapping pixels of this array and the output image are 
kept and added to the output array.
 
 
 
@@ -17311,9 +15630,8 @@ array and the output image are kept and added to the output array.
 @node Invoking astmkprof,  , Profile magnitude, MakeProfiles
 @subsection Invoking MakeProfiles
 
-MakeProfiles will make any number of profiles specified in a catalog either
-individually or in one image. The executable name is @file{astmkprof} with
-the following general template
+MakeProfiles will make any number of profiles specified in a catalog either 
individually or in one image.
+The executable name is @file{astmkprof} with the following general template
 
 @example
 $ astmkprof [OPTION ...] [Catalog]
@@ -17341,46 +15659,23 @@ $ astmkprof --individual --oversample 3 --mergedsize=500,500 cat.txt
 @end example
 
 @noindent
-The parameters of the mock profiles can either be given through a catalog
-(which stores the parameters of many mock profiles, see @ref{MakeProfiles
-catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output
-dataset}). The catalog can be in the FITS ASCII, FITS binary format, or
-plain text formats (see @ref{Tables}). A plain text catalog can also be
-provided using the Standard input (see @ref{Standard input}). The columns
-related to each parameter can be determined both by number, or by
-match/search criteria using the column names, units, or comments. with the
-options ending in @option{col}, see below.
-
-Without any file given to the @option{--background} option, MakeProfiles
-will make a zero-valued image and build the profiles on that (its size and
-main WCS parameters can also be defined through the options described in
-@ref{MakeProfiles output dataset}). Besides the main/merged image
-containing all the profiles in the catalog, it is also possible to build
-individual images for each profile (only enclosing one full profile to its
-truncation radius) with the @option{--individual} option.
-
-If an image is given to the @option{--background} option, the pixels of
-that image are used as the background value for every pixel. The flux value
-of each profile pixel will be added to the pixel in that background
-value. In this case, the values to all options relating to the output size
-and WCS will be ignored if specified (for example @option{--oversample},
-@option{--mergedsize}, and @option{--prepforconv}) on the command-line or
-in the configuration files.
-
-The sections below discuss the options specific to MakeProfiles based on
-context: the input catalog settings which can have many rows for different
-profiles are discussed in @ref{MakeProfiles catalog}, in @ref{MakeProfiles
-profile settings}, we discuss how you can set general profile settings
-(that are the same for all the profiles in the catalog). Finally
-@ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss
-the outputs of MakeProfiles and how you can configure them. Besides these,
-MakeProfiles also supports all the common Gnuastro program options that are
-discussed in @ref{Common options}, so please flip through them is well for
-a more comfortable usage.
-
-Please see @ref{Sufi simulates a detection} for a very complete tutorial
-explaining how one could use MakeProfiles in conjunction with other
-Gnuastro's programs to make a complete simulated image of a mock galaxy.
+The parameters of the mock profiles can either be given through a catalog 
(which stores the parameters of many mock profiles, see @ref{MakeProfiles 
catalog}), or the @option{--kernel} option (see @ref{MakeProfiles output 
dataset}).
+The catalog can be in the FITS ASCII, FITS binary format, or plain text 
formats (see @ref{Tables}).
+A plain text catalog can also be provided using the Standard input (see 
@ref{Standard input}).
+The columns related to each parameter can be determined either by number, or by match/search criteria using the column names, units, or comments, with the options ending in @option{col} (see below).
+
+Without any file given to the @option{--background} option, MakeProfiles will 
make a zero-valued image and build the profiles on that (its size and main WCS 
parameters can also be defined through the options described in 
@ref{MakeProfiles output dataset}).
+Besides the main/merged image containing all the profiles in the catalog, it 
is also possible to build individual images for each profile (only enclosing 
one full profile to its truncation radius) with the @option{--individual} 
option.
+
+If an image is given to the @option{--background} option, the pixels of that 
image are used as the background value for every pixel.
+The flux value of each profile pixel will be added to the pixel in that 
background value.
+In this case, the values to all options relating to the output size and WCS 
will be ignored if specified (for example @option{--oversample}, 
@option{--mergedsize}, and @option{--prepforconv}) on the command-line or in 
the configuration files.
+
+The sections below discuss the options specific to MakeProfiles based on context: the input catalog settings (which can have many rows for different profiles) are discussed in @ref{MakeProfiles catalog}; in @ref{MakeProfiles profile settings}, we discuss how you can set general profile settings (that are the same for all the profiles in the catalog).
+Finally, @ref{MakeProfiles output dataset} and @ref{MakeProfiles log file} discuss the outputs of MakeProfiles and how you can configure them.
+Besides these, MakeProfiles also supports all the common Gnuastro program options that are discussed in @ref{Common options}, so please flip through them as well for a more comfortable usage.
+
+Please see @ref{Sufi simulates a detection} for a very complete tutorial explaining how one could use MakeProfiles in conjunction with Gnuastro's other programs to make a complete simulated image of a mock galaxy.
 
 @menu
 * MakeProfiles catalog::        Required catalog properties.
@@ -17391,75 +15686,48 @@ Gnuastro's programs to make a complete simulated image of a mock galaxy.
 
 @node MakeProfiles catalog, MakeProfiles profile settings, Invoking astmkprof, Invoking astmkprof
 @subsubsection MakeProfiles catalog
-The catalog containing information about each profile can be in the FITS
-ASCII, FITS binary, or plain text formats (see @ref{Tables}). The latter
-can also be provided using standard input (see @ref{Standard input}). Its
-columns can be ordered in any desired manner. You can specify which columns
-belong to which parameters using the set of options discussed below. For
-example through the @option{--rcol} and @option{--tcol} options, you can
-specify the column that contains the radial parameter for each profile and
-its truncation respectively. See @ref{Selecting table columns} for a
-thorough discussion on the values to these options.
-
-The value for the profile center in the catalog (the @option{--ccol}
-option) can be a floating point number so the profile center can be on any
-sub-pixel position. Note that pixel positions in the FITS standard start
-from 1 and an integer is the pixel center. So a 2D image actually starts
-from the position (0.5, 0.5), which is the bottom-left corner of the first
-pixel. When a @option{--background} image with WCS information is provided
-or you specify the WCS parameters with the respective options, you may also
-use RA and Dec to identify the center of each profile (see the
-@option{--mode} option below).
-
-In MakeProfiles, profile centers do not have to be in (overlap with) the
-final image.  Even if only one pixel of the profile within the truncation
-radius overlaps with the final image size, the profile is built and
-included in the final image image. Profiles that are completely out of the
-image will not be created (unless you explicitly ask for it with the
-@option{--individual} option). You can use the output log file (created
-with @option{--log} to see which profiles were within the image, see
-@ref{Common options}.
-
-If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and
-the profiles are to be built in one image (when @option{--individual} is
-not used), it is assumed they are the PSF(s) you want to convolve your
-created image with. So by default, they will not be built in the output
-image but as separate files. The sum of pixels of these separate files will
-also be set to unity (1) so you are ready to convolve, see @ref{Convolution
-process}. As a summary, the position and magnitude of PSF profile will be
-ignored. This behavior can be disabled with the @option{--psfinimg}
-option. If you want to create all the profiles separately (with
-@option{--individual}) and you want the sum of the PSF profile pixels to be
-unity, you have to set their magnitudes in the catalog to the zero-point
-magnitude and be sure that the central positions of the profiles don't have
-any fractional part (the PSF center has to be in the center of the pixel).
-
-The list of options directly related to the input catalog columns is shown
-below.
+The catalog containing information about each profile can be in the FITS 
ASCII, FITS binary, or plain text formats (see @ref{Tables}).
+The latter can also be provided using standard input (see @ref{Standard 
input}).
+Its columns can be ordered in any desired manner.
+You can specify which columns belong to which parameters using the set of 
options discussed below.
+For example through the @option{--rcol} and @option{--tcol} options, you can 
specify the column that contains the radial parameter for each profile and its 
truncation respectively.
+See @ref{Selecting table columns} for a thorough discussion on the values to 
these options.
+
+The value for the profile center in the catalog (the @option{--ccol} option) 
can be a floating point number so the profile center can be on any sub-pixel 
position.
+Note that pixel positions in the FITS standard start from 1 and an integer is 
the pixel center.
+So a 2D image actually starts from the position (0.5, 0.5), which is the 
bottom-left corner of the first pixel.
+When a @option{--background} image with WCS information is provided or you 
specify the WCS parameters with the respective options, you may also use RA and 
Dec to identify the center of each profile (see the @option{--mode} option 
below).
+
+In MakeProfiles, profile centers do not have to be in (overlap with) the final image.
+Even if only one pixel of the profile within the truncation radius overlaps with the final image size, the profile is built and included in the final image.
+Profiles that are completely out of the image will not be created (unless you explicitly ask for it with the @option{--individual} option).
+You can use the output log file (created with @option{--log}) to see which profiles were within the image (see @ref{Common options}).
+
+If PSF profiles (Moffat or Gaussian, see @ref{PSF}) are in the catalog and the 
profiles are to be built in one image (when @option{--individual} is not used), 
it is assumed they are the PSF(s) you want to convolve your created image with.
+So by default, they will not be built in the output image but as separate 
files.
+The sum of pixels of these separate files will also be set to unity (1) so you 
are ready to convolve, see @ref{Convolution process}.
+As a summary, the position and magnitude of the PSF profiles will be ignored.
+This behavior can be disabled with the @option{--psfinimg} option.
+If you want to create all the profiles separately (with @option{--individual}) 
and you want the sum of the PSF profile pixels to be unity, you have to set 
their magnitudes in the catalog to the zero-point magnitude and be sure that 
the central positions of the profiles don't have any fractional part (the PSF 
center has to be in the center of the pixel).
+
+The list of options directly related to the input catalog columns is shown 
below.
 
 @table @option
 
 @item --ccol=STR/INT
-Center coordinate column for each dimension. This option must be called two
-times to define the center coordinates in an image. For example
-@option{--ccol=RA} and @option{--ccol=DEC} (along with @option{--mode=wcs})
-will inform MakeProfiles to look into the catalog columns named @option{RA}
-and @option{DEC} for the Right Ascension and Declination of the profile
-centers.
+Center coordinate column for each dimension.
+This option must be called two times to define the center coordinates in an 
image.
+For example @option{--ccol=RA} and @option{--ccol=DEC} (along with 
@option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns 
named @option{RA} and @option{DEC} for the Right Ascension and Declination of 
the profile centers.
 
 @item --fcol=INT/STR
-The functional form of the profile with one of the values below depending
-on the desired profile. The column can contain either the numeric codes
-(for example `@code{1}') or string characters (for example
-`@code{sersic}'). The numeric codes are easier to use in scripts which
-generate catalogs with hundreds or thousands of profiles.
-
-The string format can be easier when the catalog is to be written/checked
-by hand/eye before running MakeProfiles. It is much more readable and
-provides a level of documentation. All Gnuastro's recognized table formats
-(see @ref{Recognized table formats}) accept string type columns. To have
-string columns in a plain text table/catalog, see @ref{Gnuastro text table
-format}.
+The functional form of the profile, specified with one of the values below depending on the desired profile.
+The column can contain either the numeric codes (for example `@code{1}') or 
string characters (for example `@code{sersic}').
+The numeric codes are easier to use in scripts which generate catalogs with 
hundreds or thousands of profiles.
+
+The string format can be easier when the catalog is to be written/checked by 
hand/eye before running MakeProfiles.
+It is much more readable and provides a level of documentation.
+All Gnuastro's recognized table formats (see @ref{Recognized table formats}) 
accept string type columns.
+To have string columns in a plain text table/catalog, see @ref{Gnuastro text 
table format}.
 
 @itemize
 @item
@@ -17478,196 +15746,136 @@ Point source with `@code{point}' or `@code{4}'.
 Flat profile with `@code{flat}' or `@code{5}'.
 
 @item
-Circumference profile with `@code{circum}' or `@code{6}'. A fixed value
-will be used for all pixels less than or equal to the truncation radius
-(@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value to
-the @option{--circumwidth}).
+Circumference profile with `@code{circum}' or `@code{6}'.
+A fixed value will be used for all pixels less than or equal to the truncation 
radius (@mymath{r_t}) and greater than @mymath{r_t-w} (@mymath{w} is the value 
to the @option{--circumwidth}).
 
 @item
-Radial distance profile with `@code{distance}' or `@code{7}'. At the lowest
-level, each pixel only has an elliptical radial distance given the
-profile's shape and orientation (see @ref{Defining an ellipse}). When this
-profile is chosen, the pixel's elliptical radial distance from the profile
-center is written as its value. For this profile, the value in the
-magnitude column (@option{--mcol}) will be ignored.
-
-You can use this for checks or as a first approximation to define your own
-higher-level radial function. In the latter case, just note that the
-central values are going to be incorrect (see @ref{Sampling from a
-function}).
+Radial distance profile with `@code{distance}' or `@code{7}'.
+At the lowest level, each pixel only has an elliptical radial distance given 
the profile's shape and orientation (see @ref{Defining an ellipse}).
+When this profile is chosen, the pixel's elliptical radial distance from the 
profile center is written as its value.
+For this profile, the value in the magnitude column (@option{--mcol}) will be 
ignored.
+
+You can use this for checks or as a first approximation to define your own 
higher-level radial function.
+In the latter case, just note that the central values are going to be 
incorrect (see @ref{Sampling from a function}).
 @end itemize
 
 @item --rcol=STR/INT
-The radius parameter of the profiles. Effective radius (@mymath{r_e}) if
-S@'ersic, FWHM if Moffat or Gaussian.
+The radius parameter of the profiles.
+Effective radius (@mymath{r_e}) if S@'ersic, FWHM if Moffat or Gaussian.
 
 @item --ncol=STR/INT
 The S@'ersic index (@mymath{n}) or Moffat @mymath{\beta}.
 
 @item --pcol=STR/INT
-The position angle (in degrees) of the profiles relative to the first FITS
-axis (horizontal when viewed in SAO ds9).
+The position angle (in degrees) of the profiles relative to the first FITS 
axis (horizontal when viewed in SAO ds9).
 
 @item --qcol=STR/INT
-The axis ratio of the profiles (minor axis divided by the major axis in a
-2D ellipse).
+The axis ratio of the profiles (minor axis divided by the major axis in a 2D 
ellipse).
 
 @item --mcol=STR/INT
-The total pixelated magnitude of the profile within the truncation radius,
-see @ref{Profile magnitude}.
+The total pixelated magnitude of the profile within the truncation radius, see 
@ref{Profile magnitude}.
 
 @item --tcol=STR/INT
-The truncation radius of this profile. By default it is in units of the
-radial parameter of the profile (the value in the @option{--rcol} of the
-catalog). If @option{--tunitinp} is given, this value is interpreted in
-units of pixels (prior to oversampling) irrespective of the profile.
+The truncation radius of this profile.
+By default it is in units of the radial parameter of the profile (the value in 
the @option{--rcol} of the catalog).
+If @option{--tunitinp} is given, this value is interpreted in units of pixels 
(prior to oversampling) irrespective of the profile.
 
 @end table
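+
+As a minimal illustration of the columns above, a plain-text catalog holding a single S@'ersic profile might look like the sketch below (the column order and every value are arbitrary assumptions; the options above would then be given the respective column numbers, for example @option{--ccol=1 --ccol=2 --fcol=3}):
+
+@example
+## Columns: X, Y, function (1=sersic), r_e, n, position angle,
+## axis ratio, magnitude, truncation (in units of r_e).
+53.2   74.9   1   10.0   2.5   45.0   0.8   22.1   5.0
+@end example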
 
 @node MakeProfiles profile settings, MakeProfiles output dataset, MakeProfiles 
catalog, Invoking astmkprof
 @subsubsection MakeProfiles profile settings
 
-The profile parameters that differ between each created profile are
-specified through the columns in the input catalog and described in
-@ref{MakeProfiles catalog}. Besides those there are general settings for
-some profiles that don't differ between one profile and another, they are a
-property of the general process. For example how many random points to use
-in the monte-carlo integration, this value is fixed for all the
-profiles. The options described in this section are for configuring such
-properties.
+The profile parameters that differ between each created profile are specified 
through the columns in the input catalog and described in @ref{MakeProfiles 
catalog}.
+Besides those, there are general settings for some profiles that do not differ from one profile to another: they are a property of the process as a whole.
+For example, the number of random points to use in the Monte Carlo integration is fixed for all the profiles.
+The options described in this section are for configuring such properties.
 
 @table @option
 
 @item --mode=STR
-Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles
-catalog}) in image or WCS coordinates. This option thus accepts only two
-values: @option{img} and @option{wcs}. It is mandatory when a catalog is
-being used as input.
+Interpret the center position columns (@option{--ccol} in @ref{MakeProfiles 
catalog}) in image or WCS coordinates.
+This option thus accepts only two values: @option{img} and @option{wcs}.
+It is mandatory when a catalog is being used as input.
 
 @item -r
 @itemx --numrandom
-The number of random points used in the central regions of the
-profile, see @ref{Sampling from a function}.
+The number of random points used in the central regions of the profile, see 
@ref{Sampling from a function}.
 
 @item -e
 @itemx --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Use the value to the @code{GSL_RNG_SEED} environment variable to
-generate the random Monte Carlo sampling distribution, see
-@ref{Sampling from a function} and @ref{Generating random
-numbers}.
+Use the value of the @code{GSL_RNG_SEED} environment variable to generate the random Monte Carlo sampling distribution, see @ref{Sampling from a function} and @ref{Generating random numbers}.
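+
+For example (a sketch; the seed value and catalog name are arbitrary assumptions):
+
+@example
+$ export GSL_RNG_SEED=345
+$ astmkprof catalog.fits --envseed
+@end example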
 
 @item -t FLT
 @itemx --tolerance=FLT
-The tolerance to switch from Monte Carlo integration to the central pixel
-value, see @ref{Sampling from a function}.
+The tolerance to switch from Monte Carlo integration to the central pixel 
value, see @ref{Sampling from a function}.
 
 @item -p
 @itemx --tunitinp
-The truncation column of the catalog is in units of pixels. By
-default, the truncation column is considered to be in units of the
-radial parameters of the profile (@option{--rcol}). Read it as
-`t-unit-in-p' for `truncation unit in pixels'.
+The truncation column of the catalog is in units of pixels.
+By default, the truncation column is considered to be in units of the radial 
parameters of the profile (@option{--rcol}).
+Read it as `t-unit-in-p' for `truncation unit in pixels'.
 
 @item -f
 @itemx --mforflatpix
-When making fixed value profiles (flat and circumference, see
-`@option{--fcol}'), don't use the value in the column specified by
-`@option{--mcol}' as the magnitude. Instead use it as the exact value that
-all the pixels of these profiles should have. This option is irrelevant for
-other types of profiles. This option is very useful for creating masks, or
-labeled regions in an image. Any integer, or floating point value can used
-in this column with this option, including @code{NaN} (or `@code{nan}', or
-`@code{NAN}', case is irrelevant), and infinities (@code{inf}, @code{-inf},
-or @code{+inf}).
-
-For example, with this option if you set the value in the magnitude column
-(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular
-mask over an image (which can be given as the argument), see @ref{Blank
-pixels}. Another useful application of this option is to create labeled
-elliptical or circular apertures in an image. To do this, set the value in
-the magnitude column to the label you want for this profile. This labeled
-image can then be used in combination with NoiseChisel's output (see
-@ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see
-@ref{MakeCatalog}).
-
-Alternatively, if you want to mark regions of the image (for example with
-an elliptical circumference) and you don't want to use NaN values (as
-explained above) for some technical reason, you can get the minimum or
-maximum value in the image @footnote{The minimum will give a better result,
-because the maximum can be too high compared to most pixels in the image,
-making it harder to display.} using Arithmetic (see @ref{Arithmetic}), then
-use that value in the magnitude column along with this option for all the
-profiles.
-
-Please note that when using MakeProfiles on an already existing image, you
-have to set `@option{--oversample=1}'. Otherwise all the profiles will be
-scaled up based on the oversampling scale in your configuration files (see
-@ref{Configuration files}) unless you have accounted for oversampling in
-your catalog.
+When making fixed value profiles (flat and circumference, see 
`@option{--fcol}'), don't use the value in the column specified by 
`@option{--mcol}' as the magnitude.
+Instead use it as the exact value that all the pixels of these profiles should 
have.
+This option is irrelevant for other types of profiles.
+This option is very useful for creating masks or labeled regions in an image.
+Any integer or floating point value can be used in this column with this option, including @code{NaN} (or `@code{nan}', or `@code{NAN}', case is irrelevant), and infinities (@code{inf}, @code{-inf}, or @code{+inf}).
+
+For example, with this option if you set the value in the magnitude column 
(@option{--mcol}) to @code{NaN}, you can create an elliptical or circular mask 
over an image (which can be given as the argument), see @ref{Blank pixels}.
+Another useful application of this option is to create labeled elliptical or 
circular apertures in an image.
+To do this, set the value in the magnitude column to the label you want for 
this profile.
+This labeled image can then be used in combination with NoiseChisel's output 
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see 
@ref{MakeCatalog}).
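+
+As a rough sketch of this aperture use case (the file names are assumptions, and the magnitude column of @file{apertures.txt} is assumed to hold the labels):
+
+@example
+$ astmkprof apertures.txt --background=image.fits --clearcanvas \
+            --mforflatpix --mode=img --oversample=1
+@end example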
+
+Alternatively, if you want to mark regions of the image (for example with an 
elliptical circumference) and you don't want to use NaN values (as explained 
above) for some technical reason, you can get the minimum or maximum value in 
the image @footnote{
+The minimum will give a better result, because the maximum can be too high 
compared to most pixels in the image, making it harder to display.}
+using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude 
column along with this option for all the profiles.
+
+Please note that when using MakeProfiles on an already existing image, you 
have to set `@option{--oversample=1}'.
+Otherwise all the profiles will be scaled up based on the oversampling scale 
in your configuration files (see @ref{Configuration files}) unless you have 
accounted for oversampling in your catalog.
 
 @item --mcolisbrightness
-The value given in the ``magnitude column'' (specified by @option{--mcol},
-see @ref{MakeProfiles catalog}) must be interpreted as brightness, not
-magnitude. The zeropoint magnitude (value to the @option{--zeropoint}
-option) is ignored and the given value must have the same units as the
-input dataset's pixels.
-
-Recall that the total profile magnitude or brightness that is specified
-with in the @option{--mcol} column of the input catalog is not an
-integration to infinity, but the actual sum of pixels in the profile (until
-the desired truncation radius). See @ref{Profile magnitude} for more on
-this point.
+The value given in the ``magnitude column'' (specified by @option{--mcol}, see 
@ref{MakeProfiles catalog}) must be interpreted as brightness, not magnitude.
+The zeropoint magnitude (value to the @option{--zeropoint} option) is ignored 
and the given value must have the same units as the input dataset's pixels.
+
+Recall that the total profile magnitude or brightness that is specified in the @option{--mcol} column of the input catalog is not an integration to infinity, but the actual sum of pixels in the profile (until the desired truncation radius).
+See @ref{Profile magnitude} for more on this point.
 
 @item --magatpeak
-The magnitude column in the catalog (see @ref{MakeProfiles catalog})
-will be used to find the brightness only for the peak profile pixel,
-not the full profile. Note that this is the flux of the profile's peak
-pixel in the final output of MakeProfiles. So beware of the
-oversampling, see @ref{Oversampling}.
-
-This option can be useful if you want to check a mock profile's total
-magnitude at various truncation radii. Without this option, no matter
-what the truncation radius is, the total magnitude will be the same as
-that given in the catalog. But with this option, the total magnitude
-will become brighter as you increase the truncation radius.
-
-In sharper profiles, sometimes the accuracy of measuring the peak
-profile flux is more than the overall object brightness. In such
-cases, with this option, the final profile will be built such that its
-peak has the given magnitude, not the total profile.
+The magnitude column in the catalog (see @ref{MakeProfiles catalog}) will be 
used to find the brightness only for the peak profile pixel, not the full 
profile.
+Note that this is the flux of the profile's peak pixel in the final output of 
MakeProfiles.
+So beware of the oversampling, see @ref{Oversampling}.
+
+This option can be useful if you want to check a mock profile's total 
magnitude at various truncation radii.
+Without this option, no matter what the truncation radius is, the total 
magnitude will be the same as that given in the catalog.
+But with this option, the total magnitude will become brighter as you increase 
the truncation radius.
+
+In sharper profiles, the peak profile flux can sometimes be measured more accurately than the overall object brightness.
+In such cases, with this option, the final profile will be built such that its peak has the given magnitude, not the profile as a whole.
 
 @cartouche
-@strong{CAUTION:} If you want to use this option for comparing with
-observations, please note that MakeProfiles does not do convolution. Unless
-you have de-convolved your data, your images are convolved with the
-instrument and atmospheric PSF, see @ref{PSF}. Particularly in sharper
-profiles, the flux in the peak pixel is strongly decreased after
-convolution. Also note that in such cases, besides de-convolution, you will
-have to set @option{--oversample=1} otherwise after resampling your profile
-with Warp (see @ref{Warp}), the peak flux will be different.
+@strong{CAUTION:} If you want to use this option for comparing with 
observations, please note that MakeProfiles does not do convolution.
+Unless you have de-convolved your data, your images are convolved with the 
instrument and atmospheric PSF, see @ref{PSF}.
+Particularly in sharper profiles, the flux in the peak pixel is strongly 
decreased after convolution.
+Also note that in such cases, besides de-convolution, you will have to set 
@option{--oversample=1} otherwise after resampling your profile with Warp (see 
@ref{Warp}), the peak flux will be different.
 @end cartouche
 
 @item -X INT,INT
 @itemx --shift=INT,INT
-Shift all the profiles and enlarge the image along each dimension. To
-better understand this option, please see @mymath{n} in @ref{If convolving
-afterwards}. This is useful when you want to convolve the image
-afterwards. If you are using an external PSF, be sure to oversample it to
-the same scale used for creating the mock images. If a background image is
-specified, any possible value to this option is ignored.
+Shift all the profiles and enlarge the image along each dimension.
+To better understand this option, please see @mymath{n} in @ref{If convolving 
afterwards}.
+This is useful when you want to convolve the image afterwards.
+If you are using an external PSF, be sure to oversample it to the same scale 
used for creating the mock images.
+If a background image is specified, any possible value to this option is 
ignored.
 
 @item -c
 @itemx --prepforconv
-Shift all the profiles and enlarge the image based on half the width
-of the first Moffat or Gaussian profile in the catalog, considering
-any possible oversampling see @ref{If convolving
-afterwards}. @option{--prepforconv} is only checked and possibly
-activated if @option{--xshift} and @option{--yshift} are both zero
-(after reading the command-line and configuration files). If a
-background image is specified, any possible value to this option is
-ignored.
+Shift all the profiles and enlarge the image based on half the width of the first Moffat or Gaussian profile in the catalog, considering any possible oversampling (see @ref{If convolving afterwards}).
+@option{--prepforconv} is only checked and possibly activated if 
@option{--xshift} and @option{--yshift} are both zero (after reading the 
command-line and configuration files).
+If a background image is specified, any possible value to this option is 
ignored.
 
 @item -z FLT
 @itemx --zeropoint=FLT
@@ -17675,67 +15883,46 @@ The zero-point magnitude of the image.
 
 @item -w FLT
 @itemx --circumwidth=FLT
-The width of the circumference if the profile is to be an elliptical
-circumference or annulus. See the explanations for this type of profile in
-@option{--fcol}.
+The width of the circumference if the profile is to be an elliptical 
circumference or annulus.
+See the explanations for this type of profile in @option{--fcol}.
 
 @item -R
 @itemx --replace
-Do not add the pixels of each profile over the background (possibly crowded
-by other profiles), replace them. By default, when two profiles overlap,
-the final pixel value is the sum of all the profiles that overlap on that
-pixel. When this option is given, the pixels are not added but replaced by
-the newer profile's pixel and any value under it is lost.
+Do not add the pixels of each profile over the background (which may already be crowded by other profiles); replace them.
+By default, when two profiles overlap, the final pixel value is the sum of all 
the profiles that overlap on that pixel.
+When this option is given, the pixels are not added but replaced by the newer 
profile's pixel and any value under it is lost.
 
 @cindex CPU threads
 @cindex Threads, CPU
-When order matters, make sure to use this function with
-`@option{--numthreads=1}'. When multiple threads are used, the separate
-profiles are built asynchronously and not in order. Since order does not
-matter in an addition, this causes no problems by default but has to be
-considered when this option is given. Using multiple threads is no problem
-if the profiles are to be used as a mask with a blank or fixed value (see
-`@option{--mforflatpix}') since all their pixel values are the same.
-
-Note that only non-zero pixels are replaced. With radial profiles (for
-example S@'ersic or Moffat) only values above zero will be part of the
-profile. However, when using flat profiles with the
-`@option{--mforflatpix}' option, you should be careful not to give a
-@code{0.0} value as the flat profile's pixel value.
+When order matters, make sure to use this option with `@option{--numthreads=1}'.
+When multiple threads are used, the separate profiles are built asynchronously 
and not in order.
+Since order does not matter in an addition, this causes no problems by default 
but has to be considered when this option is given.
+Using multiple threads is no problem if the profiles are to be used as a mask 
with a blank or fixed value (see `@option{--mforflatpix}') since all their 
pixel values are the same.
+
+Note that only non-zero pixels are replaced.
+With radial profiles (for example S@'ersic or Moffat) only values above zero 
will be part of the profile.
+However, when using flat profiles with the `@option{--mforflatpix}' option, 
you should be careful not to give a @code{0.0} value as the flat profile's 
pixel value.
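+
+For example (a sketch with a hypothetical catalog of flat, labeled profiles, where later rows must win in any overlap):
+
+@example
+$ astmkprof labels.txt --replace --mforflatpix --oversample=1 \
+            --numthreads=1
+@end example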
 
 @end table
 
 @node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile 
settings, Invoking astmkprof
 @subsubsection MakeProfiles output dataset
-MakeProfiles takes an input catalog uses basic properties that are defined
-there to build a dataset, for example a 2D image containing the profiles in
-the catalog. In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile
-settings}, the catalog and profile settings were discussed. The options of
-this section, allow you to configure the output dataset (or the canvas that
-will host the built profiles).
+MakeProfiles takes an input catalog and uses the basic properties that are defined there to build a dataset, for example a 2D image containing the profiles in the catalog.
+In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the catalog and profile settings were discussed.
+The options of this section allow you to configure the output dataset (or the canvas that will host the built profiles).
 
 @table @option
 
 @item -k STR
 @itemx --background=STR
-A background image FITS file to build the profiles on. The extension that
-contains the image should be specified with the @option{--backhdu} option,
-see below. When a background image is specified, it will be used to derive
-all the information about the output image. Hence, the following options
-will be ignored: @option{--mergedsize}, @option{--oversample},
-@option{--crpix}, @option{--crval} (generally, all other WCS related
-parameters) and the output's data type (see @option{--type} in @ref{Input
-output options}).
-
-The image will act like a canvas to build the profiles on: profile pixel
-values will be summed with the background image pixel values. With the
-@option{--replace} option you can disable this behavior and replace the
-profile pixels with the background pixels. If you want to use all the image
-information above, except for the pixel values (you want to have a blank
-canvas to build the profiles on, based on an input image), you can call
-@option{--clearcanvas}, to set all the input image's pixels to zero before
-starting to build the profiles over it (this is done in memory after
-reading the input, so nothing will happen to your input file).
+A background image FITS file to build the profiles on.
+The extension that contains the image should be specified with the 
@option{--backhdu} option, see below.
+When a background image is specified, it will be used to derive all the 
information about the output image.
+Hence, the following options will be ignored: @option{--mergedsize}, 
@option{--oversample}, @option{--crpix}, @option{--crval} (generally, all other 
WCS related parameters) and the output's data type (see @option{--type} in 
@ref{Input output options}).
+
+The image will act like a canvas to build the profiles on: profile pixel 
values will be summed with the background image pixel values.
+With the @option{--replace} option you can disable this behavior and replace 
the profile pixels with the background pixels.
+If you want to use all the image information above, except for the pixel 
values (you want to have a blank canvas to build the profiles on, based on an 
input image), you can call @option{--clearcanvas}, to set all the input image's 
pixels to zero before starting to build the profiles over it (this is done in 
memory after reading the input, so nothing will happen to your input file).
 
 @item -B STR/INT
 @itemx --backhdu=STR/INT
@@ -17743,47 +15930,30 @@ The header data unit (HDU) of the file given to 
@option{--background}.
 
 @item -C
 @itemx --clearcanvas
-When an input image is specified (with the @option{--background} option,
-set all its pixels to 0.0 immediately after reading it into
-memory. Effectively, this will allow you to use all its properties
-(described under the @option{--background} option), without having to worry
-about the pixel values.
-
-@option{--clearcanvas} can come in handy in many situations, for example if
-you want to create a labeled image (segmentation map) for creating a
-catalog (see @ref{MakeCatalog}). In other cases, you might have modeled the
-objects in an image and want to create them on the same frame, but without
-the original pixel values.
+When an input image is specified (with the @option{--background} option), set all its pixels to 0.0 immediately after reading it into memory.
+Effectively, this will allow you to use all its properties (described under 
the @option{--background} option), without having to worry about the pixel 
values.
+
+@option{--clearcanvas} can come in handy in many situations, for example if 
you want to create a labeled image (segmentation map) for creating a catalog 
(see @ref{MakeCatalog}).
+In other cases, you might have modeled the objects in an image and want to 
create them on the same frame, but without the original pixel values.
 
 @item -E STR/INT,FLT[,FLT,[...]]
 @itemx --kernel=STR/INT,FLT[,FLT,[...]]
-Only build one kernel profile with the parameters given as the values to
-this option. The different values must be separated by a comma
-(@key{,}). The first value identifies the radial function of the profile,
-either through a string or through a number (see description of
-@option{--fcol} in @ref{MakeProfiles catalog}). Each radial profile needs a
-different total number of parameters: S@'ersic and Moffat functions need 3
-parameters: radial, S@'ersic index or Moffat @mymath{\beta}, and truncation
-radius. The Gaussian function needs two parameters: radial and truncation
-radius. The point function doesn't need any parameters and flat and
-circumference profiles just need one parameter (truncation radius).
-
-The PSF or kernel is a unique (and highly constrained) type of profile: the
-sum of its pixels must be one, its center must be the center of the central
-pixel (in an image with an odd number of pixels on each side), and commonly
-it is circular, so its axis ratio and position angle are one and zero
-respectively. Kernels are commonly necessary for various data analysis and
-data manipulation steps (for example see @ref{Convolve}, and
-@ref{NoiseChisel}. Because of this it is inconvenient to define a catalog
-with one row and many zero valued columns (for all the non-necessary
-parameters). Hence, with this option, it is possible to create a kernel
-with MakeProfiles without the need to create a catalog. Here are some
-examples:
+Only build one kernel profile with the parameters given as the values to this 
option.
+The different values must be separated by a comma (@key{,}).
+The first value identifies the radial function of the profile, either through 
a string or through a number (see description of @option{--fcol} in 
@ref{MakeProfiles catalog}).
+Each radial profile needs a different total number of parameters: S@'ersic and 
Moffat functions need 3 parameters: radial, S@'ersic index or Moffat 
@mymath{\beta}, and truncation radius.
+The Gaussian function needs two parameters: radial and truncation radius.
+The point function doesn't need any parameters and flat and circumference 
profiles just need one parameter (truncation radius).
+
+The PSF or kernel is a unique (and highly constrained) type of profile: the 
sum of its pixels must be one, its center must be the center of the central 
pixel (in an image with an odd number of pixels on each side), and commonly it 
is circular, so its axis ratio and position angle are one and zero respectively.
+Kernels are commonly necessary for various data analysis and data manipulation steps (for example see @ref{Convolve} and @ref{NoiseChisel}).
+Because of this it is inconvenient to define a catalog with one row and many 
zero valued columns (for all the non-necessary parameters).
+Hence, with this option, it is possible to create a kernel with MakeProfiles 
without the need to create a catalog.
+Here are some examples:
 
 @table @option
 @item --kernel=moffat,3,2.8,5
-A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is
-truncated at 5 times the FWHM.
+A Moffat kernel with FWHM of 3 pixels, @mymath{\beta=2.8} which is truncated 
at 5 times the FWHM.
 
 @item --kernel=gaussian,2,3
 A Gaussian kernel with FWHM of 2 pixels and truncated at 3 times the FWHM.
@@ -17791,168 +15961,118 @@ A Gaussian kernel with FWHM of 2 pixels and 
truncated at 3 times the FWHM.
 
 @item -x INT,INT
 @itemx --mergedsize=INT,INT
-The number of pixels along each axis of the output, in FITS order. This is
-before over-sampling. For example if you call MakeProfiles with
-@option{--mergedsize=100,150 --oversample=5} (assuming no shift due for
-later convolution), then the final image size along the first axis will be
-500 by 750 pixels. Fractions are acceptable as values for each dimension,
-however, they must reduce to an integer, so
-@option{--mergedsize=150/3,300/3} is acceptable but
-@option{--mergedsize=150/4,300/4} is not.
-
-When viewing a FITS image in DS9, the first FITS dimension is in the
-horizontal direction and the second is vertical. As an example, the image
-created with the example above will have 500 pixels horizontally and 750
-pixels vertically.
+The number of pixels along each axis of the output, in FITS order.
+This is before over-sampling.
+For example if you call MakeProfiles with @option{--mergedsize=100,150 --oversample=5} (assuming no shift due to later convolution), then the final image size will be 500 by 750 pixels.
+Fractions are acceptable as values for each dimension, however, they must 
reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but 
@option{--mergedsize=150/4,300/4} is not.
+
+When viewing a FITS image in DS9, the first FITS dimension is in the 
horizontal direction and the second is vertical.
+As an example, the image created with the example above will have 500 pixels 
horizontally and 750 pixels vertically.
 
 If a background image is specified, this option is ignored.
 
 @item -s INT
 @itemx --oversample=INT
-The scale to over-sample the profiles and final image. If not an odd
-number, will be added by one, see @ref{Oversampling}. Note that this
-@option{--oversample} will remain active even if an input image is
-specified. If your input catalog is based on the background image, be sure
-to set @option{--oversample=1}.
+The scale to over-sample the profiles and final image.
+If it is not an odd number, it will be incremented by one (see @ref{Oversampling}).
+Note that this @option{--oversample} will remain active even if an input image 
is specified.
+If your input catalog is based on the background image, be sure to set 
@option{--oversample=1}.
 
 @item --psfinimg
-Build the possibly existing PSF profiles (Moffat or Gaussian) in the
-catalog into the final image. By default they are built separately so
-you can convolve your images with them, thus their magnitude and
-positions are ignored. With this option, they will be built in the
-final image like every other galaxy profile. To have a final PSF in
-your image, make a point profile where you want the PSF and after
-convolution it will be the PSF.
+Build the possibly existing PSF profiles (Moffat or Gaussian) in the catalog 
into the final image.
+By default they are built separately so you can convolve your images with 
them, thus their magnitude and positions are ignored.
+With this option, they will be built in the final image like every other 
galaxy profile.
+To have a final PSF in your image, make a point profile where you want the PSF 
and after convolution it will be the PSF.
 
 @item -i
 @itemx --individual
 @cindex Individual profiles
 @cindex Build individual profiles
-If this option is called, each profile is created in a separate FITS
-file within the same directory as the output and the row number of the
-profile (starting from zero) in the name. The file for each row's
-profile will be in the same directory as the final combined image of
-all the profiles and will have the final image's name as a suffix. So
-for example if the final combined image is named
-@file{./out/fromcatalog.fits}, then the first profile that will be
-created with this option will be named
-@file{./out/0_fromcatalog.fits}.
-
-Since each image only has one full profile out to the truncation
-radius the profile is centered and so, only the sub-pixel position of
-the profile center is important for the outputs of this option. The
-output will have an odd number of pixels. If there is no oversampling,
-the central pixel will contain the profile center. If the value to
-@option{--oversample} is larger than unity, then the profile center is
-on any of the central @option{--oversample}'d pixels depending on the
-fractional value of the profile center.
-
-If the fractional value is larger than half, it is on the bottom half
-of the central region. This is due to the FITS definition of a real
-number position: The center of a pixel has fractional value
-@mymath{0.00} so each pixel contains these fractions: .5 -- .75 -- .00
-(pixel center) -- .25 -- .5.
+If this option is called, each profile is created in a separate FITS file within the same directory as the output, with the row number of the profile (starting from zero) in its name.
+The file for each row's profile will be in the same directory as the final 
combined image of all the profiles and will have the final image's name as a 
suffix.
+So for example if the final combined image is named 
@file{./out/fromcatalog.fits}, then the first profile that will be created with 
this option will be named @file{./out/0_fromcatalog.fits}.
+
+Since each image only has one full profile out to the truncation radius, the profile is centered, and so only the sub-pixel position of the profile center is important for the outputs of this option.
+The output will have an odd number of pixels.
+If there is no oversampling, the central pixel will contain the profile center.
+If the value to @option{--oversample} is larger than unity, then the profile 
center is on any of the central @option{--oversample}'d pixels depending on the 
fractional value of the profile center.
+
+If the fractional value is larger than half, it is on the bottom half of the 
central region.
+This is due to the FITS definition of a real number position: The center of a 
pixel has fractional value @mymath{0.00} so each pixel contains these 
fractions: .5 -- .75 -- .00 (pixel center) -- .25 -- .5.
 
 @item -m
 @itemx --nomerged
-Don't make a merged image. By default after making the profiles, they are
-added to a final image with side lengths specified by @option{--mergedsize}
-if they overlap with it.
+Don't make a merged image.
+By default after making the profiles, they are added to a final image with 
side lengths specified by @option{--mergedsize} if they overlap with it.
 
 @end table
 
 
 @noindent
-The options below can be used to define the world coordinate system (WCS)
-properties of the MakeProfiles outputs. The option names are deliberately
-chosen to be the same as the FITS standard WCS keywords. See Section 8 of
-@url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [2010]} for a
-short introduction to WCS in the FITS standard@footnote{The world
-coordinate standard in FITS is a very beautiful and powerful concept to
-link/associate datasets with the outside world (other datasets). The
-description in the FITS standard (link above) only touches the tip of the
-ice-burg. To learn more please see
-@url{https://doi.org/10.1051/0004-6361:20021326, Greisen and Calabretta
-[2002]}, @url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and
-Greisen [2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen
-et al. [2006]}, and
-@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf,
-Calabretta et al.}}.
-
-If you look into the headers of a FITS image with WCS for example you will
-see all these names but in uppercase and with numbers to represent the
-dimensions, for example @code{CRPIX1} and @code{PC2_1}. You can see the
-FITS headers with Gnuastro's @ref{Fits} program using a command like this:
-@command{$ astfits -p image.fits}.
-
-If the values given to any of these options does not correspond to the
-number of dimensions in the output dataset, then no WCS information will be
-added.
+The options below can be used to define the world coordinate system (WCS) 
properties of the MakeProfiles outputs.
+The option names are deliberately chosen to be the same as the FITS standard 
WCS keywords.
+See Section 8 of @url{https://doi.org/10.1051/0004-6361/201015362, Pence et al 
[2010]} for a short introduction to WCS in the FITS standard@footnote{The world 
coordinate standard in FITS is a very beautiful and powerful concept to 
link/associate datasets with the outside world (other datasets).
+The description in the FITS standard (link above) only touches the tip of the iceberg.
+To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326, 
Greisen and Calabretta [2002]}, 
@url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen 
[2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al. 
[2006]}, and 
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta 
et al.}}.
+
+If you look into the headers of a FITS image with WCS for example you will see 
all these names but in uppercase and with numbers to represent the dimensions, 
for example @code{CRPIX1} and @code{PC2_1}.
+You can see the FITS headers with Gnuastro's @ref{Fits} program using a 
command like this: @command{$ astfits -p image.fits}.
+
+If the values given to any of these options do not correspond to the number of dimensions in the output dataset, then no WCS information will be added.
 
 @table @option
 
 @item --crpix=FLT,FLT
-The pixel coordinates of the WCS reference point. Fractions are acceptable
-for the values of this option.
+The pixel coordinates of the WCS reference point.
+Fractions are acceptable for the values of this option.
 
 @item --crval=FLT,FLT
-The WCS coordinates of the Reference point. Fractions are acceptable for
-the values of this option.
+The WCS coordinates of the reference point.
+Fractions are acceptable for the values of this option.
 
 @item --cdelt=FLT,FLT
-The resolution (size of one data-unit or pixel in WCS units) of the
-non-oversampled dataset. Fractions are acceptable for the values of this
-option.
+The resolution (size of one data-unit or pixel in WCS units) of the 
non-oversampled dataset.
+Fractions are acceptable for the values of this option.
 
 @item --pc=FLT,FLT,FLT,FLT
-The PC matrix of the WCS rotation, see the FITS standard (link above) to
-better understand the PC matrix.
+The PC matrix of the WCS rotation, see the FITS standard (link above) to 
better understand the PC matrix.
 
 @item --cunit=STR,STR
-The units of each WCS axis, for example @code{deg}. Note that these values
-are part of the FITS standard (link above). MakeProfiles won't complain if
-you use non-standard values, but later usage of them might cause trouble.
+The units of each WCS axis, for example @code{deg}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
 
 @item --ctype=STR,STR
-The type of each WCS axis, for example @code{RA---TAN} and
-@code{DEC--TAN}. Note that these values are part of the FITS standard (link
-above). MakeProfiles won't complain if you use non-standard values, but
-later usage of them might cause trouble.
+The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
+Note that these values are part of the FITS standard (link above).
+MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
 
 @end table
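+
+For example, a full WCS specification for a mock image could look like this (all the values below are purely illustrative assumptions):
+
+@example
+$ astmkprof catalog.txt --mergedsize=1000,1000      \
+            --crpix=500.5,500.5 --crval=150.1,2.2   \
+            --cdelt=0.0001,0.0001 --pc=-1,0,0,1     \
+            --cunit=deg,deg --ctype=RA---TAN,DEC--TAN
+@end example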
 
 @node MakeProfiles log file,  , MakeProfiles output dataset, Invoking astmkprof
 @subsubsection MakeProfiles log file
 
-Besides the final merged dataset of all the profiles, or the individual
-datasets (see @ref{MakeProfiles output dataset}), if the @option{--log}
-option is called MakeProfiles will also create a log file in the current
-directory (where you run MockProfiles). See @ref{Common options} for a full
-description of @option{--log} and other options that are shared between all
-Gnuastro programs. The values for each column are explained in the first
-few commented lines of the log file (starting with @command{#}
-character). Here is a more complete description.
+Besides the final merged dataset of all the profiles, or the individual datasets (see @ref{MakeProfiles output dataset}), if the @option{--log} option is called MakeProfiles will also create a log file in the current directory (where you run MakeProfiles).
+See @ref{Common options} for a full description of @option{--log} and other options that are shared between all Gnuastro programs.
+The values for each column are explained in the first few commented lines of the log file (starting with the @command{#} character).
+Here is a more complete description.
 
 @itemize
 @item
 An ID (row number of profile in input catalog).
 
 @item
-The total magnitude of the profile in the output dataset. When the profile
-does not completely overlap with the output dataset, this will be different
-from your input magnitude.
+The total magnitude of the profile in the output dataset.
+When the profile does not completely overlap with the output dataset, this 
will be different from your input magnitude.
 
 @item
-The number of pixels (in the oversampled image) which used Monte Carlo
-integration and not the central pixel value, see @ref{Sampling from a
-function}.
+The number of pixels (in the oversampled image) which used Monte Carlo 
integration and not the central pixel value, see @ref{Sampling from a function}.
 
 @item
 The fraction of flux in the Monte Carlo integrated pixels.
 
 @item
-If an individual image was created, this column will have a value of
-@code{1}, otherwise it will have a value of @code{0}.
+If an individual image was created, this column will have a value of @code{1}, 
otherwise it will have a value of @code{0}.
 @end itemize
 
 
@@ -17970,13 +16090,9 @@ If an individual image was created, this column will 
have a value of
 @section MakeNoise
 
 @cindex Noise
-Real data are always buried in noise, therefore to finalize a
-simulation of real data (for example to test our observational
-algorithms) it is essential to add noise to the mock profiles created
-with MakeProfiles, see @ref{MakeProfiles}. Below, the general
-principles and concepts to help understand how noise is quantified is
-discussed.  MakeNoise options and argument are then discussed in
-@ref{Invoking astmknoise}.
+Real data are always buried in noise; therefore, to finalize a simulation of real data (for example to test our observational algorithms) it is essential to add noise to the mock profiles created with MakeProfiles, see @ref{MakeProfiles}.
+Below, the general principles and concepts that help in understanding how noise is quantified are discussed.
+MakeNoise options and arguments are then discussed in @ref{Invoking astmknoise}.
 
 @menu
 * Noise basics::                Noise concepts and definitions.
@@ -17990,17 +16106,11 @@ discussed.  MakeNoise options and argument are then 
discussed in
 
 @cindex Noise
 @cindex Image noise
-Deep astronomical images, like those used in extragalactic studies,
-seriously suffer from noise in the data. Generally speaking, the sources of
-noise in an astronomical image are photon counting noise and Instrumental
-noise which are discussed in @ref{Photon counting noise} and
-@ref{Instrumental noise}. This review finishes with @ref{Generating random
-numbers} which is a short introduction on how random numbers are generated.
-We will see that while software random number generators are not perfect,
-they allow us to obtain a reproducible series of random numbers through
-setting the random number generator function and seed value. Therefore in
-this section, we'll also discuss how you can set these two parameters in
-Gnuastro's programs (including MakeNoise).
+Deep astronomical images, like those used in extragalactic studies, seriously 
suffer from noise in the data.
+Generally speaking, the sources of noise in an astronomical image are photon counting noise and instrumental noise, which are discussed in @ref{Photon counting noise} and @ref{Instrumental noise}.
+This review finishes with @ref{Generating random numbers}, which is a short introduction to how random numbers are generated.
+We will see that while software random number generators are not perfect, they 
allow us to obtain a reproducible series of random numbers through setting the 
random number generator function and seed value.
+Therefore in this section, we'll also discuss how you can set these two 
parameters in Gnuastro's programs (including MakeNoise).
 
 @menu
 * Photon counting noise::       Poisson noise
@@ -18017,98 +16127,60 @@ Gnuastro's programs (including MakeNoise).
 @cindex Poisson distribution
 @cindex Photon counting noise
 @cindex Poisson, Sim@'eon Denis
-With the very accurate electronics used in today's detectors, photon
-counting noise@footnote{In practice, we are actually counting the electrons
-that are produced by each photon, not the actual photons.} is the most
-significant source of uncertainty in most datasets. To understand this
-noise (error in counting), we need to take a closer look at how a
-distribution produced by counting can be modeled as a parametric function.
-
-Counting is an inherently discrete operation, which can only produce
-positive (including zero) integer outputs. For example we can't count
-@mymath{3.2} or @mymath{-2} of anything. We only count @mymath{0},
-@mymath{1}, @mymath{2}, @mymath{3} and so on. The distribution of values,
-as a result of counting efforts is formally known as the
-@url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson
-distribution}. It is associated to Sim@'eon Denis Poisson, because he
-discussed it while working on the number of wrongful convictions in court
-cases in his 1837 book@footnote{[From Wikipedia] Poisson's result was also
-derived in a previous study by Abraham de Moivre in 1711. Therefore some
-people suggest it should rightly be called the de Moivre distribution.}.
+With the very accurate electronics used in today's detectors, photon counting 
noise@footnote{In practice, we are actually counting the electrons that are 
produced by each photon, not the actual photons.} is the most significant 
source of uncertainty in most datasets.
+To understand this noise (error in counting), we need to take a closer look at 
how a distribution produced by counting can be modeled as a parametric function.
+
+Counting is an inherently discrete operation, which can only produce positive 
(including zero) integer outputs.
+For example we can't count @mymath{3.2} or @mymath{-2} of anything.
+We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
+The distribution of values as a result of counting efforts is formally known as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson distribution}.
+It is associated with Sim@'eon Denis Poisson, because he discussed it while working on the number of wrongful convictions in court cases in his 1837 book@footnote{[From Wikipedia] Poisson's result was also derived in a previous study by Abraham de Moivre in 1711.
+Therefore some people suggest it should rightly be called the de Moivre 
distribution.}.
 
 @cindex Probability density function
-Let's take @mymath{\lambda} to represent the expected mean count of
-something. Furthermore, let's take @mymath{k} to represent the result of
-one particular counting attempt. The probability density function of
-getting @mymath{k} counts (in each attempt, given the expected/mean count
-of @mymath{\lambda}) can be written as:
+Let's take @mymath{\lambda} to represent the expected mean count of something.
+Furthermore, let's take @mymath{k} to represent the result of one particular 
counting attempt.
+The probability density function of getting @mymath{k} counts (in each 
attempt, given the expected/mean count of @mymath{\lambda}) can be written as:
 
 @cindex Poisson distribution
-@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2,
-3, \dots @}}
+@dispmath{f(k)={\lambda^k \over k!} e^{-\lambda},\quad k\in @{0, 1, 2, 3, 
\dots @}}
 
 @cindex Skewed Poisson distribution
-Because the Poisson distribution is only applicable to positive values
-(note the factorial operator, which only applies to non-negative integers),
-naturally it is very skewed when @mymath{\lambda} is near zero. One
-qualitative way to understand this behavior is that there simply aren't
-enough integers smaller than @mymath{\lambda}, than integers that are
-larger than it. Therefore to accommodate all possibilities/counts, it has
-to be strongly skewed when @mymath{\lambda} is small.
+Because the Poisson distribution is only applicable to positive values (note 
the factorial operator, which only applies to non-negative integers), naturally 
it is very skewed when @mymath{\lambda} is near zero.
+One qualitative way to understand this behavior is that there are simply fewer integers smaller than @mymath{\lambda} than there are integers larger than it.
+Therefore to accommodate all possibilities/counts, it has to be strongly 
skewed when @mymath{\lambda} is small.
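+
+As a concrete illustration (the numbers are only an example), for @mymath{\lambda=2} the probabilities are @mymath{f(0)\approx0.14}, @mymath{f(1)=f(2)\approx0.27}, @mymath{f(3)\approx0.18} and @mymath{f(4)\approx0.09}: the distribution rises sharply from zero counts, but declines only slowly towards larger counts.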
 
 @cindex Compare Poisson and Gaussian
-As @mymath{\lambda} becomes larger, the distribution becomes more and more
-symmetric. A very useful property of the Poisson distribution is that the
-mean value is also its variance.  When @mymath{\lambda} is very large, say
-@mymath{\lambda>1000}, then the
-@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian)
-distribution}, is an excellent approximation of the Poisson distribution
-with mean @mymath{\mu=\lambda} and standard deviation
-@mymath{\sigma=\sqrt{\lambda}}. In other words, a Poisson distribution
-(with a sufficiently large @mymath{\lambda}) is simply a Gaussian that only
-has one free parameter (@mymath{\mu=\lambda} and
-@mymath{\sigma=\sqrt{\lambda}}), instead of the two parameters (independent
-@mymath{\mu} and @mymath{\sigma}) that it originally has.
+As @mymath{\lambda} becomes larger, the distribution becomes more and more 
symmetric.
+A very useful property of the Poisson distribution is that the mean value is 
also its variance.
+When @mymath{\lambda} is very large, say @mymath{\lambda>1000}, then the 
@url{https://en.wikipedia.org/wiki/Normal_distribution, Normal (Gaussian) 
distribution}, is an excellent approximation of the Poisson distribution with 
mean @mymath{\mu=\lambda} and standard deviation @mymath{\sigma=\sqrt{\lambda}}.
+In other words, a Poisson distribution (with a sufficiently large 
@mymath{\lambda}) is simply a Gaussian that only has one free parameter 
(@mymath{\mu=\lambda} and @mymath{\sigma=\sqrt{\lambda}}), instead of the two 
parameters (independent @mymath{\mu} and @mymath{\sigma}) that it originally 
has.
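+
+For example (with illustrative numbers), a pixel with an expected count of @mymath{\lambda=10000} will scatter with @mymath{\sigma=\sqrt{10000}=100}, only a 1% fluctuation about the mean, and its distribution is effectively Gaussian.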
 
 @cindex Sky value
 @cindex Background flux
 @cindex Undetected objects
-In real situations, the photons/flux from our targets are added to a
-certain background flux (observationally, the @emph{Sky} value). The Sky
-value is defined to be the average flux of a region in the dataset with no
-targets. Its physical origin can be the brightness of the atmosphere (for
-ground-based instruments), possible stray light within the imaging
-instrument, the average flux of undetected targets, etc. The Sky value
-is thus an ideal definition, because in real datasets, what lies deep in
-the noise (far lower than the detection limit) is never known@footnote{In a
-real image, a relatively large number of very faint objects can been fully
-buried in the noise and never detected. These undetected objects will bias
-the background measurement to slightly larger values. Our best
-approximation is thus to simply assume they are uniform, and consider their
-average effect. See Figure 1 (a.1 and a.2) and Section 2.2 in
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}. To
-account for all of these, the sky value is defined to be the average
-count/value of the undetected regions in the image. In a mock
-image/dataset, we have the luxury of setting the background (Sky) value.
+In real situations, the photons/flux from our targets are added to a certain 
background flux (observationally, the @emph{Sky} value).
+The Sky value is defined to be the average flux of a region in the dataset 
with no targets.
+Its physical origin can be the brightness of the atmosphere (for ground-based 
instruments), possible stray light within the imaging instrument, the average 
flux of undetected targets, etc.
+The Sky value is thus an idealized definition, because in real datasets, what lies deep in the noise (far lower than the detection limit) is never known@footnote{In a real image, a relatively large number of very faint objects can be fully buried in the noise and never detected.
+These undetected objects will bias the background measurement to slightly 
larger values.
+Our best approximation is thus to simply assume they are uniform, and consider 
their average effect.
+See Figure 1 (a.1 and a.2) and Section 2.2 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.}.
+To account for all of these, the sky value is defined to be the average 
count/value of the undetected regions in the image.
+In a mock image/dataset, we have the luxury of setting the background (Sky) 
value.
 
 @cindex Simulating noise
 @cindex Noise simulation
-In each element of the dataset (pixel in an image), the flux is the sum of
-contributions from various sources (after convolution by the PSF, see
-@ref{PSF}). Let's name the convolved sum of possibly overlapping objects,
-@mymath{I_{nn}}.  @mymath{nn} representing `no noise'. For now, let's
-assume the background (@mymath{B}) is constant and sufficiently high for
-the Poisson distribution to be approximated by a Gaussian. Then the flux
-after adding noise is a random value taken from a Gaussian distribution
-with the following mean (@mymath{\mu}) and standard deviation
-(@mymath{\sigma}):
+In each element of the dataset (pixel in an image), the flux is the sum of 
contributions from various sources (after convolution by the PSF, see 
@ref{PSF}).
+Let's name the convolved sum of possibly overlapping objects @mymath{I_{nn}}, with @mymath{nn} representing `no noise'.
+For now, let's assume the background (@mymath{B}) is constant and sufficiently 
high for the Poisson distribution to be approximated by a Gaussian.
+Then the flux after adding noise is a random value taken from a Gaussian 
distribution with the following mean (@mymath{\mu}) and standard deviation 
(@mymath{\sigma}):
 
 @dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{B+I_{nn}}}
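+
+For example (with numbers chosen purely for illustration), a background of @mymath{B=1000} counts and an object contributing @mymath{I_{nn}=100} counts in a pixel give
+
+@dispmath{\mu=1100, \quad \sigma=\sqrt{1100}\approx33.2,}
+
+@noindent
+so the object's contribution to that pixel stands at roughly @mymath{100/33.2\approx3} times the noise.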
 
-Since this type of noise is inherent in the objects we study, it is
-usually measured on the same scale as the astronomical objects, namely
-the magnitude system, see @ref{Flux Brightness and magnitude}. It is
-then internally converted to the flux scale for further processing.
+Since this type of noise is inherent in the objects we study, it is usually 
measured on the same scale as the astronomical objects, namely the magnitude 
system, see @ref{Flux Brightness and magnitude}.
+It is then internally converted to the flux scale for further processing.
 
 @node Instrumental noise, Final noised pixel value, Photon counting noise, 
Noise basics
 @subsubsection Instrumental noise
@@ -18116,38 +16188,27 @@ then internally converted to the flux scale for 
further processing.
 @cindex Readout noise
 @cindex Instrumental noise
 @cindex Noise, instrumental
-While taking images with a camera, a dark current is fed to the pixels, the
-variation of the value of this dark current over the pixels, also adds to
-the final image noise. Another source of noise is the readout noise that is
-produced by the electronics in the detector. Specifically, the parts that
-attempt to digitize the voltage produced by the photo-electrons in the
-analog to digital converter. With the current generation of instruments,
-this source of noise is not as significant as the noise due to the
-background Sky discussed in @ref{Photon counting noise}.
-
-Let @mymath{C} represent the combined standard deviation of all these
-instrumental sources of noise. When only this source of noise is present,
-the noised pixel value would be a random value chosen from a Gaussian
-distribution with
+While taking images with a camera, a dark current is fed to the pixels; the variation of this dark current over the pixels also adds to the final image noise.
+Another source of noise is the readout noise, which is produced by the electronics in the detector: specifically, the parts that digitize the voltage produced by the photo-electrons in the analog-to-digital converter.
+With the current generation of instruments, this source of noise is not as 
significant as the noise due to the background Sky discussed in @ref{Photon 
counting noise}.
+
+Let @mymath{C} represent the combined standard deviation of all these 
instrumental sources of noise.
+When only this source of noise is present, the noised pixel value would be a 
random value chosen from a Gaussian distribution with
 
 @dispmath{\mu=I_{nn}, \quad \sigma=\sqrt{C^2+I_{nn}}}
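+
+For example, with the purely illustrative values @mymath{C=20} and @mymath{I_{nn}=100} (in electrons), the standard deviation becomes @mymath{\sigma=\sqrt{400+100}\approx22.4}.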
 
 @cindex ADU
 @cindex Gain
 @cindex Counts
-This type of noise is independent of the signal in the dataset, it is only
-determined by the instrument. So the flux scale (and not magnitude scale)
-is most commonly used for this type of noise. In practice, this value is
-usually reported in analog-to-digital units or ADUs, not flux or electron
-counts. The gain value of the device can be used to convert between these
-two, see @ref{Flux Brightness and magnitude}.
+This type of noise is independent of the signal in the dataset; it is determined only by the instrument.
+So the flux scale (and not magnitude scale) is most commonly used for this 
type of noise.
+In practice, this value is usually reported in analog-to-digital units or 
ADUs, not flux or electron counts.
+The gain value of the device can be used to convert between these two, see 
@ref{Flux Brightness and magnitude}.
 
 @node Final noised pixel value, Generating random numbers, Instrumental noise, 
Noise basics
 @subsubsection Final noised pixel value
-Based on the discussions in @ref{Photon counting noise} and
-@ref{Instrumental noise}, depending on the values you specify for
-@mymath{B} and @mymath{C} from the above, the final noised value for each
-pixel is a random value chosen from a Gaussian distribution with
+Based on the discussions in @ref{Photon counting noise} and @ref{Instrumental 
noise}, depending on the values you specify for @mymath{B} and @mymath{C} from 
the above, the final noised value for each pixel is a random value chosen from 
a Gaussian distribution with
 
 @dispmath{\mu=B+I_{nn}, \quad \sigma=\sqrt{C^2+B+I_{nn}}}
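+
+As a minimal sketch of this recipe in C (an illustration, not Gnuastro's internal implementation; the values of @mymath{B}, @mymath{C} and @mymath{I_{nn}} below are hypothetical), one noised pixel value can be drawn with GSL's @code{gsl_ran_gaussian}:
+
+@example
+#include <math.h>
+#include <stdio.h>
+#include <gsl/gsl_rng.h>
+#include <gsl/gsl_randist.h>
+
+int main(void)
+@{
+  double B=1000.0, C=20.0, Inn=100.0;  /* Example values.          */
+  double mu, sigma, value;
+  gsl_rng *rng=gsl_rng_alloc(gsl_rng_mt19937);
+
+  mu    = B + Inn;                     /* Mean of the Gaussian.    */
+  sigma = sqrt(C*C + B + Inn);         /* Its standard deviation.  */
+
+  /* 'gsl_ran_gaussian' returns a zero-mean Gaussian sample.       */
+  value = mu + gsl_ran_gaussian(rng, sigma);
+
+  printf("noised pixel value: %g\n", value);
+  gsl_rng_free(rng);
+  return 0;
+@}
+@end example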
 
@@ -18158,72 +16219,45 @@ pixel is a random value chosen from a Gaussian 
distribution with
 
 @cindex Random numbers
 @cindex Numbers, random
-As discussed above, to generate noise we need to make random samples
-of a particular distribution. So it is important to understand some
-general concepts regarding the generation of random numbers. For a
-very complete and nice introduction we strongly advise reading Donald
-Knuth's ``The art of computer programming'', volume 2, chapter
-3@footnote{Knuth, Donald. 1998. The art of computer
-programming. Addison--Wesley. ISBN 0-201-89684-2 }. Quoting from the
-GNU Scientific Library manual, ``If you don't own it, you should stop
-reading right now, run to the nearest bookstore, and buy
-it''@footnote{For students, running to the library might be more
-affordable!}!
+As discussed above, to generate noise we need to make random samples of a 
particular distribution.
+So it is important to understand some general concepts regarding the 
generation of random numbers.
+For a very complete and nice introduction we strongly advise reading Donald Knuth's ``The art of computer programming'', volume 2, chapter 3@footnote{Knuth, Donald. 1998. The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2.}.
+Quoting from the GNU Scientific Library manual, ``If you don't own it, you 
should stop reading right now, run to the nearest bookstore, and buy 
it''@footnote{For students, running to the library might be more affordable!}!
 
 @cindex Pseudo-random numbers
 @cindex Numbers, pseudo-random
-Using only software, we can only produce what is called a psuedo-random
-sequence of numbers. A true random number generator is a hardware (let's
-assume we have made sure it has no systematic biases), for example
-throwing dice or flipping coins (which have remained from the ancient
-times). More modern hardware methods use atmospheric noise, thermal
-noise or other types of external electromagnetic or quantum
-phenomena. All pseudo-random number generators (software) require a
-seed to be the basis of the generation. The advantage of having a seed
-is that if you specify the same seed for multiple runs, you will get
-an identical sequence of random numbers which allows you to reproduce
-the same final noised image.
+Using only software, we can only produce what is called a pseudo-random sequence of numbers.
+A true random number generator is a hardware device (let's assume we have made sure it has no systematic biases), for example throwing dice or flipping coins (which have remained with us since ancient times).
+More modern hardware methods use atmospheric noise, thermal noise or other 
types of external electromagnetic or quantum phenomena.
+All pseudo-random number generators (software) require a seed to be the basis 
of the generation.
+The advantage of having a seed is that if you specify the same seed for 
multiple runs, you will get an identical sequence of random numbers which 
allows you to reproduce the same final noised image.
 
 @cindex Environment variables
 @cindex GNU Scientific Library
-The programs in GNU Astronomy Utilities (for example MakeNoise or
-MakeProfiles) use the GNU Scientific Library (GSL) to generate random
-numbers. GSL allows the user to set the random number generator
-through environment variables, see @ref{Installation directory} for an
-introduction to environment variables. In the chapter titled ``Random
-Number Generation'' they have fully explained the various random
-number generators that are available (there are a lot of
-them!). Through the two environment variables @code{GSL_RNG_TYPE} and
-@code{GSL_RNG_SEED} you can specify the generator and its seed
-respectively.
+The programs in GNU Astronomy Utilities (for example MakeNoise or 
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
+GSL allows the user to set the random number generator through environment 
variables, see @ref{Installation directory} for an introduction to environment 
variables.
+In the GSL manual's chapter titled ``Random Number Generation'', the various available random number generators are fully explained (there are a lot of them!).
+Through the two environment variables @code{GSL_RNG_TYPE} and 
@code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
 
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its
-default random number generator type. The default type is sufficient for
-most general applications. If no value is given for the @code{GSL_RNG_SEED}
-environment variable and you have asked Gnuastro to read the seed from the
-environment (through the @option{--envseed} option), then GSL will use the
-default value of each generator to give identical outputs. If you don't
-explicitly tell Gnuastro programs to read the seed value from the
-environment variable, then they will use the system time (accurate to
-within a microsecond) to generate (apparently random) seeds. In this
-manner, every time you run the program, you will get a different random
-number distribution.
-
-There are two ways you can specify values for these environment
-variables. You can call them on the same command-line for example:
+If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default 
random number generator type.
+The default type is sufficient for most general applications.
+If no value is given for the @code{GSL_RNG_SEED} environment variable and you 
have asked Gnuastro to read the seed from the environment (through the 
@option{--envseed} option), then GSL will use the default value of each 
generator to give identical outputs.
+If you don't explicitly tell Gnuastro programs to read the seed value from the 
environment variable, then they will use the system time (accurate to within a 
microsecond) to generate (apparently random) seeds.
+In this manner, every time you run the program, you will get a different sequence of random numbers.
+
+There are two ways you can specify values for these environment variables.
+You can call them on the same command-line for example:
 
 @example
 $ GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 astmknoise input.fits
 @end example
 
 @noindent
-In this manner the values will only be used for this particular execution
-of MakeNoise. Alternatively, you can define them for the full period of
-your terminal session or script length, using the shell's @command{export}
-command with the two separate commands below (for a script remove the
-@code{$} signs):
+In this manner the values will only be used for this particular execution of 
MakeNoise.
+Alternatively, you can define them for the duration of your terminal session or script, using the shell's @command{export} command with the two separate commands below (for a script, remove the @code{$} signs):
 
 @example
 $ export GSL_RNG_TYPE="taus"
@@ -18233,23 +16267,14 @@ $ export GSL_RNG_SEED=345
 @cindex Startup scripts
 @cindex @file{.bashrc}
 @noindent
-The subsequent programs which use GSL's random number generators will hence
-forth use these values in this session of the terminal you are running or
-while executing this script. In case you want to set fixed values for these
-parameters every time you use the GSL random number generator, you can add
-these two lines to your @file{.bashrc} startup script@footnote{Don't forget
-that if you are going to give your scripts (that use the GSL random number
-generator) to others you have to make sure you also tell them to set these
-environment variable separately. So for scripts, it is best to keep all
-such variable definitions within the script, even if they are within your
-@file{.bashrc}.}, see @ref{Installation directory}.
+The subsequent programs that use GSL's random number generators will henceforth use these values in this terminal session, or while this script is being executed.
+In case you want to set fixed values for these parameters every time you use the GSL random number generator, you can add these two lines to your @file{.bashrc} startup script@footnote{Don't forget that if you are going to give your scripts (that use the GSL random number generator) to others, you have to make sure you also tell them to set these environment variables separately.
+So for scripts, it is best to keep all such variable definitions within the 
script, even if they are within your @file{.bashrc}.}, see @ref{Installation 
directory}.
 
 @cartouche
 @noindent
-@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE}
-and @code{GSL_RNG_SEED} are defined, GSL will report them by default,
-even if you don't use the @option{--envseed} option. For example you
-can see the top few lines of the output of MakeProfiles:
+@strong{NOTE:} If the two environment variables @code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED} are defined, GSL will report them by default, even if you don't use the @option{--envseed} option.
+For example, you can see the top few lines of the output of MakeProfiles:
 
 @example
 $ export GSL_RNG_TYPE="taus"
@@ -18268,22 +16293,18 @@ MakeProfiles finished in 0.111271 seconds
 @noindent
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-The first two output lines (showing the names of the environment
-variables) are printed by GSL before MakeProfiles actually starts
-generating random numbers. The Gnuastro programs will report the
-values they use independently, you should check them for the final
-values used. For example if @option{--envseed} is not given,
-@code{GSL_RNG_SEED} will not be used and the last line shown above
-will not be printed. In the case of MakeProfiles, each profile will
-get its own seed value.
+The first two output lines (showing the names of the environment variables) 
are printed by GSL before MakeProfiles actually starts generating random 
numbers.
+The Gnuastro programs will independently report the values they use; you should check them for the final values used.
+For example if @option{--envseed} is not given, @code{GSL_RNG_SEED} will not 
be used and the last line shown above will not be printed.
+In the case of MakeProfiles, each profile will get its own seed value.
 @end cartouche
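+
+If you are writing your own C program that uses GSL directly (as the Gnuastro programs do), honoring these two environment variables takes a single documented call, @code{gsl_rng_env_setup}.
+The short program below is a minimal sketch of this (an illustration added here, using only GSL's standard interfaces):
+
+@example
+#include <stdio.h>
+#include <gsl/gsl_rng.h>
+
+int main(void)
+@{
+  gsl_rng *rng;
+
+  /* Reads GSL_RNG_TYPE and GSL_RNG_SEED from the environment and
+     sets 'gsl_rng_default' and 'gsl_rng_default_seed'.           */
+  gsl_rng_env_setup();
+
+  /* 'gsl_rng_alloc' seeds the generator with the default seed.   */
+  rng=gsl_rng_alloc(gsl_rng_default);
+  printf("generator: %s, first sample: %lu\n",
+         gsl_rng_name(rng), gsl_rng_get(rng));
+
+  gsl_rng_free(rng);
+  return 0;
+@}
+@end example
+
+@noindent
+Running it as @command{GSL_RNG_TYPE="taus" GSL_RNG_SEED=345 ./a.out} will always print the same first sample, just like the Gnuastro programs do with @option{--envseed}.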
 
 
 @node Invoking astmknoise,  , Noise basics, MakeNoise
 @subsection Invoking MakeNoise
 
-MakeNoise will add noise to an existing image. The executable name is
-@file{astmknoise} with the following general template
+MakeNoise will add noise to an existing image.
+The executable name is @file{astmknoise} with the following general template
 
 @example
 $ astmknoise [OPTION ...] InputImage.fits
@@ -18302,55 +16323,44 @@ $ astmknoise --background=-10 -z0 --instrumental=20 
mockimage.fits
 @end example
 
 @noindent
-If actual processing is to be done, the input image is a mandatory
-argument. The full list of options common to all the programs in Gnuastro
-can be seen in @ref{Common options}. The type (see @ref{Numeric data
-types}) of the output can be specified with the @option{--type} option, see
-@ref{Input output options}. The header of the output FITS file keeps all
-the parameters that were influential in making it. This is done for future
-reproducibility.
+If actual processing is to be done, the input image is a mandatory argument.
+The full list of options common to all the programs in Gnuastro can be seen in 
@ref{Common options}.
+The type (see @ref{Numeric data types}) of the output can be specified with 
the @option{--type} option, see @ref{Input output options}.
+The header of the output FITS file keeps all the parameters that were 
influential in making it.
+This is done for future reproducibility.
 
 @table @option
 
 @item -s FLT
 @itemx --sigma=FLT
-The total noise sigma in the same units as the pixel values. With this
-option, the @option{--background}, @option{--zeropoint} and
-@option{--instrumental} will be ignored. With this option, the noise will
-be independent of the pixel values (which is not realistic, see @ref{Photon
-counting noise}). Hence it is only useful if you are working on low surface
-brightness regions where the change in pixel value (and thus real noise) is
-insignificant.
+The total noise sigma in the same units as the pixel values.
+With this option, @option{--background}, @option{--zeropoint} and @option{--instrumental} will be ignored.
+The noise will then be independent of the pixel values (which is not realistic, see @ref{Photon counting noise}).
+Hence it is only useful if you are working on low surface brightness regions where the change in pixel value (and thus the real noise) is insignificant.
 
 @item -b FLT
 @itemx --background=FLT
-The background pixel value for the image in units of magnitudes, see
-@ref{Photon counting noise} and @ref{Flux Brightness and magnitude}.
+The background pixel value for the image in units of magnitudes, see 
@ref{Photon counting noise} and @ref{Flux Brightness and magnitude}.
 
 @item -z FLT
 @itemx --zeropoint=FLT
-The zeropoint magnitude used to convert the value of @option{--background}
-(in units of magnitude) to flux, see @ref{Flux Brightness and magnitude}.
+The zeropoint magnitude used to convert the value of @option{--background} (in 
units of magnitude) to flux, see @ref{Flux Brightness and magnitude}.
 
 @item -i FLT
 @itemx --instrumental=FLT
-The instrumental noise which is in units of flux, see @ref{Instrumental
-noise}.
+The instrumental noise which is in units of flux, see @ref{Instrumental noise}.
 
 @item -e
 @itemx --envseed
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-Use the @code{GSL_RNG_SEED} environment variable for the seed used in
-the random number generator, see @ref{Generating random numbers}. With
-this option, the output image noise is always going to be identical
-(or reproducible).
+Use the @code{GSL_RNG_SEED} environment variable for the seed used in the 
random number generator, see @ref{Generating random numbers}.
+With this option, the output image noise is always going to be identical (or 
reproducible).
 
 @item -d
 @itemx --doubletype
-Save the output in the double precision floating point format that was
-used internally. This option will be most useful if the input images
-were of integer types.
+Save the output in the double precision floating point format that was used 
internally.
+This option will be most useful if the input images were of integer types.
 
 @end table
 
@@ -18374,14 +16384,9 @@ were of integer types.
 @node High-level calculations, Library, Modeling and fittings, Top
 @chapter High-level calculations
 
-After the reduction of raw data (for example with the programs in @ref{Data
-manipulation}) you will have reduced images/data ready for
-processing/analyzing (for example with the programs in @ref{Data
-analysis}). But the processed/analyzed data (or catalogs) are still not
-enough to derive any scientific result. Even higher-level analysis is still
-needed to convert the observed magnitudes, sizes or volumes into physical
-quantities that we associate with each catalog entry or detected object
-which is the purpose of the tools in this section.
+After the reduction of raw data (for example with the programs in @ref{Data 
manipulation}) you will have reduced images/data ready for processing/analyzing 
(for example with the programs in @ref{Data analysis}).
+But the processed/analyzed data (or catalogs) are still not enough to derive 
any scientific result.
+Even higher-level analysis is still needed to convert the observed magnitudes, sizes or volumes into physical quantities that we associate with each catalog entry or detected object; this is the purpose of the tools in this section.
 
 
 
@@ -18394,23 +16399,14 @@ which is the purpose of the tools in this section.
 @node CosmicCalculator,  , High-level calculations, High-level calculations
 @section CosmicCalculator
 
-To derive higher-level information regarding our sources in
-extra-galactic astronomy, cosmological calculations are necessary. In
-Gnuastro, CosmicCalculator is in charge of such calculations. Before
-discussing how CosmicCalculator is called and operates (in
-@ref{Invoking astcosmiccal}), it is important to provide a rough but
-mostly self sufficient review of the basics and the equations used in
-the analysis. In @ref{Distance on a 2D curved space} the basic idea of
-understanding distances in a curved and expanding 2D universe (which
-we can visualize) are reviewed. Having solidified the concepts there,
-in @ref{Extending distance concepts to 3D}, the formalism is extended
-to the 3D universe we are trying to study in our research.
-
-The focus here is obtaining a physical insight into these equations
-(mainly for the use in real observational studies). There are many
-books thoroughly deriving and proving all the equations with all
-possible initial conditions and assumptions for any abstract universe,
-interested readers can study those books.
+To derive higher-level information regarding our sources in extra-galactic 
astronomy, cosmological calculations are necessary.
+In Gnuastro, CosmicCalculator is in charge of such calculations.
+Before discussing how CosmicCalculator is called and operates (in @ref{Invoking astcosmiccal}), it is important to provide a rough but mostly self-sufficient review of the basics and the equations used in the analysis.
+In @ref{Distance on a 2D curved space}, the basic idea of understanding distances in a curved and expanding 2D universe (which we can visualize) is reviewed.
+Having solidified the concepts there, in @ref{Extending distance concepts to 
3D}, the formalism is extended to the 3D universe we are trying to study in our 
research.
+
+The focus here is obtaining a physical insight into these equations (mainly for use in real observational studies).
+There are many books that thoroughly derive and prove all the equations with all possible initial conditions and assumptions for any abstract universe; interested readers can study those books.
 
 @menu
 * Distance on a 2D curved space::  Distances in 2D for simplicity
@@ -18421,49 +16417,29 @@ interested readers can study those books.
 @node Distance on a 2D curved space, Extending distance concepts to 3D, 
CosmicCalculator, CosmicCalculator
 @subsection Distance on a 2D curved space
 
-The observations to date (for example the Planck 2015 results), have not
-measured@footnote{The observations are interpreted under the assumption of
-uniform curvature. For a relativistic alternative to dark energy (and maybe
-also some part of dark matter), non-uniform curvature may be even be more
-critical, but that is beyond the scope of this brief explanation.} the
-presence of significant curvature in the universe. However to be generic
-(and allow its measurement if it does in fact exist), it is very important
-to create a framework that allows non-zero uniform curvature. However, this
-section is not intended to be a fully thorough and mathematically complete
-derivation of these concepts. There are many references available for such
-reviews that go deep into the abstract mathematical proofs. The emphasis
-here is on visualization of the concepts for a beginner.
-
-As 3D beings, it is difficult for us to mentally create (visualize) a
-picture of the curvature of a 3D volume. Hence, here we will assume a 2D
-surface/space and discuss distances on that 2D surface when it is flat and
-when it is curved. Once the concepts have been created/visualized here, we
-will extend them, in @ref{Extending distance concepts to 3D}, to a real 3D
-spatial @emph{slice} of the Universe we live in and hope to study.
-
-To be more understandable (actively discuss from an observer's point of
-view) let's assume there's an imaginary 2D creature living on the 2D space
-(which @emph{might} be curved in 3D). Here, we will be working with this
-creature in its efforts to analyze distances in its 2D universe. The start
-of the analysis might seem too mundane, but since it is difficult to
-imagine a 3D curved space, it is important to review all the very basic
-concepts thoroughly for an easy transition to a universe that is more
-difficult to visualize (a curved 3D space embedded in 4D).
-
-To start, let's assume a static (not expanding or shrinking), flat 2D
-surface similar to @ref{flatplane} and that the 2D creature is observing
-its universe from point @mymath{A}. One of the most basic ways to
-parameterize this space is through the Cartesian coordinates (@mymath{x},
-@mymath{y}). In @ref{flatplane}, the basic axes of these two coordinates
-are plotted. An infinitesimal change in the direction of each axis is
-written as @mymath{dx} and @mymath{dy}. For each point, the infinitesimal
-changes are parallel with the respective axes and are not shown for
-clarity. Another very useful way of parameterizing this space is through
-polar coordinates. For each point, we define a radius (@mymath{r}) and
-angle (@mymath{\phi}) from a fixed (but arbitrary) reference axis. In
-@ref{flatplane} the infinitesimal changes for each polar coordinate are
-plotted for a random point and a dashed circle is shown for all points with
-the same radius.
+The observations to date (for example the Planck 2015 results) have not measured@footnote{The observations are interpreted under the assumption of uniform curvature.
+For a relativistic alternative to dark energy (and maybe also some part of dark matter), non-uniform curvature may even be more critical, but that is beyond the scope of this brief explanation.} the presence of significant curvature in the universe.
+However, to be generic (and to allow measuring curvature if it does in fact exist), it is very important to create a framework that allows non-zero uniform curvature.
+That said, this section is not intended to be a fully thorough and mathematically complete derivation of these concepts.
+There are many references available for such reviews that go deep into the 
abstract mathematical proofs.
+The emphasis here is on visualization of the concepts for a beginner.
+
+For us, as 3D beings, it is difficult to mentally create (visualize) a picture of the curvature of a 3D volume.
+Hence, here we will assume a 2D surface/space and discuss distances on that 2D 
surface when it is flat and when it is curved.
+Once the concepts have been created/visualized here, we will extend them, in 
@ref{Extending distance concepts to 3D}, to a real 3D spatial @emph{slice} of 
the Universe we live in and hope to study.
+
+To make the discussion more tangible (and to actively discuss it from an observer's point of view), let's assume there's an imaginary 2D creature living on the 2D space (which @emph{might} be curved in 3D).
+Here, we will be working with this creature in its efforts to analyze 
distances in its 2D universe.
+The start of the analysis might seem too mundane, but since it is difficult to 
imagine a 3D curved space, it is important to review all the very basic 
concepts thoroughly for an easy transition to a universe that is more difficult 
to visualize (a curved 3D space embedded in 4D).
+
+To start, let's assume a static (not expanding or shrinking), flat 2D surface 
similar to @ref{flatplane} and that the 2D creature is observing its universe 
from point @mymath{A}.
+One of the most basic ways to parameterize this space is through the Cartesian 
coordinates (@mymath{x}, @mymath{y}).
+In @ref{flatplane}, the basic axes of these two coordinates are plotted.
+An infinitesimal change in the direction of each axis is written as 
@mymath{dx} and @mymath{dy}.
+For each point, the infinitesimal changes are parallel with the respective 
axes and are not shown for clarity.
+Another very useful way of parameterizing this space is through polar 
coordinates.
+For each point, we define a radius (@mymath{r}) and angle (@mymath{\phi}) from 
a fixed (but arbitrary) reference axis.
+In @ref{flatplane} the infinitesimal changes for each polar coordinate are plotted for a random point, and a dashed circle is shown for all points with the same radius.
 
 @float Figure,flatplane
 @center@image{gnuastro-figures/flatplane, 10cm, , }
@@ -18472,122 +16448,78 @@ the same radius.
 plane.}
 @end float
 
-Assuming an object is placed at a certain position, which can be
-parameterized as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general
-infinitesimal change in its position will place it in the coordinates
-@mymath{(x+dx,y+dy)} and @mymath{(r+dr,\phi+d\phi)}. The distance (on the
-flat 2D surface) that is covered by this infinitesimal change in the static
-universe (@mymath{ds_s}, the subscript signifies the static nature of this
-universe) can be written as:
+Assuming an object is placed at a certain position, which can be parameterized 
as @mymath{(x,y)}, or @mymath{(r,\phi)}, a general infinitesimal change in its 
position will place it in the coordinates @mymath{(x+dx,y+dy)} and 
@mymath{(r+dr,\phi+d\phi)}.
+The distance (on the flat 2D surface) that is covered by this infinitesimal 
change in the static universe (@mymath{ds_s}, the subscript signifies the 
static nature of this universe) can be written as:
 
 @dispmath{ds_s^2=dx^2+dy^2=dr^2+r^2d\phi^2}
 
-The main question is this: how can the 2D creature incorporate the
-(possible) curvature in its universe when it's calculating distances? The
-universe that it lives in might equally be a curved surface like
-@ref{sphereandplane}. The answer to this question but for a 3D being (us)
-is the whole purpose to this discussion. Here, we want to give the 2D
-creature (and later, ourselves) the tools to measure distances if the space
-(that hosts the objects) is curved.
-
-@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as
-the curved 2D plane for simplicity. The 2D plane is tangent to the
-spherical shell and only touches it at @mymath{A}. This idea will be
-generalized later. The first step in measuring the distance in a curved
-space is to imagine a third dimension along the @mymath{z} axis as shown in
-@ref{sphereandplane}. For simplicity, the @mymath{z} axis is assumed to
-pass through the center of the spherical shell. Our imaginary 2D creature
-cannot visualize the third dimension or a curved 2D surface within it, so
-the remainder of this discussion is purely abstract for it (similar to us
-having difficulty in visualizing a 3D curved space in 4D). But since we are
-3D creatures, we have the advantage of visualizing the following
-steps. Fortunately the 2D creature is already familiar with our
-mathematical constructs, so it can follow our reasoning.
-
-With the third axis added, a generic infinitesimal change over @emph{the
-full} 3D space corresponds to the distance:
+The main question is this: how can the 2D creature incorporate the (possible) curvature in its universe when it is calculating distances?
+The universe that it lives in might equally be a curved surface like @ref{sphereandplane}.
+Answering this question, but for a 3D being (us), is the whole purpose of this discussion.
+Here, we want to give the 2D creature (and later, ourselves) the tools to 
measure distances if the space (that hosts the objects) is curved.
+
+@ref{sphereandplane} assumes a spherical shell with radius @mymath{R} as the 
curved 2D plane for simplicity.
+The 2D plane is tangent to the spherical shell and only touches it at 
@mymath{A}.
+This idea will be generalized later.
+The first step in measuring the distance in a curved space is to imagine a 
third dimension along the @mymath{z} axis as shown in @ref{sphereandplane}.
+For simplicity, the @mymath{z} axis is assumed to pass through the center of 
the spherical shell.
+Our imaginary 2D creature cannot visualize the third dimension or a curved 2D 
surface within it, so the remainder of this discussion is purely abstract for 
it (similar to us having difficulty in visualizing a 3D curved space in 4D).
+But since we are 3D creatures, we have the advantage of visualizing the 
following steps.
+Fortunately the 2D creature is already familiar with our mathematical 
constructs, so it can follow our reasoning.
+
+With the third axis added, a generic infinitesimal change over @emph{the full} 
3D space corresponds to the distance:
 
 @dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2d\phi^2+dz^2.}
 
 @float Figure,sphereandplane
 @center@image{gnuastro-figures/sphereandplane, 10cm, , }
 
-@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light
-gray) tangent to it at point @mymath{A}.}
+@caption{2D spherical shell (centered on @mymath{O}) and flat plane (light 
gray) tangent to it at point @mymath{A}.}
 @end float
 
-It is very important to recognize that this change of distance is for
-@emph{any} point in the 3D space, not just those changes that occur on the
-2D spherical shell of @ref{sphereandplane}. Recall that our 2D friend can
-only do measurements on the 2D surfaces, not the full 3D space. So we have
-to constrain this general change to any change on the 2D spherical
-shell. To do that, let's look at the arbitrary point @mymath{P} on the 2D
-spherical shell. Its image (@mymath{P'}) on the flat plain is also
-displayed. From the dark gray triangle, we see that
-
-@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These
-relations allow the 2D creature to find the value of @mymath{z} (an
-abstract dimension for it) as a function of r (distance on a flat 2D plane,
-which it can visualize) and thus eliminate @mymath{z}. From
-@mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{z^2-2Rz+r^2=0} and
-solving for @mymath{z}, we find:
+It is very important to recognize that this change of distance is for 
@emph{any} point in the 3D space, not just those changes that occur on the 2D 
spherical shell of @ref{sphereandplane}.
+Recall that our 2D friend can only do measurements on the 2D surfaces, not the 
full 3D space.
+So we have to constrain this general change to any change on the 2D spherical 
shell.
+To do that, let's look at the arbitrary point @mymath{P} on the 2D spherical 
shell.
+Its image (@mymath{P'}) on the flat plane is also displayed.
+From the dark gray triangle, we see that
+
+@dispmath{\sin\theta={r\over R},\quad\cos\theta={R-z\over R}.}These relations allow the 2D creature to find the value of @mymath{z} (an abstract dimension for it) as a function of @mymath{r} (distance on a flat 2D plane, which it can visualize) and thus eliminate @mymath{z}.
+From @mymath{\sin^2\theta+\cos^2\theta=1}, we get @mymath{r^2/R^2+(R-z)^2/R^2=1}, which simplifies to @mymath{z^2-2Rz+r^2=0}, and solving for @mymath{z}, we find:
 
 @dispmath{z=R\left(1\pm\sqrt{1-{r^2\over R^2}}\right).}
 
-The @mymath{\pm} can be understood from @ref{sphereandplane}: For each
-@mymath{r}, there are two points on the sphere, one in the upper hemisphere
-and one in the lower hemisphere. An infinitesimal change in @mymath{r},
-will create the following infinitesimal change in @mymath{z}:
+The @mymath{\pm} can be understood from @ref{sphereandplane}: For each 
@mymath{r}, there are two points on the sphere, one in the upper hemisphere and 
one in the lower hemisphere.
+An infinitesimal change in @mymath{r} will create the following infinitesimal change in @mymath{z}:
 
 @dispmath{dz={\mp r\over R}\left(1\over
-\sqrt{1-{r^2/R^2}}\right)dr.}Using the positive signed equation
-instead of @mymath{dz} in the @mymath{ds_s^2} equation above, we get:
+\sqrt{1-{r^2/R^2}}\right)dr.}Substituting the positive-signed expression for @mymath{dz} in the @mymath{ds_s^2} equation above, we get:
 
 @dispmath{ds_s^2={dr^2\over 1-r^2/R^2}+r^2d\phi^2.}
 
-The derivation above was done for a spherical shell of radius @mymath{R} as
-a curved 2D surface. To generalize it to any surface, we can define
-@mymath{K=1/R^2} as the curvature parameter. Then the general infinitesimal
-change in a static universe can be written as:
+The derivation above was done for a spherical shell of radius @mymath{R} as a 
curved 2D surface.
+To generalize it to any surface, we can define @mymath{K=1/R^2} as the 
curvature parameter.
+Then the general infinitesimal change in a static universe can be written as:
 
 @dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2d\phi^2.}
 
-Therefore, when @mymath{K>0} (and curvature is the same everywhere), we
-have a finite universe, where @mymath{r} cannot become larger than
-@mymath{R} as in @ref{sphereandplane}. When @mymath{K=0}, we have a flat
-plane (@ref{flatplane}) and a negative @mymath{K} will correspond to an
-imaginary @mymath{R}. The latter two cases may be infinite in area (which
-is not a simple concept, but mathematically can be modeled with @mymath{r}
-extending infinitely), or finite-area (like a cylinder is flat everywhere
-with @mymath{ds_s^2={dx^2 + dy^2}}, but finite in one direction in size).
+Therefore, when @mymath{K>0} (and curvature is the same everywhere), we have a 
finite universe, where @mymath{r} cannot become larger than @mymath{R} as in 
@ref{sphereandplane}.
+When @mymath{K=0}, we have a flat plane (@ref{flatplane}) and a negative 
@mymath{K} will correspond to an imaginary @mymath{R}.
+The latter two cases may be infinite in area (which is not a simple concept, but mathematically can be modeled with @mymath{r} extending infinitely), or finite in area (for example a cylinder, which is flat everywhere with @mymath{ds_s^2={dx^2 + dy^2}}, but finite in one direction).
 
 @cindex Proper distance
-A very important issue that can be discussed now (while we are still in 2D
-and can actually visualize things) is that @mymath{\overrightarrow{r}} is
-tangent to the curved space at the observer's position. In other words, it
-is on the gray flat surface of @ref{sphereandplane}, even when the universe
-if curved: @mymath{\overrightarrow{r}=P'-A}. Therefore for the point
-@mymath{P} on a curved space, the raw coordinate @mymath{r} is the distance
-to @mymath{P'}, not @mymath{P}. The distance to the point @mymath{P} (at a
-specific coordinate @mymath{r} on the flat plane) over the curved surface
-(thick line in @ref{sphereandplane}) is called the @emph{proper distance}
-and is displayed with @mymath{l}. For the specific example of
-@ref{sphereandplane}, the proper distance can be calculated with:
-@mymath{l=R\theta} (@mymath{\theta} is in radians). using the
-@mymath{\sin\theta} relation found above, we can find @mymath{l} as a
-function of @mymath{r}:
+A very important issue that can be discussed now (while we are still in 2D and 
can actually visualize things) is that @mymath{\overrightarrow{r}} is tangent 
to the curved space at the observer's position.
+In other words, it is on the gray flat surface of @ref{sphereandplane}, even when the universe is curved: @mymath{\overrightarrow{r}=P'-A}.
+Therefore for the point @mymath{P} on a curved space, the raw coordinate 
@mymath{r} is the distance to @mymath{P'}, not @mymath{P}.
+The distance to the point @mymath{P} (at a specific coordinate @mymath{r} on 
the flat plane) over the curved surface (thick line in @ref{sphereandplane}) is 
called the @emph{proper distance} and is displayed with @mymath{l}.
+For the specific example of @ref{sphereandplane}, the proper distance can be 
calculated with: @mymath{l=R\theta} (@mymath{\theta} is in radians).
+Using the @mymath{\sin\theta} relation found above, we can find @mymath{l} as 
a function of @mymath{r}:
 
 @dispmath{\theta=\sin^{-1}\left({r\over R}\right)\quad\rightarrow\quad
 l(r)=R\sin^{-1}\left({r\over R}\right)}
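+
+As a quick sanity check (an addition here, using the Taylor expansion @mymath{\sin^{-1}x=x+x^3/6+\cdots}), note that for distances much smaller than the radius of curvature (@mymath{r\ll R}), the proper distance reduces to the flat result:
+
+@dispmath{l(r)=R\sin^{-1}\left({r\over R}\right)=r+{r^3\over 6R^2}+\cdots\approx r.}
+
+@noindent
+In other words, curvature only becomes noticeable over distances comparable to @mymath{R}.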
 
 
-@mymath{R} is just an arbitrary constant and can be directly found from
-@mymath{K}, so for cleaner equations, it is common practice to set
-@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}. Also note that when
-@mymath{R=1}, then @mymath{l=\theta}. Generally, depending on the
-curvature, in a @emph{static} universe the proper distance can be written
-as a function of the coordinate @mymath{r} as (from now on we are assuming
-@mymath{R=1}):
+@mymath{R} is just an arbitrary constant and can be directly found from 
@mymath{K}, so for cleaner equations, it is common practice to set 
@mymath{R=1}, which gives: @mymath{l(r)=\sin^{-1}r}.
+Also note that when @mymath{R=1}, then @mymath{l=\theta}.
+Generally, depending on the curvature, in a @emph{static} universe the proper 
distance can be written as a function of the coordinate @mymath{r} as (from now 
on we are assuming @mymath{R=1}):
 
 @dispmath{l(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
 l(r)=r\quad(K=0),\quad\quad l(r)=\sinh^{-1}(r)\quad(K<0).}With
@@ -18597,56 +16529,32 @@ more simpler and abstract form of
 @dispmath{ds_s^2=dl^2+r^2d\phi^2.}
 
 @cindex Comoving distance
-Until now, we had assumed a static universe (not changing with time). But
-our observations so far appear to indicate that the universe is expanding
-(it isn't static). Since there is no reason to expect the observed
-expansion is unique to our particular position of the universe, we expect
-the universe to be expanding at all points with the same rate at the same
-time. Therefore, to add a time dependence to our distance measurements, we
-can include a multiplicative scaling factor, which is a function of time:
-@mymath{a(t)}. The functional form of @mymath{a(t)} comes from the
-cosmology, the physics we assume for it: general relativity, and the choice
-of whether the universe is uniform (`homogeneous') in density and curvature
-or inhomogeneous. In this section, the functional form of @mymath{a(t)} is
-irrelevant, so we can avoid these issues.
-
-With this scaling factor, the proper distance will also depend on time. As
-the universe expands, the distance between two given points will shift to
-larger values. We thus define a distance measure, or coordinate, that is
-independent of time and thus doesn't `move'. We call it the @emph{comoving
-distance} and display with @mymath{\chi} such that:
-@mymath{l(r,t)=\chi(r)a(t)}.  We have therefore, shifted the @mymath{r}
-dependence of the proper distance we derived above for a static universe to
-the comoving distance:
+Until now, we had assumed a static universe (not changing with time).
+But our observations so far appear to indicate that the universe is expanding 
(it isn't static).
+Since there is no reason to expect the observed expansion is unique to our particular position in the universe, we expect the universe to be expanding at all points with the same rate at the same time.
+Therefore, to add a time dependence to our distance measurements, we can 
include a multiplicative scaling factor, which is a function of time: 
@mymath{a(t)}.
+The functional form of @mymath{a(t)} comes from the cosmology, the physics we 
assume for it: general relativity, and the choice of whether the universe is 
uniform (`homogeneous') in density and curvature or inhomogeneous.
+In this section, the functional form of @mymath{a(t)} is irrelevant, so we can 
avoid these issues.
+
+With this scaling factor, the proper distance will also depend on time.
+As the universe expands, the distance between two given points will shift to 
larger values.
+We thus define a distance measure, or coordinate, that is independent of time 
and thus doesn't `move'.
+We call it the @emph{comoving distance} and display it with @mymath{\chi} such that @mymath{l(r,t)=\chi(r)a(t)}.
+We have therefore shifted the @mymath{r} dependence of the proper distance we derived above for a static universe to the comoving distance:
 
 @dispmath{\chi(r)=\sin^{-1}(r)\quad(K>0),\quad\quad
 \chi(r)=r\quad(K=0),\quad\quad \chi(r)=\sinh^{-1}(r)\quad(K<0).}
 
-Therefore, @mymath{\chi(r)} is the proper distance to an object at a
-specific reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies
-``reference'') when @mymath{a(t_r)=1}. At any arbitrary moment
-(@mymath{t\neq{t_r}}) before or after @mymath{t_r}, the proper distance to
-the object can be scaled with @mymath{a(t)}.
-
-Measuring the change of distance in a time-dependent (expanding) universe
-only makes sense if we can add up space and time@footnote{In other words,
-making our space-time consistent with Minkowski space-time geometry. In this
-geometry, different observers at a given point (event) in space-time split
-up space-time into `space' and `time' in different ways, just like people at
-the same spatial position can make different choices of splitting up a map
-into `left--right' and `up--down'. This model is well supported by
-twentieth and twenty-first century observations.}. But we can only add bits
-of space and time together if we measure them in the same units: with a
-conversion constant (similar to how 1000 is used to convert a kilometer
-into meters).  Experimentally, we find strong support for the hypothesis
-that this conversion constant is the speed of light (or gravitational
-waves@footnote{The speed of gravitational waves was recently found to be
-very similar to that of light in vacuum, see
-@url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a
-vacuum. This speed is postulated to be constant@footnote{In @emph{natural
-units}, speed is measured in units of the speed of light in vacuum.} and is
-almost always written as @mymath{c}. We can thus parameterize the change in
-distance on an expanding 2D surface as
+Therefore, @mymath{\chi(r)} is the proper distance to an object at a specific 
reference time: @mymath{t=t_r} (the @mymath{r} subscript signifies 
``reference'') when @mymath{a(t_r)=1}.
+At any arbitrary moment (@mymath{t\neq{t_r}}) before or after @mymath{t_r}, 
the proper distance to the object can be scaled with @mymath{a(t)}.
+
+Measuring the change of distance in a time-dependent (expanding) universe only 
makes sense if we can add up space and time@footnote{In other words, making our 
space-time consistent with Minkowski space-time geometry.
+In this geometry, different observers at a given point (event) in space-time 
split up space-time into `space' and `time' in different ways, just like people 
at the same spatial position can make different choices of splitting up a map 
into `left--right' and `up--down'.
+This model is well supported by twentieth and twenty-first century 
observations.}.
+But we can only add bits of space and time together if we measure them in the 
same units: with a conversion constant (similar to how 1000 is used to convert 
a kilometer into meters).
+Experimentally, we find strong support for the hypothesis that this conversion 
constant is the speed of light (or gravitational waves@footnote{The speed of 
gravitational waves was recently found to be very similar to that of light in 
vacuum, see @url{https://arxiv.org/abs/1710.05834, arXiv:1710.05834}.}) in a 
vacuum.
+This speed is postulated to be constant@footnote{In @emph{natural units}, 
speed is measured in units of the speed of light in vacuum.} and is almost 
always written as @mymath{c}.
+We can thus parameterize the change in distance on an expanding 2D surface as
 
 @dispmath{ds^2=c^2dt^2-a^2(t)ds_s^2 = c^2dt^2-a^2(t)(d\chi^2+r^2d\phi^2).}
 
@@ -18654,33 +16562,25 @@ distance on an expanding 2D surface as
 @node Extending distance concepts to 3D, Invoking astcosmiccal, Distance on a 
2D curved space, CosmicCalculator
 @subsection Extending distance concepts to 3D
 
-The concepts of @ref{Distance on a 2D curved space} are here extended to a
-3D space that @emph{might} be curved. We can start with the generic
-infinitesimal distance in a static 3D universe, but this time in spherical
-coordinates instead of polar coordinates.  @mymath{\theta} is shown in
-@ref{sphereandplane}, but here we are 3D beings, positioned on @mymath{O}
-(the center of the sphere) and the point @mymath{O} is tangent to a
-4D-sphere. In our 3D space, a generic infinitesimal displacement will
-correspond to the following distance in spherical coordinates:
+The concepts of @ref{Distance on a 2D curved space} are here extended to a 3D 
space that @emph{might} be curved.
+We can start with the generic infinitesimal distance in a static 3D universe, 
but this time in spherical coordinates instead of polar coordinates.
+@mymath{\theta} is shown in @ref{sphereandplane}, but here we are 3D beings, 
positioned on @mymath{O} (the center of the sphere) and the point @mymath{O} is 
tangent to a 4D-sphere.
+In our 3D space, a generic infinitesimal displacement will correspond to the 
following distance in spherical coordinates:
 
 @dispmath{ds_s^2=dx^2+dy^2+dz^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
 
-Like the 2D creature before, we now have to assume an abstract dimension
-which we cannot visualize easily. Let's call the fourth dimension
-@mymath{w}, then the general change in coordinates in the @emph{full} four
-dimensional space will be:
+Like the 2D creature before, we now have to assume an abstract dimension which 
we cannot visualize easily.
+Let's call the fourth dimension @mymath{w}, then the general change in 
coordinates in the @emph{full} four dimensional space will be:
 
 @dispmath{ds_s^2=dr^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)+dw^2.}
 
 @noindent
-But we can only work on a 3D curved space, so following exactly the same
-steps and conventions as our 2D friend, we arrive at:
+But we can only work on a 3D curved space, so following exactly the same steps 
and conventions as our 2D friend, we arrive at:
 
 @dispmath{ds_s^2={dr^2\over 1-Kr^2}+r^2(d\theta^2+\sin^2{\theta}d\phi^2).}
 
 @noindent
-In a non-static universe (with a scale factor a(t)), the distance can be
-written as:
+In a non-static universe (with a scale factor @mymath{a(t)}), the distance can be written as:
 
 @dispmath{ds^2=c^2dt^2-a^2(t)[d\chi^2+r^2(d\theta^2+\sin^2{\theta}d\phi^2)].}
 
@@ -18711,9 +16611,8 @@ written as:
 @node Invoking astcosmiccal,  , Extending distance concepts to 3D, 
CosmicCalculator
 @subsection Invoking CosmicCalculator
 
-CosmicCalculator will calculate cosmological variables based on the
-input parameters. The executable name is @file{astcosmiccal} with the
-following general template
+CosmicCalculator will calculate cosmological variables based on the input 
parameters.
+The executable name is @file{astcosmiccal} with the following general template
 
 @example
 $ astcosmiccal [OPTION...] ...
@@ -18743,21 +16642,16 @@ $ astcosmiccal -z0.4 -LAg
 $ astcosmiccal -l0.7 -m0.3 -z2.1
 @end example
 
-The input parameters (for example current matter density, etc) can be
-given as command-line options or in the configuration files, see
-@ref{Configuration files}. For a definition of the different parameters,
-please see the sections prior to this. If no redshift is given,
-CosmicCalculator will just print its input parameters and abort. For a full
-list of the input options, please see @ref{CosmicCalculator input options}.
+The input parameters (for example the current matter density) can be given as command-line options or in the configuration files, see @ref{Configuration files}.
+For a definition of the different parameters, please see the sections prior to 
this.
+If no redshift is given, CosmicCalculator will just print its input parameters 
and abort.
+For a full list of the input options, please see @ref{CosmicCalculator input 
options}.
 
-When only a redshift is given, CosmicCalculator will print all calculations
-(one per line) with some explanations before each. This can be good when
-you want a general feeling of the conditions at a specific
-redshift. Alternatively, if any specific calculations are requested, only
-the requested values will be calculated and printed with one character
-space between them. In this case, no description will be printed. See
-@ref{CosmicCalculator specific calculations} for the full list of these
-options along with some explanations how when/how they can be useful.
+When only a redshift is given, CosmicCalculator will print all calculations 
(one per line) with some explanations before each.
+This can be good when you want a general feeling of the conditions at a 
specific redshift.
+Alternatively, if any specific calculations are requested, only the requested 
values will be calculated and printed with one character space between them.
+In this case, no description will be printed.
+See @ref{CosmicCalculator specific calculations} for the full list of these options, along with some explanation of when/how they can be useful.
 
 @menu
 * CosmicCalculator input options::  Options to specify input conditions.
@@ -18780,33 +16674,28 @@ Current expansion rate (in km sec@mymath{^{-1}} 
Mpc@mymath{^{-1}}).
 
 @item -l FLT
 @itemx --olambda=FLT
-Cosmological constant density divided by the critical density in the
-current Universe (@mymath{\Omega_{\Lambda,0}}).
+Cosmological constant density divided by the critical density in the current 
Universe (@mymath{\Omega_{\Lambda,0}}).
 
 @item -m FLT
 @itemx --omatter=FLT
-Matter (including massive neutrinos) density divided by the critical
-density in the current Universe (@mymath{\Omega_{m,0}}).
+Matter (including massive neutrinos) density divided by the critical density 
in the current Universe (@mymath{\Omega_{m,0}}).
 
 @item -r FLT
 @itemx --oradiation=FLT
-Radiation density divided by the critical density in the current Universe
-(@mymath{\Omega_{r,0}}).
+Radiation density divided by the critical density in the current Universe 
(@mymath{\Omega_{r,0}}).
 
 @item -O STR/FLT,FLT
 @itemx --obsline=STR/FLT,FLT
 @cindex Rest-frame wavelength
 @cindex Wavelength, rest-frame
-Find the redshift to use in next steps based on the rest-frame and observed
-wavelengths of a line. Wavelengths are assumed to be in Angstroms. The
-first argument identifies the line. It can be one of the standard names
-below, or any rest-frame wavelength in Angstroms. The second argument is
-the observed wavelength of that line. For example
-@option{--obsline=lyalpha,6000} is the same as
-@option{--obsline=1215.64,6000}.
+Find the redshift to use in next steps based on the rest-frame and observed 
wavelengths of a line.
+Wavelengths are assumed to be in Angstroms.
+The first argument identifies the line.
+It can be one of the standard names below, or any rest-frame wavelength in 
Angstroms.
+The second argument is the observed wavelength of that line.
+For example @option{--obsline=lyalpha,6000} is the same as 
@option{--obsline=1215.64,6000}.
 
-The accepted names are listed below, sorted from red (longer wavelength) to
-blue (shorter wavelength).
+The accepted names are listed below, sorted from red (longer wavelength) to 
blue (shorter wavelength).
 
 @table @code
 @item siired
@@ -18924,30 +16813,20 @@ blue (shorter wavelength).
 
 @node CosmicCalculator specific calculations,  , CosmicCalculator input 
options, Invoking astcosmiccal
 @subsubsection CosmicCalculator specific calculations
-By default, when no specific calculations are requested, CosmicCalculator
-will print a complete set of all its calculators (one line for each
-calculation, see @ref{Invoking astcosmiccal}). The full list of
-calculations can be useful when you don't want any specific value, but just
-a general view. In other contexts (for example in a batch script or during
-a discussion), you know exactly what you want and don't want to be
-distracted by all the extra information.
-
-You can use any number of the options described below in any order. When
-any of these options are requested, CosmicCalculator's output will just be
-a single line with a single space between the (possibly) multiple
-values. In the example below, only the tangential distance along one
-arcsecond (in kpc), absolute magnitude conversion, and age of the universe
-at redshift 2 are printed (recall that you can merge short options
-together, see @ref{Options}).
+By default, when no specific calculations are requested, CosmicCalculator will print a complete set of all its calculations (one line for each, see @ref{Invoking astcosmiccal}).
+The full list of calculations can be useful when you don't want any specific 
value, but just a general view.
+In other contexts (for example in a batch script or during a discussion), you 
know exactly what you want and don't want to be distracted by all the extra 
information.
+
+You can use any number of the options described below in any order.
+When any of these options are requested, CosmicCalculator's output will just 
be a single line with a single space between the (possibly) multiple values.
+In the example below, only the tangential distance along one arcsecond (in 
kpc), absolute magnitude conversion, and age of the universe at redshift 2 are 
printed (recall that you can merge short options together, see @ref{Options}).
 
 @example
 $ astcosmiccal -z2 -sag
 8.585046 44.819248 3.289979
 @end example
 
-Here is one example of using this feature in scripts: by adding the
-following two lines in a script to keep/use the comoving volume with
-varying redshifts:
+Here is one example of using this feature in scripts: add the following two lines to a script to keep/use the comoving volume with a varying redshift:
 
 @example
 z=3.12
@@ -18956,75 +16835,56 @@ vol=$(astcosmiccal --redshift=$z --volume)
 
 @cindex GNU Grep
 @noindent
-In a script, this operation might be necessary for a large number of
-objects (several of galaxies in a catalog for example). So the fact that
-all the other default calculations are ignored will also help you get to
-your result faster.
-
-If you are indeed dealing with many (for example thousands) of redshifts,
-using CosmicCalculator is not the best/fastest solution. Because it has to
-go through all the configuration files and preparations for each
-invocation. To get the best efficiency (least overhead), we recommend using
-Gnuastro's cosmology library (see @ref{Cosmology
-library}). CosmicCalculator also calls the library functions defined there
-for its calculations, so you get the same result with no overhead. Gnuastro
-also has libraries for easily reading tables into a C program, see
-@ref{Table input output}. Afterwards, you can easily build and run your C
-program for the particular processing with @ref{BuildProgram}.
-
-If you just want to inspect the value of a variable visually, the
-description (which comes with units) might be more useful. In such cases,
-the following command might be better. The other calculations will also be
-done, but they are so fast that you will not notice on modern computers
-(the time it takes your eye to focus on the result is usually longer than
-the processing: a fraction of a second).
+In a script, this operation might be necessary for a large number of objects (for example, all the galaxies in a catalog).
+So the fact that all the other default calculations are ignored will also help 
you get to your result faster.
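+
+As a minimal sketch of such a script (assuming a hypothetical plain-text file @file{redshifts.txt} with one redshift per line), you could loop over the redshifts in the shell:
+
+@example
+$ while read z; do astcosmiccal --redshift=$z --volume; done < redshifts.txt
+@end example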
+
+If you are indeed dealing with many (for example, thousands of) redshifts, using CosmicCalculator is not the best/fastest solution.
+This is because it has to go through all the configuration files and preparations for each invocation.
+To get the best efficiency (least overhead), we recommend using Gnuastro's 
cosmology library (see @ref{Cosmology library}).
+CosmicCalculator also calls the library functions defined there for its 
calculations, so you get the same result with no overhead.
+Gnuastro also has libraries for easily reading tables into a C program (see @ref{Table input output}).
+Afterwards, you can easily build and run your C program for the particular 
processing with @ref{BuildProgram}.
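+
+As a minimal sketch of such a C program (assuming the @code{gal_cosmology_age} function and argument order of @ref{Cosmology library}; the numbers given to it below are hypothetical cosmological parameters: @mymath{H_0}, @mymath{\Omega_{\Lambda,0}}, @mymath{\Omega_{m,0}} and @mymath{\Omega_{r,0}}), you might write something like this:
+
+@example
+#include <stdio.h>
+#include <gnuastro/cosmology.h>
+
+int
+main(void)
+@{
+  double z;
+
+  /* Print the age of the universe at several redshifts. */
+  for(z=0.5; z<3.0; z+=0.5)
+    printf("%-4.1f %f\n", z,
+           gal_cosmology_age(z, 70.0, 0.7, 0.3, 0.0));
+
+  return 0;
+@}
+@end example
+
+@noindent
+Saving it as a hypothetical @file{myprogram.c}, you could then compile and run it with BuildProgram, for example:
+
+@example
+$ astbuildprog myprogram.c
+@end example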
+
+If you just want to inspect the value of a variable visually, the description 
(which comes with units) might be more useful.
+In such cases, the following command might be better.
+The other calculations will also be done, but they are so fast that you will 
not notice on modern computers (the time it takes your eye to focus on the 
result is usually longer than the processing: a fraction of a second).
 
 @example
 $ astcosmiccal --redshift=0.832 | grep volume
 @end example
 
-The full list of CosmicCalculator's specific calculations is present
-below. In case you have forgot the units, you can use the @option{--help}
-option which has the units along with a short description.
+The full list of CosmicCalculator's specific calculations is given below.
+In case you have forgotten the units, you can use the @option{--help} option, which lists the units along with a short description.
 
 @table @option
 
 @item -e
 @itemx --usedredshift
-The redshift that was used in this run. In many cases this is the main
-input parameter to CosmicCalculator, but it is useful in others. For
-example in combination with @option{--obsline} (where you give an observed
-and rest-frame wavelength and would like to know the redshift), or if you
-want to run CosmicCalculator in a loop while changing the redshift and you
-want to keep the redshift value.
+The redshift that was used in this run.
+In many cases the redshift is the main input parameter to CosmicCalculator, but this option is useful in other scenarios.
+For example, in combination with @option{--obsline} (where you give an observed and rest-frame wavelength and would like to know the redshift), or if you run CosmicCalculator in a loop while changing the redshift and want to record the value used.
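+
+For instance, with a hypothetical command like the one below, only the redshift corresponding to the given observed line would be printed:
+
+@example
+$ astcosmiccal --obsline=lyalpha,6000 --usedredshift
+@end example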
 
 @item -G
 @itemx --agenow
-The current age of the universe (given the input parameters) in Ga (Giga
-annum, or billion years).
+The current age of the universe (given the input parameters) in Ga (Giga 
annum, or billion years).
 
 @item -C
 @itemx --criticaldensitynow
-The current critical density (given the input parameters) in grams per
-centimeter-cube (@mymath{g/cm^3}).
+The current critical density (given the input parameters) in grams per cubic centimeter (@mymath{g/cm^3}).
 
 @item -d
 @itemx --properdistance
-The proper distance (at current time) to object at the given redshift in
-Megaparsecs (Mpc). See @ref{Distance on a 2D curved space} for a
-description of the proper distance.
+The proper distance (at the current time) to the object at the given redshift, in Megaparsecs (Mpc).
+See @ref{Distance on a 2D curved space} for a description of the proper 
distance.
 
 @item -A
 @itemx --angulardimdist
-The angular diameter distance to object at given redshift in Megaparsecs
-(Mpc).
+The angular diameter distance to the object at the given redshift, in Megaparsecs (Mpc).
 
 @item -s
 @itemx --arcsectandist
-The tangential distance covered by 1 arcseconds at the given redshift in
-kiloparsecs (Kpc). This can be useful when trying to estimate the
-resolution or pixel scale of an instrument (usually in units of arcseconds)
-at a given redshift.
+The tangential distance covered by 1 arcsecond at the given redshift, in kiloparsecs (kpc).
+This can be useful when trying to estimate the resolution or pixel scale of an 
instrument (usually in units of arcseconds) at a given redshift.
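+
+For instance, a hypothetical call to print only this value at a given redshift could be:
+
+@example
+$ astcosmiccal -z0.5 --arcsectandist
+@end example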
 
 @item -L
 @itemx --luminositydist
@@ -19036,11 +16896,9 @@ The distance modulus at given redshift.
 
 @item -a
 @itemx --absmagconv
-The conversion factor (addition) to absolute magnitude. Note that this is
-practically the distance modulus added with @mymath{-2.5\log{(1+z)}} for
-the desired redshift based on the input parameters. Once the apparent
-magnitude and redshift of an object is known, this value may be added with
-the apparent magnitude to give the object's absolute magnitude.
+The conversion factor (addition) to absolute magnitude.
+Note that this is practically the distance modulus plus @mymath{-2.5\log{(1+z)}} for the desired redshift, based on the input parameters.
+Once the apparent magnitude and redshift of an object are known, this value may be added to the apparent magnitude to give the object's absolute magnitude.
 
 @item -g
 @itemx --age
@@ -19048,29 +16906,23 @@ Age of the universe at given redshift in Ga (Giga 
annum, or billion years).
 
 @item -b
 @itemx --lookbacktime
-The look-back time to given redshift in Ga (Giga annum, or billion
-years). The look-back time at a given redshift is defined as the current
-age of the universe (@option{--agenow}) subtracted by the age of the
-universe at the given redshift.
+The look-back time to given redshift in Ga (Giga annum, or billion years).
+The look-back time at a given redshift is defined as the current age of the universe (@option{--agenow}) minus the age of the universe at the given redshift.
 
 @item -c
 @itemx --criticaldensity
-The critical density at given redshift in grams per centimeter-cube
-(@mymath{g/cm^3}).
+The critical density at the given redshift in grams per cubic centimeter (@mymath{g/cm^3}).
 
 @item -v
 @itemx --onlyvolume
-The comoving volume in Megaparsecs cube (Mpc@mymath{^3}) until the desired
-redshift based on the input parameters.
+The comoving volume in cubic Megaparsecs (Mpc@mymath{^3}) out to the desired redshift, based on the input parameters.
 
 @item -i STR/FLT
 @itemx --lineatz=STR/FLT
-The wavelength of the specified line at the redshift given to
-CosmicCalculator. The line can be specified either by its name (see
-description of @option{--obsline} in @ref{CosmicCalculator input options}),
-or directly as a number. In the former case (when a name is given), the
-returned number is in units of Angstroms. In the latter (when a number is
-given), the returned value is the same units of the input number.
+The wavelength of the specified line at the redshift given to CosmicCalculator.
+The line can be specified either by its name (see description of 
@option{--obsline} in @ref{CosmicCalculator input options}), or directly as a 
number.
+In the former case (when a name is given), the returned number is in units of 
Angstroms.
+In the latter (when a number is given), the returned value is in the same units as the input number.
 
 @end table
 
@@ -19098,34 +16950,22 @@ given), the returned value is the same units of the 
input number.
 @node Library, Developing, High-level calculations, Top
 @chapter Library
 
-Each program in Gnuastro that was discussed in the prior chapters (or any
-program in general) is a collection of functions that is compiled into one
-executable file which can communicate directly with the outside world. The
-outside world in this context is the operating system. By communication, we
-mean that control is directly passed to a program from the operating system
-with a (possible) set of inputs and after it is finished, the program will
-pass control back to the operating system. For programs written in C and
-C++, the unique @code{main} function is in charge of this communication.
-
-Similar to a program, a library is also a collection of functions that is
-compiled into one executable file. However, unlike programs, libraries
-don't have a @code{main} function. Therefore they can't communicate
-directly with the outside world. This gives you the chance to write your
-own @code{main} function and call library functions from within it. After
-compiling your program into a binary executable, you just have to
-@emph{link} it to the library and you are ready to run (execute) your
-program. In this way, you can use Gnuastro at a much lower-level, and in
-combination with other libraries on your system, you can significantly
-boost your creativity.
-
-This chapter starts with a basic introduction to libraries and how you can
-use them in @ref{Review of library fundamentals}. The separate functions in
-the Gnuastro library are then introduced (classified by context) in
-@ref{Gnuastro library}. If you end up routinely using a fixed set of library
-functions, with a well-defined input and output, it will be much more
-beneficial if you define a program for the job. Therefore, in its
-@ref{Version controlled source}, Gnuastro comes with the @ref{The TEMPLATE
-program} to easily define your own programs(s).
+Each program in Gnuastro that was discussed in the prior chapters (or any 
program in general) is a collection of functions that is compiled into one 
executable file which can communicate directly with the outside world.
+The outside world in this context is the operating system.
+By communication, we mean that control is directly passed to a program from 
the operating system with a (possible) set of inputs and after it is finished, 
the program will pass control back to the operating system.
+For programs written in C and C++, the unique @code{main} function is in 
charge of this communication.
+
+Similar to a program, a library is also a collection of functions that is 
compiled into one executable file.
+However, unlike programs, libraries don't have a @code{main} function.
+Therefore they can't communicate directly with the outside world.
+This gives you the chance to write your own @code{main} function and call 
library functions from within it.
+After compiling your program into a binary executable, you just have to 
@emph{link} it to the library and you are ready to run (execute) your program.
+In this way, you can use Gnuastro at a much lower-level, and in combination 
with other libraries on your system, you can significantly boost your 
creativity.
+
+This chapter starts with a basic introduction to libraries and how you can use 
them in @ref{Review of library fundamentals}.
+The separate functions in the Gnuastro library are then introduced (classified 
by context) in @ref{Gnuastro library}.
+If you end up routinely using a fixed set of library functions, with a 
well-defined input and output, it will be much more beneficial if you define a 
program for the job.
+Therefore, in its @ref{Version controlled source}, Gnuastro comes with @ref{The TEMPLATE program} to easily define your own program(s).
 
 
 @menu
@@ -19138,33 +16978,20 @@ program} to easily define your own programs(s).
 @node Review of library fundamentals, BuildProgram, Library, Library
 @section Review of library fundamentals
 
-Gnuastro's libraries are written in the C programming language. In @ref{Why
-C}, we have thoroughly discussed the reasons behind this choice. C was
-actually created to write Unix, thus understanding the way C works can
-greatly help in effectively using programs and libraries in all Unix-like
-operating systems. Therefore, in the following subsections some important
-aspects of C, as it relates to libraries (and thus programs that depend on
-them) on Unix are reviewed. First we will discuss header files in
-@ref{Headers} and then go onto @ref{Linking}. This section finishes with
-@ref{Summary and example on libraries}. If you are already familiar with
-these concepts, please skip this section and go directly to @ref{Gnuastro
-library}.
+Gnuastro's libraries are written in the C programming language.
+In @ref{Why C}, we have thoroughly discussed the reasons behind this choice.
+C was actually created to write Unix; thus, understanding the way C works can greatly help in effectively using programs and libraries in all Unix-like operating systems.
+Therefore, the following subsections review some important aspects of C as they relate to libraries (and the programs that depend on them) on Unix.
+First we will discuss header files in @ref{Headers} and then go on to @ref{Linking}.
+This section finishes with @ref{Summary and example on libraries}.
+If you are already familiar with these concepts, please skip this section and 
go directly to @ref{Gnuastro library}.
 
 @cindex Modularity
-In theory, a full operating system (or any software) can be written as one
-function. Such a software would not need any headers or linking (that are
-discussed in the subsections below). However, writing that single function
-and maintaining it (adding new features, fixing bugs, documentation, etc)
-would be a programmer or scientist's worst nightmare! Furthermore, all
-the hard work that went into creating it cannot be reused in other
-software: every other programmer or scientist would have to re-invent the
-wheel. The ultimate purpose behind libraries (which come with headers and
-have to be linked) is to address this problem and increase modularity:
-``the degree to which a system's components may be separated and
-recombined'' (from Wikipedia). The more modular the source code of a
-program or library, the easier maintaining it will be, and all the hard
-work that went into creating it can be reused for a wider range of
-problems.
+In theory, a full operating system (or any software) can be written as one 
function.
+Such software would not need any headers or linking (which are discussed in the subsections below).
+However, writing that single function and maintaining it (adding new features, fixing bugs, documentation, etc.) would be a programmer or scientist's worst nightmare!
+Furthermore, all the hard work that went into creating it cannot be reused in other software: every other programmer or scientist would have to re-invent the wheel.
+The ultimate purpose behind libraries (which come with headers and have to be 
linked) is to address this problem and increase modularity: ``the degree to 
which a system's components may be separated and recombined'' (from Wikipedia).
+The more modular the source code of a program or library, the easier 
maintaining it will be, and all the hard work that went into creating it can be 
reused for a wider range of problems.
 
 @menu
 * Headers::                     Header files included in source.
@@ -19176,21 +17003,13 @@ problems.
 @subsection Headers
 
 @cindex Pre-Processor
-C source code is read from top to bottom in the source file, therefore
-program components (for example variables, data structures and functions)
-should all be @emph{defined} or @emph{declared} closer to the top of the
-source file: before they are used. @emph{Defining} something in C or C++ is
-jargon for providing its full details. @emph{Declaring} it, on the
-other-hand, is jargon for only providing the minimum information needed for
-the compiler to pass it temporarily and fill in the detailed definition
-later.
-
-For a function, the @emph{declaration} only contains the inputs and their
-data-types along with the output's type@footnote{Recall that in C,
-functions only have one output.}. The @emph{definition} adds to the
-declaration by including the exact details of what operations are done to
-the inputs to generate the output. As an example, take this simple
-summation function:
+C source code is read from top to bottom in the source file, therefore program 
components (for example variables, data structures and functions) should all be 
@emph{defined} or @emph{declared} closer to the top of the source file: before 
they are used.
+@emph{Defining} something in C or C++ is jargon for providing its full details.
+@emph{Declaring} it, on the other hand, is jargon for only providing the minimum information needed for the compiler to pass it temporarily and fill in the detailed definition later.
+
+For a function, the @emph{declaration} only contains the inputs and their 
data-types along with the output's type@footnote{Recall that in C, functions 
only have one output.}.
+The @emph{definition} adds to the declaration by including the exact details 
of what operations are done to the inputs to generate the output.
+As an example, take this simple summation function:
 
 @example
 double
@@ -19200,13 +17019,10 @@ sum(double a, double b)
 @}
 @end example
 @noindent
-What you see above is the @emph{definition} of this function: it shows you
-(and the compiler) exactly what it does to the two @code{double} type
-inputs and that the output also has a @code{double} type. Note that a
-function's internal operations are rarely so simple and short, it can be
-arbitrarily long and complicated. This unreasonably short and simple
-function was chosen here for ease of reading. The declaration for this
-function is:
+What you see above is the @emph{definition} of this function: it shows you 
(and the compiler) exactly what it does to the two @code{double} type inputs 
and that the output also has a @code{double} type.
+Note that a function's internal operations are rarely so simple and short; they can be arbitrarily long and complicated.
+This unreasonably short and simple function was chosen here for ease of 
reading.
+The declaration for this function is:
 
 @example
 double
@@ -19214,160 +17030,90 @@ sum(double a, double b);
 @end example
 
 @noindent
-You can think of a function's declaration as a building's address in the
-city, and the definition as the building's complete blueprints. When the
-compiler confronts a call to a function during its processing, it doesn't
-need to know anything about how the inputs are processed to generate the
-output. Just as the postman doesn't need to know the inner structure of a
-building when delivering the mail. The declaration (address) is
-enough. Therefore by @emph{declaring} the functions once at the start of
-the source files, we don't have to worry about @emph{defining} them after
-they are used.
-
-Even for a simple real-world operation (not a simple summation like
-above!), you will soon need many functions (for example, some for
-reading/preparing the inputs, some for the processing, and some for
-preparing the output). Although it is technically possible, managing all
-the necessary functions in one file is not easy and is contrary to the
-modularity principle (see @ref{Review of library fundamentals}), for
-example the functions for preparing the input can be usable in your other
-projects with a different processing. Therefore, as we will see later (in
-@ref{Linking}), the functions don't necessarily need to be defined in the
-source file where they are used. As long as their definitions are
-ultimately linked to the final executable, everything will be fine. For
-now, it is just important to remember that the functions that are called
-within one source file must be declared within the source file
-(declarations are mandatory), but not necessarily defined there.
-
-In the spirit of modularity, it is common to define contextually similar
-functions in one source file. For example, in Gnuastro, functions that
-calculate the median, mean and other statistical functions are defined in
-@file{lib/statistics.c}, while functions that deal directly with FITS files
-are defined in @file{lib/fits.c}.
-
-Keeping the definition of similar functions in a separate file greatly
-helps their management and modularity, but this fact alone doesn't make
-things much easier for the caller's source code: recall that while
-definitions are optional, declarations are mandatory. So if this was all,
-the caller would have to manually copy and paste (@emph{include}) all the
-declarations from the various source files into the file they are working
-on now. To address this problem, programmers have adopted the header file
-convention: the header file of a source code contains all the declarations
-that a caller would need to be able to use any of its functions. For
-example, in Gnuastro, @file{lib/statistics.c} (file containing function
-definitions) comes with @file{lib/gnuastro/statistics.h} (only containing
-function declarations).
-
-The discussion above was mainly focused on functions, however, there are
-many more programming constructs such as pre-processor macros and data
-structures. Like functions, they also need to be known to the compiler when
-it confronts a call to them. So the header file also contains their
-definitions or declarations when they are necessary for the functions.
+You can think of a function's declaration as a building's address in the city, 
and the definition as the building's complete blueprints.
+When the compiler confronts a call to a function during its processing, it doesn't need to know anything about how the inputs are processed to generate the output, just as the postman doesn't need to know the inner structure of a building when delivering the mail.
+The declaration (address) is enough.
+Therefore by @emph{declaring} the functions once at the start of the source 
files, we don't have to worry about @emph{defining} them after they are used.
+
+Even for a simple real-world operation (not a simple summation like above!), 
you will soon need many functions (for example, some for reading/preparing the 
inputs, some for the processing, and some for preparing the output).
+Although it is technically possible, managing all the necessary functions in one file is not easy and is contrary to the modularity principle (see @ref{Review of library fundamentals}): for example, the functions for preparing the input could be reused in your other projects with different processing.
+Therefore, as we will see later (in @ref{Linking}), the functions don't 
necessarily need to be defined in the source file where they are used.
+As long as their definitions are ultimately linked to the final executable, 
everything will be fine.
+For now, it is just important to remember that the functions that are called 
within one source file must be declared within the source file (declarations 
are mandatory), but not necessarily defined there.
+
+In the spirit of modularity, it is common to define contextually similar 
functions in one source file.
+For example, in Gnuastro, functions that calculate the median, mean and other 
statistical functions are defined in @file{lib/statistics.c}, while functions 
that deal directly with FITS files are defined in @file{lib/fits.c}.
+
+Keeping the definition of similar functions in a separate file greatly helps 
their management and modularity, but this fact alone doesn't make things much 
easier for the caller's source code: recall that while definitions are 
optional, declarations are mandatory.
+So if this were all, the caller would have to manually copy and paste (@emph{include}) all the declarations from the various source files into the file they are currently working on.
+To address this problem, programmers have adopted the header file convention: the header file of a source file contains all the declarations that a caller would need to be able to use any of its functions.
+For example, in Gnuastro, @file{lib/statistics.c} (file containing function 
definitions) comes with @file{lib/gnuastro/statistics.h} (only containing 
function declarations).
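+
+As a hypothetical sketch of this convention, using the summation function above, the header would only contain the declaration and any caller would include it:
+
+@example
+/* sum.h (hypothetical header): only the declaration. */
+double
+sum(double a, double b);
+
+/* caller.c: include the header, then use the function. */
+#include <stdio.h>
+#include "sum.h"
+
+int
+main(void)
+@{
+  printf("%f\n", sum(2.0, 3.0));
+  return 0;
+@}
+@end example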
+
+The discussion above was mainly focused on functions; however, there are many more programming constructs, such as pre-processor macros and data structures.
+Like functions, they also need to be known to the compiler when it confronts a 
call to them.
+So the header file also contains their definitions or declarations when they 
are necessary for the functions.
 
 @cindex Macro
 @cindex Structures
 @cindex Data structures
 @cindex Pre-processor macros
-Pre-processor macros (or macros for short) are replaced with their defined
-value by the pre-processor before compilation. Conventionally they are
-written only in capital letters to be easily recognized. It is just
-important to understand that the compiler doesn't see the macros, it sees
-their fixed values. So when a header specifies macros you can do your
-programming without worrying about the actual values.  The standard C types
-(for example @code{int}, or @code{float}) are very low-level and basic. We
-can collect multiple C types into a @emph{structure} for a higher-level way
-to keep and pass-along data. See @ref{Generic data container} for some
-examples of macros and data structures.
-
-The contents in the header need to be @emph{include}d into the caller's
-source code with a special pre-processor command: @code{#include
-<path/to/header.h>}. As the name suggests, the @emph{pre-processor} goes
-through the source code prior to the processor (or compiler). One of its
-jobs is to include, or merge, the contents of files that are mentioned with
-this directive in the source code. Therefore the compiler sees a single
-entity containing the contents of the main file and all the included
-files. This allows you to include many (sometimes thousands of)
-declarations into your code with only one line. Since the headers are also
-installed with the library into your system, you don't even need to keep a
-copy of them for each separate program, making things even more convenient.
-
-Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory
-with a text editor to check out the include directives at the start of the
-file (after the copyright notice). Let's take @file{lib/fits.c} as an
-example. You will notice that Gnuastro's header files (like
-@file{gnuastro/fits.h}) are indeed within this directory (the @file{fits.h}
-file is in the @file{gnuastro/} directory). You will notice that files like
-@file{stdio.h}, or @file{string.h} are not in this directory (or anywhere
-within Gnuastro).
-
-On most systems the basic C header files (like @file{stdio.h} and
-@file{string.h} mentioned above) are located in
-@file{/usr/include/}@footnote{The @file{include/} directory name is taken
-from the pre-processor's @code{#include} directive, which is also the
-motivation behind the `I' in the @option{-I} option to the
-pre-processor.}. Your compiler is configured to automatically search that
-directory (and possibly others), so you don't have to explicitly mention
-these directories. Go ahead, look into the @file{/usr/include} directory
-and find @file{stdio.h} for example. When the necessary header files are
-not in those specific libraries, the pre-processor can also search in
-places other than the current directory. You can specify those directories
-with this pre-processor option@footnote{Try running Gnuastro's
-@command{make} and find the directories given to the compiler with the
-@option{-I} option.}:
+Pre-processor macros (or macros for short) are replaced with their defined 
value by the pre-processor before compilation.
+Conventionally they are written only in capital letters to be easily 
recognized.
+It is just important to understand that the compiler doesn't see the macros; it sees their fixed values.
+So when a header specifies macros, you can do your programming without worrying about the actual values.
+The standard C types (for example @code{int} or @code{float}) are very low-level and basic.
+We can collect multiple C types into a @emph{structure} for a higher-level way 
to keep and pass-along data.
+See @ref{Generic data container} for some examples of macros and data 
structures.
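+
+As a small hypothetical sketch, a header could define a macro and a structure like these:
+
+@example
+/* Hypothetical macro: the compiler only sees `1000'. */
+#define MAX_LENGTH 1000
+
+/* Hypothetical structure: collecting related values. */
+struct point
+@{
+  double x;
+  double y;
+@};
+@end example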
+
+The contents of the header need to be @emph{include}d into the caller's source code with a special pre-processor command: @code{#include <path/to/header.h>}.
+As the name suggests, the @emph{pre-processor} goes through the source code 
prior to the processor (or compiler).
+One of its jobs is to include, or merge, the contents of files that are 
mentioned with this directive in the source code.
+Therefore the compiler sees a single entity containing the contents of the 
main file and all the included files.
+This allows you to include many (sometimes thousands of) declarations into 
your code with only one line.
+Since the headers are also installed with the library into your system, you 
don't even need to keep a copy of them for each separate program, making things 
even more convenient.
+
+Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory 
with a text editor to check out the include directives at the start of the file 
(after the copyright notice).
+Let's take @file{lib/fits.c} as an example.
+You will notice that Gnuastro's header files (like @file{gnuastro/fits.h}) are 
indeed within this directory (the @file{fits.h} file is in the @file{gnuastro/} 
directory).
+However, files like @file{stdio.h} or @file{string.h} are not in this directory (or anywhere within Gnuastro).
+
+On most systems the basic C header files (like @file{stdio.h} and 
@file{string.h} mentioned above) are located in 
@file{/usr/include/}@footnote{The @file{include/} directory name is taken from 
the pre-processor's @code{#include} directive, which is also the motivation 
behind the `I' in the @option{-I} option to the pre-processor.}.
+Your compiler is configured to automatically search that directory (and 
possibly others), so you don't have to explicitly mention these directories.
+Go ahead, look into the @file{/usr/include} directory and find @file{stdio.h} 
for example.
+When the necessary header files are not in those specific directories, the pre-processor can also search in places other than the current directory.
+You can specify those directories with this pre-processor option@footnote{Try 
running Gnuastro's @command{make} and find the directories given to the 
compiler with the @option{-I} option.}:
 
 @table @option
 @item -I DIR
-``Add the directory @file{DIR} to the list of directories to be searched
-for header files.  Directories named by '-I' are searched before the
-standard system include directories.  If the directory @file{DIR} is a
-standard system include directory, the option is ignored to ensure that the
-default search order for system directories and the special treatment of
-system headers are not defeated...'' (quoted from the GNU Compiler
-Collection manual). Note that the space between @key{I} and the directory
-is optional and commonly not used.
+``Add the directory @file{DIR} to the list of directories to be searched for 
header files.
+Directories named by '-I' are searched before the standard system include 
directories.
+If the directory @file{DIR} is a standard system include directory, the option 
is ignored to ensure that the default search order for system directories and 
the special treatment of system headers are not defeated...'' (quoted from the 
GNU Compiler Collection manual).
+Note that the space between @key{I} and the directory is optional and commonly 
not used.
 @end table
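+
+For example, if a library's headers were installed under the hypothetical @file{/home/yourname/.local/include} directory, you could tell the pre-processor about it like this:
+
+@example
+$ gcc -I/home/yourname/.local/include -c myprogram.c
+@end example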
 
-If the pre-processor can't find the included files, it will abort with an
-error. In fact a common error when building programs that depend on a
-library is that the compiler doesn't not know where a library's header is
-(see @ref{Known issues}). So you have to manually tell the compiler where
-to look for the library's headers with the @option{-I} option. For a small
-software with one or two source files, this can be done manually (see
-@ref{Summary and example on libraries}). However, to enhance modularity,
-Gnuastro (and most other bin/libraries) contain many source files, so
-the compiler is invoked many times@footnote{Nearly every command you see
-being executed after running @command{make} is one call to the
-compiler.}. This makes manual addition or modification of this option
-practically impossible.
+If the pre-processor can't find the included files, it will abort with an 
error.
+In fact, a common error when building programs that depend on a library is that the compiler does not know where a library's header is (see @ref{Known issues}).
+So you have to manually tell the compiler where to look for the library's 
headers with the @option{-I} option.
+For small software with one or two source files, this can be done manually (see @ref{Summary and example on libraries}).
+However, to enhance modularity, Gnuastro (and most other bin/libraries) 
contain many source files, so the compiler is invoked many 
times@footnote{Nearly every command you see being executed after running 
@command{make} is one call to the compiler.}.
+This makes manual addition or modification of this option practically 
impossible.
 
 @cindex GNU build system
 @cindex @command{CPPFLAGS}
-To solve this problem, in the GNU build system, there are conventional
-environment variables for the various kinds of compiler options (or flags).
-These environment variables are used in every call to the compiler (they
-can be empty). The environment variable used for the C Pre-Processor (or
-CPP) is @command{CPPFLAGS}. By giving @command{CPPFLAGS} a value once, you
-can be sure that each call to the compiler will be affected. See @ref{Known
-issues} for an example of how to set this variable at configure time.
+To solve this problem, in the GNU build system, there are conventional 
environment variables for the various kinds of compiler options (or flags).
+These environment variables are used in every call to the compiler (they can 
be empty).
+The environment variable used for the C Pre-Processor (or CPP) is 
@command{CPPFLAGS}.
+By giving @command{CPPFLAGS} a value once, you can be sure that each call to 
the compiler will be affected.
+See @ref{Known issues} for an example of how to set this variable at configure 
time.
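+
+For instance, using the same hypothetical directory as above, the configure-time call might look like this:
+
+@example
+$ ./configure CPPFLAGS="-I/home/yourname/.local/include"
+@end example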
 
 @cindex GNU build system
-As described in @ref{Installation directory}, you can select the top
-installation directory of a software using the GNU build system, when you
-@command{./configure} it. All the separate components will be put in their
-separate sub-directory under that, for example the programs, compiled
-libraries and library headers will go into @file{$prefix/bin} (replace
-@file{$prefix} with a directory), @file{$prefix/lib}, and
-@file{$prefix/include} respectively. For enhanced modularity, libraries
-that contain diverse collections of functions (like GSL, WCSLIB, and
-Gnuastro), put their header files in a sub-directory unique to
-themselves. For example all Gnuastro's header files are installed in
-@file{$prefix/include/gnuastro}. In your source code, you need to keep the
-library's sub-directory when including the headers from such libraries, for
-example @code{#include <gnuastro/fits.h>}@footnote{the top
-@file{$prefix/include} directory is usually known to the compiler}. Not all
-libraries need to follow this convention, for example CFITSIO only has one
-header (@file{fitsio.h}) which is directly installed in
-@file{$prefix/include}.
+As described in @ref{Installation directory}, you can select the top installation directory of software using the GNU build system when you @command{./configure} it.
+All the separate components will be put in their own sub-directories under that: for example, the programs, compiled libraries, and library headers will go into @file{$prefix/bin}, @file{$prefix/lib}, and @file{$prefix/include} respectively (replace @file{$prefix} with the chosen top directory).
+For enhanced modularity, libraries that contain diverse collections of functions (like GSL, WCSLIB, and Gnuastro) put their header files in a sub-directory unique to themselves.
+For example all Gnuastro's header files are installed in 
@file{$prefix/include/gnuastro}.
+In your source code, you need to keep the library's sub-directory when 
including the headers from such libraries, for example @code{#include 
<gnuastro/fits.h>}@footnote{the top @file{$prefix/include} directory is usually 
known to the compiler}.
+Not all libraries need to follow this convention, for example CFITSIO only has 
one header (@file{fitsio.h}) which is directly installed in 
@file{$prefix/include}.
 
 
 
@@ -19376,76 +17122,50 @@ header (@file{fitsio.h}) which is directly installed 
in
 @subsection Linking
 
 @cindex GNU Libtool
-To enhance modularity, similar functions are defined in one source file
-(with a @file{.c} suffix, see @ref{Headers} for more). After running
-@command{make}, each human-readable, @file{.c} file is translated (or
-compiled) into a computer-readable ``object'' file (ending with
-@file{.o}). Note that object files are also created when building programs,
-they aren't particular to libraries. Try opening Gnuastro's @file{lib/} and
-@file{bin/progname/} directories after running @command{make} to see these
-object files@footnote{Gnuastro uses GNU Libtool for portable library
-creation. Libtool will also make a @file{.lo} file for each @file{.c} file
-when building libraries (@file{.lo} files are
-human-readable).}. Afterwards, the object files are @emph{linked} together
-to create an executable program or a library.
+To enhance modularity, similar functions are defined in one source file (with 
a @file{.c} suffix, see @ref{Headers} for more).
+After running @command{make}, each human-readable, @file{.c} file is 
translated (or compiled) into a computer-readable ``object'' file (ending with 
@file{.o}).
+Note that object files are also created when building programs; they aren't particular to libraries.
+Try opening Gnuastro's @file{lib/} and @file{bin/progname/} directories after 
running @command{make} to see these object files@footnote{Gnuastro uses GNU 
Libtool for portable library creation.
+Libtool will also make a @file{.lo} file for each @file{.c} file when building 
libraries (@file{.lo} files are human-readable).}.
+Afterwards, the object files are @emph{linked} together to create an 
executable program or a library.
 
 @cindex GNU Binutils
-The object files contain the full definition of the functions in the
-respective @file{.c} file along with a list of any other function (or
-generally ``symbol'') that is referenced there. To get a list of those
-functions you can use the @command{nm} program which is part of GNU
-Binutils. For example from the top Gnuastro directory, run:
+The object files contain the full definition of the functions in the 
respective @file{.c} file along with a list of any other function (or generally 
``symbol'') that is referenced there.
+To get a list of those functions you can use the @command{nm} program which is 
part of GNU Binutils.
+For example from the top Gnuastro directory, run:
 
 @example
 $ nm bin/arithmetic/arithmetic.o
 @end example
 
 @noindent
-This will print a list of all the functions (more generally, `symbols')
-that were called within @file{bin/arithmetic/arithmetic.c} along with some
-further information (for example a @code{T} in the second column shows that
-this function is actually defined here, @code{U} says that it is undefined
-here). Try opening the @file{.c} file to check some of these functions for
-your self. Run @command{info nm} for more information.
+This will print a list of all the functions (more generally, `symbols') that 
were called within @file{bin/arithmetic/arithmetic.c} along with some further 
information (for example a @code{T} in the second column shows that this 
function is actually defined here, @code{U} says that it is undefined here).
+Try opening the @file{.c} file to check some of these functions for yourself.
+Run @command{info nm} for more information.
 
 @cindex Linking
-To recap, the @emph{compiler} created the separate object files mentioned
-above for each @file{.c} file. The @emph{linker} will then combine all the
-symbols of the various object files (and libraries) into one program or
-library. In the case of Arithmetic (a program) the contents of the object
-files in @file{bin/arithmetic/} are copied (and re-ordered) into one final
-executable file which we can run from the operating system.
+To recap, the @emph{compiler} created the separate object files mentioned 
above for each @file{.c} file.
+The @emph{linker} will then combine all the symbols of the various object 
files (and libraries) into one program or library.
+In the case of Arithmetic (a program) the contents of the object files in 
@file{bin/arithmetic/} are copied (and re-ordered) into one final executable 
file which we can run from the operating system.
 
 @cindex Static linking
 @cindex Linking: Static
 @cindex Dynamic linking
 @cindex Linking: Dynamic
-There are two ways to @emph{link} all the necessary symbols: static and
-dynamic/shared. When the symbols (computer-readable function definitions in
-most cases) are copied into the output, it is called @emph{static}
-linking. When the symbols are kept in their original file and only a
-reference to them is kept in the executable, it is called @emph{dynamic},
-or @emph{shared} linking.
-
-Let's have a closer look at the executable to understand this better: we'll
-assume you have built Gnuastro without any customization and installed
-Gnuastro into the default @file{/usr/local/} directory (see
-@ref{Installation directory}). If you tried the @command{nm} command on one
-of Arithmetic's object files above, then with the command below you can
-confirm that all the functions that were defined in the object file above
-(had a @code{T} in the second column) are also defined in the
-@file{astarithmetic} executable:
+There are two ways to @emph{link} all the necessary symbols: static and 
dynamic/shared.
+When the symbols (computer-readable function definitions in most cases) are 
copied into the output, it is called @emph{static} linking.
+When the symbols are kept in their original file and only a reference to them 
is kept in the executable, it is called @emph{dynamic}, or @emph{shared} 
linking.
+
+Let's have a closer look at the executable to understand this better: we'll assume you have built Gnuastro without any customization and installed it into the default @file{/usr/local/} directory (see @ref{Installation directory}).
+If you tried the @command{nm} command on one of Arithmetic's object files 
above, then with the command below you can confirm that all the functions that 
were defined in the object file above (had a @code{T} in the second column) are 
also defined in the @file{astarithmetic} executable:
 
 @example
 $ nm /usr/local/bin/astarithmetic
 @end example
 
 @noindent
-These symbols/function have been statically linked (copied) in the final
-executable. But you will notice that there are still many undefined symbols
-in the executable (those with a @code{U} in the second column). One class
-of such functions are Gnuastro's own library functions that start with
-`@code{gal_}':
+These symbols/functions have been statically linked (copied) into the final executable.
+But you will notice that there are still many undefined symbols in the 
executable (those with a @code{U} in the second column).
+One class of such functions are Gnuastro's own library functions that start 
with `@code{gal_}':
 
 @example
 $ nm /usr/local/bin/astarithmetic | grep gal_
@@ -19457,68 +17177,47 @@ $ nm /usr/local/bin/astarithmetic | grep gal_
 @cindex Library: shared
 @cindex Dynamic linking
 @cindex Linking: dynamic
-These undefined symbols (functions) are present in another file and will be
-linked to the Arithmetic program every time you run it. Therefore they are
-known as dynamically @emph{linked} libraries @footnote{Do not confuse
-dynamically @emph{linked} libraries with dynamically @emph{loaded}
-libraries. The former (that is discussed here) are only loaded once at the
-program startup. However, the latter can be loaded anytime during the
-program's execution, they are also known as plugins.}. As we saw above,
-static linking is done when the executable is being built. However, when a
-program is dynamically linked to a library, at build-time, the library's
-symbols are only checked with the available libraries: they are not
-actually copied into the program's executable. Every time you run the
-program, the (dynamic) linker will be activated and will try to link the
-program to the installed library before the program starts.
-
-If you want all the libraries to be statically linked to the executables,
-you have to tell Libtool (which Gnuastro uses for the linking) to disable
-shared libraries at configure time@footnote{Libtool is very common and is
-commonly used. Therefore, you can use this option to configure on most
-programs using the GNU build system if you want static linking.}:
+These undefined symbols (functions) are present in another file and will be 
linked to the Arithmetic program every time you run it.
+Therefore they are known as dynamically @emph{linked} libraries@footnote{Do not confuse dynamically @emph{linked} libraries with dynamically @emph{loaded} libraries.
+The former (discussed here) are only loaded once, at program startup.
+However, the latter can be loaded at any time during the program's execution; they are also known as plugins.}.
+As we saw above, static linking is done when the executable is being built.
+However, when a program is dynamically linked to a library, at build-time, the 
library's symbols are only checked with the available libraries: they are not 
actually copied into the program's executable.
+Every time you run the program, the (dynamic) linker will be activated and 
will try to link the program to the installed library before the program starts.
+
+If you want all the libraries to be statically linked to the executables, you have to tell Libtool (which Gnuastro uses for the linking) to disable shared libraries at configure time@footnote{Libtool is very commonly used.
+Therefore, if you want static linking, you can use this option when configuring most programs that use the GNU build system.}:
 
 @example
 $ ./configure --disable-shared
 @end example
 
 @noindent
-Try configuring Gnuastro with the command above, then build and install it
-(as described in @ref{Quick start}). Afterwards, check the @code{gal_}
-symbols in the installed Arithmetic executable like before. You will see
-that they are actually copied this time (have a @code{T} in the second
-column). If the second column doesn't convince you, look at the executable
-file size with the following command:
+Try configuring Gnuastro with the command above, then build and install it (as 
described in @ref{Quick start}).
+Afterwards, check the @code{gal_} symbols in the installed Arithmetic 
executable like before.
+You will see that they are actually copied this time (have a @code{T} in the 
second column).
+If the second column doesn't convince you, look at the executable file size 
with the following command:
 
 @example
 $ ls -lh /usr/local/bin/astarithmetic
 @end example
 
 @noindent
-It should be around 4.2 Megabytes with this static linking. If you
-configure and build Gnuastro again with shared libraries enabled (which is
-the default), you will notice that it is roughly 100 Kilobytes!
+It should be around 4.2 Megabytes with this static linking.
+If you configure and build Gnuastro again with shared libraries enabled (which 
is the default), you will notice that it is roughly 100 Kilobytes!
 
-This huge difference would have been very significant in the old days, but
-with the roughly Terabyte storage drives commonly in use today, it is
-negligible. Fortunately, output file size is not the only benefit of
-dynamic linking: since it links to the libraries at run-time (rather than
-build-time), you don't have to re-build a higher-level program or library
-when an update comes for one of the lower-level libraries it depends
-on. You just install the new low-level library and it will automatically be
-used/linked next time in the programs that use it. To be fair, this also
-creates a few complications@footnote{Both of these can be avoided by
-joining the mailing lists of the lower-level libraries and checking the
-changes in newer versions before installing them. Updates that result in
-such behaviors are generally heavily emphasized in the release notes.}:
+This huge difference would have been very significant in the old days, but with the terabyte-scale storage drives commonly in use today, it is negligible.
+Fortunately, output file size is not the only benefit of dynamic linking: 
since it links to the libraries at run-time (rather than build-time), you don't 
have to re-build a higher-level program or library when an update comes for one 
of the lower-level libraries it depends on.
+You just install the new low-level library and it will automatically be 
used/linked next time in the programs that use it.
+To be fair, this also creates a few complications@footnote{Both of these can 
be avoided by joining the mailing lists of the lower-level libraries and 
checking the changes in newer versions before installing them.
+Updates that result in such behaviors are generally heavily emphasized in the 
release notes.}:
 
 @itemize
 @item
-Reproducibility: Even though your high-level tool has the same version as
-before, with the updated library, you might not get the same results.
+Reproducibility: Even though your high-level tool has the same version as 
before, with the updated library, you might not get the same results.
 @item
-Broken links: if some functions have been changed or removed in the updated
-library, then the linker will abort with an error at run-time. Therefore
-you need to re-build your higher-level program or library.
+Broken links: if some functions have been changed or removed in the updated 
library, then the linker will abort with an error at run-time.
+Therefore you need to re-build your higher-level program or library.
 @end itemize
 
 @cindex GNU C library
@@ -19531,177 +17230,118 @@ library, you might need another tool.} program, for 
example:
 $ ldd /usr/local/bin/astarithmetic
 @end example
 
-Library file names (in their installation directory) start with a
-@file{lib} and their ending (suffix) shows if they are static (@file{.a})
-or dynamic (@file{.so}), as described below. The name of the library is in
-the middle of these two, for example @file{libgsl.a} or
-@file{libgnuastro.a} (GSL and Gnuastro's static libraries), and
-@file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's
-shared library, the numbers may be different).
+Library file names (in their installation directory) start with a @file{lib} 
and their ending (suffix) shows if they are static (@file{.a}) or dynamic 
(@file{.so}), as described below.
+The name of the library is in the middle of these two, for example 
@file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), 
and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's 
shared library, the numbers may be different).
 
 @itemize
 @item
-A static library is known as an archive file and has the @file{.a}
-suffix. A static library is not an executable file.
+A static library is known as an archive file and has the @file{.a} suffix.
+A static library is not an executable file.
 
 @item
 @cindex Shared library versioning
 @cindex Versioning: Shared library
-A shared library ends with the @file{.so.X.Y.Z} suffix and is
-executable. The three numbers in the suffix, describe the version of the
-shared library. Shared library versions are defined to allow multiple
-versions of a shared library simultaneously on a system and to help detect
-possible updates in the library and programs that depend on it by the
-linker.
-
-It is very important to mention that this version number is different from
-the software version number (see @ref{Version numbering}), so do not
-confuse the two. See the ``Library interface versions'' chapter of GNU
-Libtool for more.
-
-For each shared library, we also have two symbolic links ending with
-@file{.so.X} and @file{.so}. They are automatically set by the installer,
-but you can change them (point them to another version of the library) when
-you have multiple versions of a library on your system.
+A shared library ends with the @file{.so.X.Y.Z} suffix and is executable.
+The three numbers in the suffix describe the version of the shared library.
+Shared library versions are defined to allow multiple versions of a shared library to exist simultaneously on a system, and to help the linker detect possible updates in the library and in the programs that depend on it.
+
+It is very important to mention that this version number is different from the 
software version number (see @ref{Version numbering}), so do not confuse the 
two.
+See the ``Library interface versions'' chapter of GNU Libtool for more.
+
+For each shared library, we also have two symbolic links ending with 
@file{.so.X} and @file{.so}.
+They are automatically set by the installer, but you can change them (point 
them to another version of the library) when you have multiple versions of a 
library on your system.
 
 @end itemize
 
 @cindex GNU Libtool
-Libraries that are built with GNU Libtool (including Gnuastro and its
-dependencies), build both static and dynamic libraries by default and
-install them in @file{prefix/lib/} directory (for more on @file{prefix},
-see @ref{Installation directory}). In this way, programs depending on the
-libraries can link with them however they prefer. See the contents of
-@file{/usr/local/lib} with the command below to see both the static and
-shared libraries available there, along with their executable nature and
-the symbolic links:
+Libraries that are built with GNU Libtool (including Gnuastro and its dependencies) build both static and dynamic libraries by default and install them in the @file{prefix/lib/} directory (for more on @file{prefix}, see @ref{Installation directory}).
+In this way, programs depending on the libraries can link with them however 
they prefer.
+List the contents of @file{/usr/local/lib} with the command below to see both the static and shared libraries available there, along with their executable nature and the symbolic links:
 
 @example
 $ ls -l /usr/local/lib/
 @end example
 
-To link with a library, the linker needs to know where to find the
-library. @emph{At compilation time}, these locations can be passed to the
-linker with two separate options (see @ref{Summary and example on
-libraries} for an example) as described below. You can see these options
-and their usage in practice while building Gnuastro (after running
-@command{make}):
+To link with a library, the linker needs to know where to find the library.
+@emph{At compilation time}, these locations can be passed to the linker with 
two separate options (see @ref{Summary and example on libraries} for an 
example) as described below.
+You can see these options and their usage in practice while building Gnuastro 
(after running @command{make}):
 
 @table @option
 @item -L DIR
-Will tell the linker to look into @file{DIR} for the libraries. For example
-@file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}. You can
-make multiple calls to this option, so the linker looks into several
-directories at compilation time. Note that the space between @key{L} and
-the directory is optional and commonly ignored (written as @option{-LDIR}).
+Will tell the linker to look into @file{DIR} for the libraries.
+For example @file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}.
+You can make multiple calls to this option, so the linker looks into several 
directories at compilation time.
+Note that the space between @key{L} and the directory is optional and commonly omitted (written as @option{-LDIR}).
 
 @item -lLIBRARY
-Specify the unique library identifier/name (not containing directory or
-shared/dynamic nature) to be linked with the executable. As discussed
-above, library file names have fixed parts which must not be given to this
-option. So @option{-lgsl} will guide the linker to either look for
-@file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is
-suppose to do). You can link many libraries by repeated calls to this
-option.
-
-@strong{Very important: } The place of this option on the compiler's
-command matters. This is often a source of confusion for beginners, so
-let's assume you have asked the linker to link with library A using this
-option. As soon as the linker confronts this option, it looks into the list
-of the undefined symbols it has found until that point and does a search in
-library A for any of those symbols. If any pending undefined symbol is
-found in library A, it is used. After the search in undefined symbols is
-complete, the contents of library A are completely discarded from the
-linker's memory. Therefore, if a later object file or library uses an
-unlinked symbol in library A, the linker will abort after it has finished
-its search in all the input libraries or object files.
-
-As an example, Gnuastro's @code{gal_fits_img_read} function depends on the
-@code{fits_read_pix} function of CFITSIO (specified with
-@option{-lcfitsio}, which in turn depends on the cURL library, called with
-@option{-lcurl}). So the proper way to link something that uses this
-function is @option{-lgnuastro -lcfitsio -lcurl}. If instead, you give:
-@option{-lcfitsio -lgnuastro} the linker will complain and abort. To avoid
-such linking complexities when using Gnuastro's library, we recommend using
-@ref{BuildProgram}.
+Specify the unique library identifier/name (not containing directory or 
shared/dynamic nature) to be linked with the executable.
+As discussed above, library file names have fixed parts which must not be 
given to this option.
+So @option{-lgsl} will guide the linker to either look for @file{libgsl.a} or @file{libgsl.so} (depending on the type of linking it is supposed to do).
+You can link many libraries by repeated calls to this option.
+
+@strong{Very important: } The place of this option on the compiler's command 
matters.
+This is often a source of confusion for beginners, so let's assume you have 
asked the linker to link with library A using this option.
+As soon as the linker confronts this option, it looks into the list of the 
undefined symbols it has found until that point and does a search in library A 
for any of those symbols.
+If any pending undefined symbol is found in library A, it is used.
+After the search in undefined symbols is complete, the contents of library A 
are completely discarded from the linker's memory.
+Therefore, if a later object file or library uses an unlinked symbol in 
library A, the linker will abort after it has finished its search in all the 
input libraries or object files.
+
+As an example, Gnuastro's @code{gal_fits_img_read} function depends on the 
@code{fits_read_pix} function of CFITSIO (specified with @option{-lcfitsio}, 
which in turn depends on the cURL library, called with @option{-lcurl}).
+So the proper way to link something that uses this function is 
@option{-lgnuastro -lcfitsio -lcurl}.
+If you instead give @option{-lcfitsio -lgnuastro}, the linker will complain and abort (see the sketch after this table).
+To avoid such linking complexities when using Gnuastro's library, we recommend using @ref{BuildProgram}.
 
 @end table
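+
+As a minimal sketch of the ordering rule above (with a hypothetical @file{myprogram.c} that calls @code{gal_fits_img_read}), the first command below links successfully, but the second aborts with undefined-symbol errors:
+
+@example
+## Correct: each library resolves symbols left over from before it.
+$ gcc myprogram.c -lgnuastro -lcfitsio -lcurl -lm -o myprogram
+
+## Wrong: libcfitsio is discarded before libgnuastro asks for its
+## symbols (fits_read_pix remains undefined).
+$ gcc myprogram.c -lcfitsio -lgnuastro -lcurl -lm -o myprogram
+@end example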
 
-If you have compiled and linked your program with a dynamic library, then
-the dynamic linker also needs to know the location of the libraries after
-building the program: @emph{every time} the program is run
-afterwards. Therefore, it may happen that you don't get any errors when
-compiling/linking a program, but are unable to run your program because of
-a failure to find a library. This happens because the dynamic linker hasn't
-found the dynamic library @emph{at run time}.
+If you have compiled and linked your program with a dynamic library, then the 
dynamic linker also needs to know the location of the libraries after building 
the program: @emph{every time} the program is run afterwards.
+Therefore, it may happen that you don't get any errors when compiling/linking 
a program, but are unable to run your program because of a failure to find a 
library.
+This happens because the dynamic linker hasn't found the dynamic library 
@emph{at run time}.
 
-To find the dynamic libraries at run-time, the linker looks into the paths,
-or directories, in the @code{LD_LIBRARY_PATH} environment variable. For a
-discussion on environment variables, especially search paths like
-@code{LD_LIBRARY_PATH}, and how you can add new directories to them, see
-@ref{Installation directory}.
+To find the dynamic libraries at run-time, the linker looks into the paths, or 
directories, in the @code{LD_LIBRARY_PATH} environment variable.
+For a discussion on environment variables, especially search paths like 
@code{LD_LIBRARY_PATH}, and how you can add new directories to them, see 
@ref{Installation directory}.
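+
+For example, assuming a Bourne-style shell and that the library was installed under the hypothetical @file{$prefix/lib}, you can add that directory to the dynamic linker's search path for the current session with:
+
+@example
+$ export LD_LIBRARY_PATH="$prefix/lib:$LD_LIBRARY_PATH"
+@end example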
 
 
 
 @node Summary and example on libraries,  , Linking, Review of library 
fundamentals
 @subsection Summary and example on libraries
 
-After the mostly abstract discussions of @ref{Headers} and @ref{Linking},
-we'll give a small tutorial here. But before that, let's recall the general
-steps of how your source code is prepared, compiled and linked to the
-libraries it depends on so you can run it:
+After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, 
we'll give a small tutorial here.
+But before that, let's recall the general steps of how your source code is prepared, compiled and linked to the libraries it depends on so you can run it (a command-line sketch of these steps follows the list):
 
 @enumerate
 @item
-The @strong{pre-processor} includes the header (@file{.h}) files into the
-function definition (@file{.c}) files, expands pre-processor macros and
-generally prepares the human-readable source for compilation (reviewed in
-@ref{Headers}).
+The @strong{pre-processor} includes the header (@file{.h}) files into the function definition (@file{.c}) files and expands pre-processor macros.
+In general, the pre-processor prepares the human-readable source for compilation (reviewed in @ref{Headers}).
 
 @item
-The @strong{compiler} will translate (compile) the human-readable contents
-of each source (merged @file{.c} and the @file{.h} files, or generally the
-output of the pre-processor) into the computer-readable code of @file{.o}
-files.
+The @strong{compiler} will translate (compile) the human-readable contents of 
each source (merged @file{.c} and the @file{.h} files, or generally the output 
of the pre-processor) into the computer-readable code of @file{.o} files.
 
 @item
-The @strong{linker} will link the called function definitions from various
-compiled files to create one unified object. When the unified product has a
-@code{main} function, this function is the product's only entry point,
-enabling the operating system or user to directly interact with it, so the
-product is a program. When the product doesn't have a @code{main} function,
-the linker's product is a library and its exported functions can be linked
-to other executables (it has many entry points).
+The @strong{linker} will link the called function definitions from various 
compiled files to create one unified object.
+When the unified product has a @code{main} function, this function is the 
product's only entry point, enabling the operating system or user to directly 
interact with it, so the product is a program.
+When the product doesn't have a @code{main} function, the linker's product is 
a library and its exported functions can be linked to other executables (it has 
many entry points).
 @end enumerate
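+
+As promised above, here is a minimal command-line sketch of these three steps (with a hypothetical single-file @file{myprogram.c} that only uses the C math library):
+
+@example
+$ gcc -E myprogram.c > myprogram.i   ## Pre-process only.
+$ gcc -c myprogram.i -o myprogram.o  ## Compile into object code.
+$ gcc myprogram.o -lm -o myprogram   ## Link into a program.
+@end example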
 
 @cindex GCC
 @cindex GNU Compiler Collection
-The GNU Compiler Collection (or GCC for short) will do all three steps. So
-as a first example, from Gnuastro's source, go to @file{tests/lib/}. This
-directory contains the library tests, you can use these as some simple
-tutorials. For this demonstration, we will compile and run the
-@file{arraymanip.c}. This small program will call Gnuastro library for some
-simple operations on an array (open it and have a look). To compile this
-program, run this command inside the directory containing it.
+The GNU Compiler Collection (or GCC for short) will do all three steps.
+So as a first example, from Gnuastro's source, go to @file{tests/lib/}.
+This directory contains the library tests; you can use these as some simple tutorials.
+For this demonstration, we will compile and run @file{arraymanip.c}.
+This small program will call the Gnuastro library for some simple operations on an array (open it and have a look).
+To compile this program, run this command inside the directory containing it.
 
 @example
 $ gcc arraymanip.c -lgnuastro -lm -o arraymanip
 @end example
 
 @noindent
-The two @option{-lgnuastro} and @option{-lm} options (in this order) tell
-GCC to first link with the Gnuastro library and then with C's math
-library. The @option{-o} option is used to specify the name of the output
-executable, without it the output file name will be @file{a.out} (on most
-OSs), independent of your input file name(s).
+The two @option{-lgnuastro} and @option{-lm} options (in this order) tell GCC 
to first link with the Gnuastro library and then with C's math library.
+The @option{-o} option is used to specify the name of the output executable; without it, the output file name will be @file{a.out} (on most OSs), independent of your input file name(s).
 
-If your top Gnuastro installation directory (let's call it @file{$prefix},
-see @ref{Installation directory}) is not recognized by GCC, you will get
-pre-processor errors for unknown header files. Once you fix it, you will
-get linker errors for undefined functions. To fix both, you should run GCC
-as follows: additionally telling it which directories it can find
-Gnuastro's headers and compiled library (see @ref{Headers} and
-@ref{Linking}):
+If your top Gnuastro installation directory (let's call it @file{$prefix}, see 
@ref{Installation directory}) is not recognized by GCC, you will get 
pre-processor errors for unknown header files.
+Once you fix it, you will get linker errors for undefined functions.
+To fix both, you should run GCC as follows, additionally telling it in which directories it can find Gnuastro's headers and compiled library (see @ref{Headers} and @ref{Linking}):
 
 @example
 $ gcc -I$prefix/include -L$prefix/lib arraymanip.c -lgnuastro -lm     \
@@ -19709,53 +17349,35 @@ $ gcc -I$prefix/include -L$prefix/lib arraymanip.c 
-lgnuastro -lm     \
 @end example
 
 @noindent
-This single command has done all the pre-processor, compilation and linker
-operations. Therefore no intermediate files (object files in particular)
-were created, only a single output executable was created. You are now
-ready to run the program with:
+This single command has done all the pre-processor, compilation and linker 
operations.
+Therefore no intermediate files (object files in particular) were kept; only a single output executable was created.
+You are now ready to run the program with:
 
 @example
 $ ./arraymanip
 @end example
 
-The Gnuastro functions called by this program only needed to be linked with
-the C math library. But if your program needs WCS coordinate
-transformations, needs to read a FITS file, needs special math operations
-(which include its linear algebra operations), or you want it to run on
-multiple CPU threads, you also need to add these libraries in the call to
-GCC: @option{-lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm}. In
-@ref{Gnuastro library}, where each function is documented, it is mentioned
-which libraries (if any) must also be linked when you call a function. If
-you feel all these linkings can be confusing, please consider Gnuastro's
-@ref{BuildProgram} program.
+The Gnuastro functions called by this program only needed to be linked with 
the C math library.
+But if your program needs WCS coordinate transformations, needs to read a FITS 
file, needs special math operations (which include its linear algebra 
operations), or you want it to run on multiple CPU threads, you also need to 
add these libraries in the call to GCC: @option{-lgnuastro -lwcs -lcfitsio 
-lgsl -lgslcblas -pthread -lm}.
+In @ref{Gnuastro library}, where each function is documented, it is mentioned 
which libraries (if any) must also be linked when you call a function.
+If you feel all these linkings can be confusing, please consider Gnuastro's 
@ref{BuildProgram} program.
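+
+For example, a program (the hypothetical @file{myprogram.c} below) that uses all of those features could be built with a command like this:
+
+@example
+$ gcc -I$prefix/include -L$prefix/lib myprogram.c                  \
+      -lgnuastro -lwcs -lcfitsio -lgsl -lgslcblas -pthread -lm     \
+      -o myprogram
+@end example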
 
 
 @node BuildProgram, Gnuastro library, Review of library fundamentals, Library
 @section BuildProgram
-The number and order of libraries that are necessary for linking a program
-with Gnuastro library might be too confusing when you need to compile a
-small program for one particular job (with one source file). BuildProgram
-will use the information gathered during configuring Gnuastro and link with
-all the appropriate libraries on your system. This will allow you to easily
-compile, link and run programs that use Gnuastro's library with one simple
-command and not worry about which libraries to link to, or the linking
-order.
+The number and order of libraries that are necessary for linking a program with the Gnuastro library may be confusing when you need to compile a small program for one particular job (with one source file).
+BuildProgram will use the information gathered during configuring Gnuastro and 
link with all the appropriate libraries on your system.
+This will allow you to easily compile, link and run programs that use 
Gnuastro's library with one simple command and not worry about which libraries 
to link to, or the linking order.
 
 @cindex GNU Libtool
-BuildProgram uses GNU Libtool to find the necessary libraries to link
-against (GNU Libtool is the same program that builds all of Gnuastro's
-libraries and programs when you run @code{make}). So in the future, if
-Gnuastro's prerequisite libraries change or other libraries are added, you
-don't have to worry, you can just run BuildProgram and internal linking
-will be done correctly.
+BuildProgram uses GNU Libtool to find the necessary libraries to link against 
(GNU Libtool is the same program that builds all of Gnuastro's libraries and 
programs when you run @code{make}).
+So in the future, if Gnuastro's prerequisite libraries change or other 
libraries are added, you don't have to worry, you can just run BuildProgram and 
internal linking will be done correctly.
 
 @cartouche
 @noindent
-@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU
-Libtool, other implementations don't have some necessary features. If GNU
-Libtool isn't available at Gnuastro's configure time, you will get a notice
-at the end of the configuration step and BuildProgram will not be built or
-installed. Please see @ref{Optional dependencies} for more information.
+@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU Libtool; other implementations don't have some necessary features.
+If GNU Libtool isn't available at Gnuastro's configure time, you will get a 
notice at the end of the configuration step and BuildProgram will not be built 
or installed.
+Please see @ref{Optional dependencies} for more information.
 @end cartouche
 
 @menu
@@ -19765,10 +17387,8 @@ installed. Please see @ref{Optional dependencies} for 
more information.
 @node Invoking astbuildprog,  , BuildProgram, BuildProgram
 @subsection Invoking BuildProgram
 
-BuildProgram will compile and link a C source program with Gnuastro's
-library and all its dependencies, greatly facilitating the compilation and
-running of small programs that use Gnuastro's library. The executable name
-is @file{astbuildprog} with the following general template:
+BuildProgram will compile and link a C source program with Gnuastro's library 
and all its dependencies, greatly facilitating the compilation and running of 
small programs that use Gnuastro's library.
+The executable name is @file{astbuildprog} with the following general template:
 
 @example
 $ astbuildprog [OPTION...] C_SOURCE_FILE
@@ -19796,30 +17416,21 @@ $ astbuildprog -Lother -Iother/dir myprogram.c
 $ astbuildprog --onlybuild myprogram.c
 @end example
 
-If BuildProgram is to run, it needs a C programming language source file as
-input. By default it will compile and link the program to build the a final
-executable file and run it. The built executable name can be set with the
-optional @option{--output} option. When no output name is set, BuildProgram
-will use Gnuastro's @ref{Automatic output}, and remove the suffix of the
-input and use that as the output name. For the full list of options that
-BuildProgram shares with other Gnuastro programs, see @ref{Common
-options}. You may also use Gnuastro's @ref{Configuration files} to specify
-other libraries/headers to use for special directories and not have to type
-them in every time.
-
-The first argument is considered to be the C source file that must be
-compiled and linked. Any other arguments (non-option tokens on the
-command-line) will be passed onto the program when BuildProgram wants to
-run it. Recall that by default BuildProgram will run the program after
-building it. This behavior can be disabled with the @code{--onlybuild}
-option.
+If BuildProgram is to run, it needs a C programming language source file as input.
+By default it will compile and link the program to build a final executable file and run it.
+The built executable name can be set with the optional @option{--output} option.
+When no output name is set, BuildProgram will use Gnuastro's @ref{Automatic output}: it will remove the suffix of the input and use that as the output name.
+For the full list of options that BuildProgram shares with other Gnuastro programs, see @ref{Common options}.
+You may also use Gnuastro's @ref{Configuration files} to specify other libraries/headers for special directories, so you don't have to type them in every time (see the sketch after this paragraph).
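+
+For example, a hypothetical @file{astbuildprog.conf} (see @ref{Configuration files} for where such files are kept; we assume the usual layout of one option name and its value per line) might contain:
+
+@example
+# Illustrative values only:
+linkdir /home/yourname/.local/lib
+linklib cfitsio
+@end example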
+
+The first argument is considered to be the C source file that must be compiled 
and linked.
+Any other arguments (non-option tokens on the command-line) will be passed 
onto the program when BuildProgram wants to run it.
+Recall that by default BuildProgram will run the program after building it.
+This behavior can be disabled with the @option{--onlybuild} option.
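+
+For example, in the hypothetical call below, @file{image.fits} and @code{5} are not used by BuildProgram itself; they are passed to the compiled program when it is run:
+
+@example
+$ astbuildprog myprogram.c image.fits 5
+@end example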
 
 @cindex GNU Make
-When the @option{--quiet} option (see @ref{Operating mode options}) is not
-called, BuildPrograms will print the compilation and running commands. Once
-your program grows and you break it up into multiple files (which are much
-more easily managed with Make), you can use the linking flags of the
-non-quiet output in your @code{Makefile}.
+When the @option{--quiet} option (see @ref{Operating mode options}) is not called, BuildProgram will print the compilation and running commands.
+Once your program grows and you break it up into multiple files (which are 
much more easily managed with Make), you can use the linking flags of the 
non-quiet output in your @code{Makefile}.
 
 @table @option
 @item -I STR
@@ -19827,112 +17438,86 @@ non-quiet output in your @code{Makefile}.
 @cindex GNU CPP
 @cindex C Pre-Processor
 Directory to search for files that you @code{#include} in your C program.
-Note that headers relating to Gnuastro and its dependencies don't need this
-option. This is only necessary if you want to use other headers. It may be
-called multiple times and order matters. This directory will be searched
-before those of Gnuastro's build and also the system search
-directories. See @ref{Headers} for a thorough introduction.
-
-From the GNU C Pre-Processor manual: ``Add the directory @code{STR} to the
-list of directories to be searched for header files. Directories named by
-@option{-I} are searched before the standard system include directories.
-If the directory @code{STR} is a standard system include directory, the
-option is ignored to ensure that the default search order for system
-directories and the special treatment of system headers are not defeated''.
+Note that headers relating to Gnuastro and its dependencies don't need this 
option.
+This is only necessary if you want to use other headers.
+It may be called multiple times and order matters.
+This directory will be searched before those of Gnuastro's build and also the 
system search directories.
+See @ref{Headers} for a thorough introduction.
+
+From the GNU C Pre-Processor manual: ``Add the directory @code{STR} to the 
list of directories to be searched for header files.
+Directories named by @option{-I} are searched before the standard system 
include directories.
+If the directory @code{STR} is a standard system include directory, the option 
is ignored to ensure that the default search order for system directories and 
the special treatment of system headers are not defeated''.
 
 @item -L STR
 @itemx --linkdir=STR
 @cindex GNU Libtool
-Directory to search for compiled libraries to link the program with. Note
-that all the directories that Gnuastro was built with will already be used
-by BuildProgram (GNU Libtool). This option is only necessary if your
-libraries are in other directories. Multiple calls to this option are
-possible and order matters. This directory will be searched before those of
-Gnuastro's build and also the system search directories. See @ref{Linking}
-for a thorough introduction.
+Directory to search for compiled libraries to link the program with.
+Note that all the directories that Gnuastro was built with will already be 
used by BuildProgram (GNU Libtool).
+This option is only necessary if your libraries are in other directories.
+Multiple calls to this option are possible and order matters.
+This directory will be searched before those of Gnuastro's build and also the 
system search directories.
+See @ref{Linking} for a thorough introduction.
 
 @item -l STR
 @itemx --linklib=STR
-Library to link with your program. Note that all the libraries that
-Gnuastro was built with will already be linked by BuildProgram (GNU
-Libtool). This option is only necessary if you want to link with other
-directories. Multiple calls to this option are possible and order
-matters. This library will be linked before Gnuastro's library or its
-dependencies. See @ref{Linking} for a thorough introduction.
+Library to link with your program.
+Note that all the libraries that Gnuastro was built with will already be 
linked by BuildProgram (GNU Libtool).
+This option is only necessary if you want to link with other libraries.
+Multiple calls to this option are possible and order matters.
+This library will be linked before Gnuastro's library or its dependencies.
+See @ref{Linking} for a thorough introduction.
 
 @item -O INT/STR
 @itemx --optimize=INT/STR
 @cindex Optimization
 @cindex GNU Compiler Collection
-Compiler optimization level: 0 (for no optimization, good debugging), 1, 2,
-3 (for the highest level of optimizations). From the GNU Compiler
-Collection (GCC) manual: ``Without any optimization option, the compiler's
-goal is to reduce the cost of compilation and to make debugging produce the
-expected results.  Statements are independent: if you stop the program with
-a break point between statements, you can then assign a new value to any
-variable or change the program counter to any other statement in the
-function and get exactly the results you expect from the source
-code. Turning on optimization flags makes the compiler attempt to improve
-the performance and/or code size at the expense of compilation time and
-possibly the ability to debug the program.'' Please see your compiler's
-manual for the full list of acceptable values to this option.
+Compiler optimization level: 0 (for no optimization, good debugging), 1, 2, 3 
(for the highest level of optimizations).
+From the GNU Compiler Collection (GCC) manual: ``Without any optimization 
option, the compiler's goal is to reduce the cost of compilation and to make 
debugging produce the expected results.
+Statements are independent: if you stop the program with a break point between 
statements, you can then assign a new value to any variable or change the 
program counter to any other statement in the function and get exactly the 
results you expect from the source code.
+Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.''
+Please see your compiler's manual for the full list of acceptable values to this option.
 
 @item -g
 @itemx --debug
 @cindex Debug
-Emit extra information in the compiled binary for use by a debugger. When
-calling this option, it is best to explicitly disable optimization with
-@option{-O0}. To combine both options you can run @option{-gO0} (see
-@ref{Options} for how short options can be merged into one).
+Emit extra information in the compiled binary for use by a debugger.
+When calling this option, it is best to explicitly disable optimization with 
@option{-O0}.
+To combine both options you can run @option{-gO0} (see @ref{Options} for how 
short options can be merged into one).
 
 @item -W STR
 @itemx --warning=STR
-Print compiler warnings on command-line during compilation. ``Warnings are
-diagnostic messages that report constructions that are not inherently
-erroneous but that are risky or suggest there may have been an error.''
-(from the GCC manual). It is always recommended to compile your programs
-with warnings enabled.
-
-All compiler warning options that start with @option{W} are usable by this
-option in BuildProgram also, see your compiler's manual for the full
-list. Some of the most common values to this option are: @option{pedantic}
-(Warnings related to standard C) and @option{all} (all issues the compiler
-confronts).
+Print compiler warnings on command-line during compilation.
+``Warnings are diagnostic messages that report constructions that are not 
inherently erroneous but that are risky or suggest there may have been an 
error.'' (from the GCC manual).
+It is always recommended to compile your programs with warnings enabled.
+
+All compiler warning options that start with @option{W} are usable by this option in BuildProgram as well; see your compiler's manual for the full list.
+Some of the most common values to this option are: @option{pedantic} (Warnings 
related to standard C) and @option{all} (all issues the compiler confronts).
 
 @item -t
 @itemx --tag=STR
-The language configuration information. Libtool can build objects and
-libraries in many languages. In many cases, it can identify the language
-automatically, but when it doesn't you can use this option to explicitly
-notify Libtool of the language. The acceptable values are: @code{CC} for C,
-@code{CXX} for C++, @code{GCJ} for Java, @code{F77} for Fortran 77,
-@code{FC} for Fortran, @code{GO} for Go and @code{RC} for Windows
-Resource. Note that the Gnuastro library is not yet fully compatible with
-all these languages.
+The language configuration information.
+Libtool can build objects and libraries in many languages.
+In many cases, it can identify the language automatically, but when it doesn't 
you can use this option to explicitly notify Libtool of the language.
+The acceptable values are: @code{CC} for C, @code{CXX} for C++, @code{GCJ} for 
Java, @code{F77} for Fortran 77, @code{FC} for Fortran, @code{GO} for Go and 
@code{RC} for Windows Resource.
+Note that the Gnuastro library is not yet fully compatible with all these 
languages.
 
 @item -b
 @itemx --onlybuild
-Only build the program, don't run it. By default, the built program is
-immediately run afterwards.
+Only build the program, don't run it.
+By default, the built program is immediately run afterwards.
 
 @item -d
 @itemx --deletecompiled
-Delete the compiled binary file after running it. This option is only
-relevant when the compiled program is run after being built. In other
-words, it is only relevant when @option{--onlybuild} is not called. It can
-be useful when you are busy testing a program or just want a fast result
-and the actual binary/compiled file is not of later use.
+Delete the compiled binary file after running it.
+This option is only relevant when the compiled program is run after being 
built.
+In other words, it is only relevant when @option{--onlybuild} is not called.
+It can be useful when you are busy testing a program or just want a fast 
result and the actual binary/compiled file is not of later use.
 
 @item -a STR
 @itemx --la=STR
-Use the given @file{.la} file (Libtool control file) instead of the one
-that was produced from Gnuastro's configuration results. The Libtool
-control file keeps all the necessary information for building and linking a
-program with a library built by Libtool. The default
-@file{prefix/lib/libgnuastro.la} keeps all the information necessary to
-build a program using the Gnuastro library gathered during configure time
-(see @ref{Installation directory} for prefix). This option is useful when
-you prefer to use another Libtool control file.
+Use the given @file{.la} file (Libtool control file) instead of the one that 
was produced from Gnuastro's configuration results.
+The Libtool control file keeps all the necessary information for building and 
linking a program with a library built by Libtool.
+The default @file{prefix/lib/libgnuastro.la} keeps all the information 
necessary to build a program using the Gnuastro library gathered during 
configure time (see @ref{Installation directory} for prefix).
+This option is useful when you prefer to use another Libtool control file.
 @end table
 
 
@@ -19942,55 +17527,38 @@ you prefer to use another Libtool control file.
 @node Gnuastro library, Library demo programs, BuildProgram, Library
 @section Gnuastro library
 
-Gnuastro library's programming constructs (function declarations, macros,
-data structures, or global variables) are classified by context into
-multiple header files (see @ref{Headers})@footnote{Within Gnuastro's
-source, all installed @file{.h} files in @file{lib/gnuastro/} are
-accompanied by a @file{.c} file in @file{/lib/}.}. In this section, the
-functions in each header will be discussed under a separate sub-section,
-which includes the name of the header. Assuming a function declaration is
-in @file{headername.h}, you can include its declaration in your source code
-with:
+Gnuastro library's programming constructs (function declarations, macros, data 
structures, or global variables) are classified by context into multiple header 
files (see @ref{Headers})@footnote{Within Gnuastro's source, all installed 
@file{.h} files in @file{lib/gnuastro/} are accompanied by a @file{.c} file in 
@file{/lib/}.}.
+In this section, the functions in each header will be discussed under a 
separate sub-section, which includes the name of the header.
+Assuming a function declaration is in @file{headername.h}, you can include its 
declaration in your source code with:
 
 @example
 # include <gnuastro/headername.h>
 @end example
 
 @noindent
-The names of all constructs in @file{headername.h} are prefixed with
-@code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros). The
-@code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
+The names of all constructs in @file{headername.h} are prefixed with 
@code{gal_headername_} (or @code{GAL_HEADERNAME_} for macros).
+The @code{gal_} prefix stands for @emph{G}NU @emph{A}stronomy @emph{L}ibrary.
 
-Gnuastro library functions are compiled into a single file which can be
-linked on the command-line with the @option{-lgnuastro} option. See
-@ref{Linking} and @ref{Summary and example on libraries} for an
-introduction on linking and some fully working examples of the
-libraries.
+Gnuastro library functions are compiled into a single file which can be linked 
on the command-line with the @option{-lgnuastro} option.
+See @ref{Linking} and @ref{Summary and example on libraries} for an 
introduction on linking and some fully working examples of the libraries.
 
-Gnuastro's library is a high-level library which depends on lower level
-libraries for some operations (see @ref{Dependencies}). Therefore if at
-least one of Gnuastro's functions in your program use functions from the
-dependencies, you will also need to link those dependencies after linking
-with Gnuastro. See @ref{BuildProgram} for a convenient way to deal with the
-dependencies. BuildProgram will take care of the libraries to link with
-your program (which uses the Gnuastro library), and can even run the built
-program afterwards. Therefore it allows you to conveniently focus on your
-exciting science/research when using Gnuastro's libraries.
+Gnuastro's library is a high-level library which depends on lower level 
libraries for some operations (see @ref{Dependencies}).
+Therefore if at least one of Gnuastro's functions in your program uses functions from the dependencies, you will also need to link those dependencies after linking with Gnuastro.
+See @ref{BuildProgram} for a convenient way to deal with the dependencies.
+BuildProgram will take care of the libraries to link with your program (which 
uses the Gnuastro library), and can even run the built program afterwards.
+Therefore it allows you to conveniently focus on your exciting 
science/research when using Gnuastro's libraries.
 
 
 
 @cartouche
 @noindent
-@strong{Libraries are still under heavy development: } Gnuastro was
-initially created to be a collection of command-line programs. However, as
-the programs and their the shared functions grew, internal (not installed)
-libraries were added. Since the 0.2 release, the libraries are
-install-able. Hence the libraries are currently under heavy development and
-will significantly evolve between releases and will become more mature and
-stable in due time. It will stabilize with the removal of this
-notice. Check the @file{NEWS} file for interface changes. If you use the
-Info version of this manual (see @ref{Info}), you don't have to worry: the
-documentation will correspond to your installed version.
+@strong{Libraries are still under heavy development: } Gnuastro was initially created to be a collection of command-line programs.
+However, as the programs and their shared functions grew, internal (not installed) libraries were added.
+Since the 0.2 release, the libraries are installable.
+Hence the libraries are currently under heavy development and will 
significantly evolve between releases and will become more mature and stable in 
due time.
+It will stabilize with the removal of this notice.
+Check the @file{NEWS} file for interface changes.
+If you use the Info version of this manual (see @ref{Info}), you don't have to 
worry: the documentation will correspond to your installed version.
 @end cartouche
 
 @menu
@@ -20027,19 +17595,13 @@ documentation will correspond to your installed 
version.
 @node Configuration information, Multithreaded programming, Gnuastro library, 
Gnuastro library
 @subsection Configuration information (@file{config.h})
 
-The @file{gnuastro/config.h} header contains information about the full
-Gnuastro installation on your system. Gnuastro developers should note that
-this is the only header that is not available within Gnuastro, it is only
-available to a Gnuastro library user @emph{after} installation. Within
-Gnuastro, @file{config.h} (which is included in every Gnuastro @file{.c}
-file, see @ref{Coding conventions}) has more than enough information about
-the overall Gnuastro installation.
+The @file{gnuastro/config.h} header contains information about the full 
Gnuastro installation on your system.
+Gnuastro developers should note that this is the only header that is not available within Gnuastro; it is only available to a Gnuastro library user @emph{after} installation.
+Within Gnuastro, @file{config.h} (which is included in every Gnuastro 
@file{.c} file, see @ref{Coding conventions}) has more than enough information 
about the overall Gnuastro installation.
 
 @deffn Macro GAL_CONFIG_VERSION
-This macro can be used as a string
-literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}}
-containing the version of Gnuastro that is being used. See @ref{Version
-numbering} for the version formats. For example:
+This macro can be used as a string 
literal@footnote{@url{https://en.wikipedia.org/wiki/String_literal}} containing 
the version of Gnuastro that is being used.
+See @ref{Version numbering} for the version formats.
+For example:
 
 @example
 printf("Gnuastro version: %s\n", GAL_CONFIG_VERSION);
@@ -20055,38 +17617,31 @@ char *gnuastro_version=GAL_CONFIG_VERSION;
 
 
 @deffn Macro GAL_CONFIG_HAVE_LIBGIT2
-Libgit2 is an optional dependency of Gnuastro (see @ref{Optional
-dependencies}). When it is installed and detected at configure time, this
-macro will have a value of @code{1} (one). Otherwise, it will have a value
-of @code{0} (zero). Gnuastro also comes with some wrappers to make it
-easier to use libgit2 (see @ref{Git wrappers}).
+Libgit2 is an optional dependency of Gnuastro (see @ref{Optional 
dependencies}).
+When it is installed and detected at configure time, this macro will have a 
value of @code{1} (one).
+Otherwise, it will have a value of @code{0} (zero).
+Gnuastro also comes with some wrappers to make it easier to use libgit2 (see 
@ref{Git wrappers}).
 @end deffn
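+
+For example, this minimal sketch (only assuming the macro and header described here) reports the configure-time detection:
+
+@example
+#include <stdio.h>
+#include <gnuastro/config.h>
+
+int
+main(void)
+@{
+  /* GAL_CONFIG_HAVE_LIBGIT2 is 1 or 0, set at configure time. */
+#if GAL_CONFIG_HAVE_LIBGIT2 == 1
+  printf("Gnuastro was built with libgit2.\n");
+#else
+  printf("Gnuastro was built without libgit2.\n");
+#endif
+  return 0;
+@}
+@end example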
 
 @deffn Macro GAL_CONFIG_HAVE_FITS_IS_REENTRANT
 @cindex CFITSIO
-This macro will have a value of 1 when the CFITSIO of the host system has
-the @code{fits_is_reentrant} function (available from CFITSIO version
-3.30). This function is used to see if CFITSIO was configured to read a
-FITS file simultaneously on different threads.
+This macro will have a value of 1 when the CFITSIO of the host system has the 
@code{fits_is_reentrant} function (available from CFITSIO version 3.30).
+This function is used to see if CFITSIO was configured to read a FITS file 
simultaneously on different threads.
 @end deffn
 
 
 @deffn Macro GAL_CONFIG_HAVE_WCSLIB_VERSION
-WCSLIB is the reference library for world coordinate system transformation
-(see @ref{WCSLIB} and @ref{World Coordinate System}). However, only more
-recent versions of WCSLIB also provide its version number. If the WCSLIB
-that is installed on the system provides its version (through the possibly
-existing @code{wcslib_version} function), this macro will have a value of
-one, otherwise it will have a value of zero.
+WCSLIB is the reference library for world coordinate system transformation 
(see @ref{WCSLIB} and @ref{World Coordinate System}).
+However, only the more recent versions of WCSLIB provide their version number.
+If the WCSLIB that is installed on the system provides its version (through the possibly existing @code{wcslib_version} function), this macro will have a value of one; otherwise, it will have a value of zero.
 @end deffn
 
 @deffn Macro GAL_CONFIG_HAVE_PTHREAD_BARRIER
-The POSIX threads standard define barriers as an optional
-requirement. Therefore, some operating systems choose to not include it. As
-one of the @command{./configure} step checks, Gnuastro we check if your
-system has this POSIX thread barriers. If so, this macro will have a value
-of @code{1}, otherwise it will have a value of @code{0}. see
-@ref{Implementation of pthread_barrier} for more.
+The POSIX threads standard defines barriers as an optional requirement.
+Therefore, some operating systems choose to not include them.
+As one of the checks during the @command{./configure} step, Gnuastro will check if your system has POSIX thread barriers.
+If so, this macro will have a value of @code{1}; otherwise, it will have a value of @code{0}.
+See @ref{Implementation of pthread_barrier} for more.
 @end deffn
 
 @cindex 32-bit
@@ -20095,10 +17650,9 @@ of @code{1}, otherwise it will have a value of 
@code{0}. see
 @cindex bit-64
 @deffn Macro GAL_CONFIG_SIZEOF_LONG
 @deffnx Macro GAL_CONFIG_SIZEOF_SIZE_T
-The size of (number of bytes in) the system's @code{long} and @code{size_t}
-types. Their values are commonly either 4 or 8 for 32-bit and 64-bit
-systems. You can also get this value with the expression `@code{sizeof
-size_t}' for example without having to include this header.
+The size of (number of bytes in) the system's @code{long} and @code{size_t} 
types.
+Their values are commonly either 4 or 8 for 32-bit and 64-bit systems.
+You can also get this value with the expression @code{sizeof(size_t)}, for example, without having to include this header.
 @end deffn
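+
+For example, this small sketch compares the configure-time value with the compiler's own answer (they should agree on a healthy installation):
+
+@example
+#include <stdio.h>
+#include <gnuastro/config.h>
+
+int
+main(void)
+@{
+  /* The macros are fixed at Gnuastro's configure time; sizeof is
+     evaluated by the compiler building this program.             */
+  printf("long: %d, size_t: %d (sizeof: %zu)\n",
+         GAL_CONFIG_SIZEOF_LONG, GAL_CONFIG_SIZEOF_SIZE_T,
+         sizeof(size_t));
+  return 0;
+@}
+@end example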
 
 @node Multithreaded programming, Library data types, Configuration 
information, Gnuastro library


