From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 73f7823: Book: source in new convention of one sentence per line (half complete)
Date: Mon, 16 Sep 2019 16:10:23 -0400 (EDT)

branch: master
commit 73f78235f29d04deeca2b7cfe201223a4623656f
Author: Mohammad Akhlaghi <address@hidden>
Commit: Mohammad Akhlaghi <address@hidden>

    Book: source in new convention of one sentence per line (half complete)
    
    Until now, the book's source was written with a policy of
    75-character lines. Since the book is read by many people and
    suggestions to improve it are frequent, it is edited often. But
    under the fixed-width convention, changing even one word can
    re-fill a whole paragraph. This doesn't fit nicely into the logic
    of Git and doesn't make it immediately clear what was changed in
    each commit.
    
    With this commit, the printed text of the book is now written in
    the convention of one sentence per line. In this way, when a change
    is made in a sentence, only that sentence will be marked by Git and
    we will have much better control over the book's history.
    
    Almost half of the book has now been converted to this format; in
    due time, we can complete the rest (before the next release).
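    
    To illustrate the benefit (with a hypothetical edit, not a line
    from the book's actual source), removing the two words
    "manipulation and" under the old fixed-width convention re-fills
    the paragraph, so Git marks all the re-filled lines as changed:
    
        Gnuastro provides various programs and libraries for
       -astronomical data manipulation and analysis, all sharing
       -the same basic command-line user interface.
       +astronomical data analysis, all sharing the same basic
       +command-line user interface.
    
    Under the one-sentence-per-line convention, the same edit touches
    exactly one line:
    
       -Gnuastro provides various programs and libraries for astronomical data manipulation and analysis, all sharing the same basic command-line user interface.
       +Gnuastro provides various programs and libraries for astronomical data analysis, all sharing the same basic command-line user interface.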
---
 doc/gnuastro.texi | 13914 ++++++++++++++++++----------------------------------
 1 file changed, 4854 insertions(+), 9060 deletions(-)

diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 3436fd6..8a56c57 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -1,4 +1,12 @@
 \input texinfo @c -*-texinfo-*-
+
+@c ONE SENTENCE PER LINE
+@c ---------------------
+@c For main printed text in this file, to allow easy tracking of history
+@c with Git, we are following a one-sentence-per-line convention.
+@c
+@c Since the manual is long, this is being done gradually from the start.
+
 @c %**start of header
 @setfilename gnuastro.info
 @settitle GNU Astronomy Utilities
@@ -25,19 +33,14 @@
 
 @c Copyright information:
 @copying
-This book documents version @value{VERSION} of the GNU Astronomy Utilities
-(Gnuastro). Gnuastro provides various programs and libraries for
-astronomical data manipulation and analysis.
+This book documents version @value{VERSION} of the GNU Astronomy Utilities (Gnuastro).
+Gnuastro provides various programs and libraries for astronomical data manipulation and analysis.
 
 Copyright @copyright{} 2015-2019 Free Software Foundation, Inc.
 
 @quotation
-Permission is granted to copy, distribute and/or modify this document under
-the terms of the GNU Free Documentation License, Version 1.3 or any later
-version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the
-license is included in the section entitled ``GNU Free Documentation
-License''.
+Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+A copy of the license is included in the section entitled ``GNU Free Documentation License''.
 @end quotation
 @end copying
 
@@ -149,15 +152,8 @@ commits):
 @*
 @*
 @*
-For myself, I am interested in science and in philosophy only because I
-want to learn something about the riddle of the world in which we live, and
-the riddle of man's knowledge of that world. And I believe that only a
-revival of interest in these riddles can save the sciences and philosophy
-from narrow specialization and from an obscurantist faith in the expert's
-special skill, and in his personal knowledge and authority; a faith that so
-well fits our `post-rationalist' and `post-critical' age, proudly dedicated
-to the destruction of the tradition of rational philosophy, and of rational
-thought itself.
+For myself, I am interested in science and in philosophy only because I want to learn something about the riddle of the world in which we live, and the riddle of man's knowledge of that world.
+And I believe that only a revival of interest in these riddles can save the sciences and philosophy from narrow specialization and from an obscurantist faith in the expert's special skill, and in his personal knowledge and authority; a faith that so well fits our `post-rationalist' and `post-critical' age, proudly dedicated to the destruction of the tradition of rational philosophy, and of rational thought itself.
 @author Karl Popper. The logic of scientific discovery. 1959.
 @end quotation
 
@@ -184,13 +180,9 @@ thought itself.
 @insertcopying
 
 @ifhtml
-To navigate easily in this web page, you can use the @code{Next},
-@code{Previous}, @code{Up} and @code{Contents} links in the top and
-bottom of each page. @code{Next} and @code{Previous} will take you to
-the next or previous topic in the same level, for example from chapter
-1 to chapter 2 or vice versa. To go to the sections or subsections,
-you have to click on the menu entries that are there when ever a
-sub-component to a title is present.
+To navigate easily in this web page, you can use the @code{Next}, @code{Previous}, @code{Up} and @code{Contents} links in the top and bottom of each page.
+@code{Next} and @code{Previous} will take you to the next or previous topic in the same level, for example from chapter 1 to chapter 2 or vice versa.
+To go to the sections or subsections, you have to click on the menu entries that are there when ever a sub-component to a title is present.
 @end ifhtml
 
 @end ifnottex
@@ -728,34 +720,19 @@ SAO ds9
 
 @cindex GNU coding standards
 @cindex GNU Astronomy Utilities (Gnuastro)
-GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of
-separate programs and libraries for the manipulation and analysis of
-astronomical data. All the programs share the same basic command-line user
-interface for the comfort of both the users and developers. Gnuastro is
-written to comply fully with the GNU coding standards so it integrates
-finely with the GNU/Linux operating system. This also enables astronomers
-to expect a fully familiar experience in the source code, building,
-installing and command-line user interaction that they have seen in all the
-other GNU software that they use. The official and always up to date
-version of this book (or manual) is freely available under @ref{GNU Free
-Doc. License} in various formats (PDF, HTML, plain text, info, and as its
-Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
-
-For users who are new to the GNU/Linux environment, unless otherwise
-specified most of the topics in @ref{Installation} and @ref{Common program
-behavior} are common to all GNU software, for example installation,
-managing command-line options or getting help (also see @ref{New to
-GNU/Linux?}). So if you are new to this empowering environment, we
-encourage you to go through these chapters carefully. They can be a
-starting point from which you can continue to learn more from each
-program's own manual and fully benefit from and enjoy this wonderful
-environment. Gnuastro also comes with a large set of libraries, so you can
-write your own programs using Gnuastro's building blocks, see @ref{Review
-of library fundamentals} for an introduction.
-
-In Gnuastro, no change to any program or library will be committed to its
-history, before it has been fully documented here first. As discussed in
-@ref{Science and its tools} this is a founding principle of the Gnuastro.
+GNU Astronomy Utilities (Gnuastro) is an official GNU package consisting of separate programs and libraries for the manipulation and analysis of astronomical data.
+All the programs share the same basic command-line user interface for the comfort of both the users and developers.
+Gnuastro is written to comply fully with the GNU coding standards so it integrates finely with the GNU/Linux operating system.
+This also enables astronomers to expect a fully familiar experience in the source code, building, installing and command-line user interaction that they have seen in all the other GNU software that they use.
+The official and always up to date version of this book (or manual) is freely available under @ref{GNU Free Doc. License} in various formats (PDF, HTML, plain text, info, and as its Texinfo source) at @url{http://www.gnu.org/software/gnuastro/manual/}.
+
+For users who are new to the GNU/Linux environment, unless otherwise specified most of the topics in @ref{Installation} and @ref{Common program behavior} are common to all GNU software, for example installation, managing command-line options or getting help (also see @ref{New to GNU/Linux?}).
+So if you are new to this empowering environment, we encourage you to go through these chapters carefully.
+They can be a starting point from which you can continue to learn more from each program's own manual and fully benefit from and enjoy this wonderful environment.
+Gnuastro also comes with a large set of libraries, so you can write your own programs using Gnuastro's building blocks, see @ref{Review of library fundamentals} for an introduction.
+
+In Gnuastro, no change to any program or library will be committed to its history, before it has been fully documented here first.
+As discussed in @ref{Science and its tools} this is a founding principle of the Gnuastro.
 
 @menu
 * Quick start::                 A quick start to installation.
@@ -783,32 +760,15 @@ history, before it has been fully documented here first. As discussed in
 @cindex GNU Tar
 @cindex Uncompress source
 @cindex Source, uncompress
-The latest official release tarball is always available as
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz,
-@file{gnuastro-latest.tar.gz}}. For better compression (faster download),
-and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html,
-Lzip} compressed tarball is also available at
-@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz,
-@file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details
-on the tarball release@footnote{The Gzip library and program are commonly
-available on most systems. However, Gnuastro recommends Lzip as described
-above and the beta-releases are also only distributed in @file{tar.lz}. You
-can download and install Lzip's source (in @file{.tar.gz} format) from its
-webpage and follow the same process as below: Lzip has no dependencies, so
-simply decompress, then run @command{./configure}, @command{make},
-@command{sudo make install}.}.
-
-
-Let's assume the downloaded tarball is in the @file{TOPGNUASTRO}
-directory. The first two commands below can be used to decompress the
-source. If you download @file{tar.lz} and your Tar implementation doesn't
-recognize Lzip (the second command fails), run the third and fourth
-lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz}
-tarball, you can merge the separate calls to Lzip and Tar (shown in the
-main body of text) into one command by directly piping the output of Lzip
-into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz
-| tar -xf -}}. Note that lines starting with @code{##} don't need to be
-typed.
+The latest official release tarball is always available as @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.gz, @file{gnuastro-latest.tar.gz}}.
+For better compression (faster download), and robust archival features, an @url{http://www.nongnu.org/lzip/lzip.html, Lzip} compressed tarball is also available at @url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, @file{gnuastro-latest.tar.lz}}, see @ref{Release tarball} for more details on the tarball release@footnote{The Gzip library and program are commonly available on most systems.
+However, Gnuastro recommends Lzip as described above and the beta-releases are also only distributed in @file{tar.lz}.
+You can download and install Lzip's source (in @file{.tar.gz} format) from its webpage and follow the same process as below: Lzip has no dependencies, so simply decompress, then run @command{./configure}, @command{make}, @command{sudo make install}.}.
+
+Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
+The first two commands below can be used to decompress the source.
+If you download @file{tar.lz} and your Tar implementation doesn't recognize Lzip (the second command fails), run the third and fourth lines@footnote{In case Tar doesn't directly uncompress your @file{.tar.lz} tarball, you can merge the separate calls to Lzip and Tar (shown in the main body of text) into one command by directly piping the output of Lzip into Tar with a command like this: @command{$ lzip -cd gnuastro-0.5.tar.lz | tar -xf -}}.
+Note that lines starting with @code{##} don't need to be typed.
 
 @example
 ## Go into the download directory.
@@ -822,13 +782,9 @@ $ lzip -d gnuastro-latest.tar.lz
 $ tar xf gnuastro-latest.tar
 @end example
 
-Gnuastro has three mandatory dependencies and some optional dependencies
-for extra functionality, see @ref{Dependencies} for the full list. In
-@ref{Dependencies from package managers} we have prepared the command to
-easily install Gnuastro's dependencies using the package manager of some
-operating systems. When the mandatory dependencies are ready, you can
-configure, compile, check and install Gnuastro on your system with the
-following commands.
+Gnuastro has three mandatory dependencies and some optional dependencies for extra functionality, see @ref{Dependencies} for the full list.
+In @ref{Dependencies from package managers} we have prepared the command to easily install Gnuastro's dependencies using the package manager of some operating systems.
+When the mandatory dependencies are ready, you can configure, compile, check and install Gnuastro on your system with the following commands.
 
 @example
 $ cd gnuastro-X.X                  # Replace X.X with version number.
@@ -839,16 +795,12 @@ $ sudo make install
 @end example
 
 @noindent
-See @ref{Known issues} if you confront any complications. For each program
-there is an `Invoke ProgramName' sub-section in this book which explains
-how the programs should be run on the command-line (for example
-@ref{Invoking asttable}). You can read the same section on the command-line
-by running @command{$ info astprogname} (for example @command{info
-asttable}). The `Invoke ProgramName' sub-section starts with a few examples
-of each program and goes on to explain the invocation details. See
-@ref{Getting help} for all the options you have to get help. In
-@ref{Tutorials} some real life examples of how these programs might be used
-are given.
+See @ref{Known issues} if you confront any complications.
+For each program there is an `Invoke ProgramName' sub-section in this book which explains how the programs should be run on the command-line (for example @ref{Invoking asttable}).
+You can read the same section on the command-line by running @command{$ info astprogname} (for example @command{info asttable}).
+The `Invoke ProgramName' sub-section starts with a few examples of each program and goes on to explain the invocation details.
+See @ref{Getting help} for all the options you have to get help.
+In @ref{Tutorials} some real life examples of how these programs might be used are given.
 
 
 
@@ -860,136 +812,75 @@ are given.
 @node Science and its tools, Your rights, Quick start, Introduction
 @section Science and its tools
 
-History of science indicates that there are always inevitably unseen
-faults, hidden assumptions, simplifications and approximations in all
-our theoretical models, data acquisition and analysis techniques. It
-is precisely these that will ultimately allow future generations to
-advance the existing experimental and theoretical knowledge through
-their new solutions and corrections.
-
-In the past, scientists would gather data and process them individually to
-achieve an analysis thus having a much more intricate knowledge of the data
-and analysis. The theoretical models also required little (if any)
-simulations to compare with the data. Today both methods are becoming
-increasingly more dependent on pre-written software. Scientists are
-dissociating themselves from the intricacies of reducing raw observational
-data in experimentation or from bringing the theoretical models to life in
-simulations. These `intricacies' are precisely those unseen faults, hidden
-assumptions, simplifications and approximations that define scientific
-progress.
+History of science indicates that there are always inevitably unseen faults, hidden assumptions, simplifications and approximations in all our theoretical models, data acquisition and analysis techniques.
+It is precisely these that will ultimately allow future generations to advance the existing experimental and theoretical knowledge through their new solutions and corrections.
+
+In the past, scientists would gather data and process them individually to achieve an analysis thus having a much more intricate knowledge of the data and analysis.
+The theoretical models also required little (if any) simulations to compare with the data.
+Today both methods are becoming increasingly more dependent on pre-written software.
+Scientists are dissociating themselves from the intricacies of reducing raw observational data in experimentation or from bringing the theoretical models to life in simulations.
+These `intricacies' are precisely those unseen faults, hidden assumptions, simplifications and approximations that define scientific progress.
 
 @quotation
 @cindex Anscombe F. J.
-Unfortunately, most persons who have recourse to a computer for
-statistical analysis of data are not much interested either in
-computer programming or in statistical method, being primarily
-concerned with their own proper business. Hence the common use of
-library programs and various statistical packages. ... It's time that
-was changed.
-@author F. J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
+Unfortunately, most persons who have recourse to a computer for statistical analysis of data are not much interested either in computer programming or in statistical method, being primarily concerned with their own proper business.
+Hence the common use of library programs and various statistical packages. ... It's time that was changed.
+@author F.J. Anscombe. The American Statistician, Vol. 27, No. 1. 1973
 @end quotation
 
 @cindex Anscombe's quartet
 @cindex Statistical analysis
-@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet}
-demonstrates how four data sets with widely different shapes (when plotted)
-give nearly identical output from standard regression techniques. Anscombe
-uses this (now famous) quartet, which was introduced in the paper quoted
-above, to argue that ``@emph{Good statistical analysis is not a purely
-routine matter, and generally calls for more than one pass through the
-computer}''. Echoing Anscombe's concern after 44 years, some of the highly
-recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun,
-Nuijten and Goodman), wrote in Nature that:
+@url{http://en.wikipedia.org/wiki/Anscombe%27s_quartet,Anscombe's quartet} demonstrates how four data sets with widely different shapes (when plotted) give nearly identical output from standard regression techniques.
+Anscombe uses this (now famous) quartet, which was introduced in the paper quoted above, to argue that ``@emph{Good statistical analysis is not a purely routine matter, and generally calls for more than one pass through the computer}''.
+Echoing Anscombe's concern after 44 years, some of the highly recognized statisticians of our time (Leek, McShane, Gelman, Colquhoun, Nuijten and Goodman), wrote in Nature that:
 
 @quotation
-We need to appreciate that data analysis is not purely computational and
-algorithmic — it is a human behaviour....Researchers who hunt hard enough
-will turn up a result that fits statistical criteria — but their discovery
-will probably be a false positive.
+We need to appreciate that data analysis is not purely computational and algorithmic -- it is a human behaviour....Researchers who hunt hard enough will turn up a result that fits statistical criteria -- but their discovery will probably be a false positive.
 @author Five ways to fix statistics, Nature, 551, Nov 2017.
 @end quotation
 
-Users of statistical (scientific) methods (software) are therefore not
-passive (objective) agents in their result. Therefore, it is necessary to
-actually understand the method, not just use it as a black box. The
-subjective experience gained by frequently using a method/software is not
-sufficient to claim an understanding of how the tool/method works and how
-relevant it is to the data and analysis. This kind of subjective experience
-is prone to serious misunderstandings about the data, what the
-software/statistical-method really does (especially as it gets more
-complicated), and thus the scientific interpretation of the result. This
-attitude is further encouraged through non-free
-software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}},
-poorly written (or non-existent) scientific software manuals, and
-non-reproducible papers@footnote{Where the authors omit many of the
-analysis/processing ``details'' from the paper by arguing that they would
-make the paper too long/unreadable. However, software engineers have been
-dealing with such issues for a long time. There are thus software
-management solutions that allow us to supplement papers with all the
-details necessary to exactly reproduce the result. For example see
-@url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and
-@url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{
-http://akhlaghi.org/reproducible-science.html, general discussion}.}. This
-approach to scientific software and methods only helps in producing dogmas
-and an ``@emph{obscurantist faith in the expert's special skill, and in his
-personal knowledge and authority}''@footnote{Karl Popper. The logic of
-scientific discovery. 1959. Larger quote is given at the start of the PDF
-(for print) version of this book.}.
+Users of statistical (scientific) methods (software) are therefore not passive (objective) agents in their result.
+Therefore, it is necessary to actually understand the method, not just use it as a black box.
+The subjective experience gained by frequently using a method/software is not sufficient to claim an understanding of how the tool/method works and how relevant it is to the data and analysis.
+This kind of subjective experience is prone to serious misunderstandings about the data, what the software/statistical-method really does (especially as it gets more complicated), and thus the scientific interpretation of the result.
+This attitude is further encouraged through non-free software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly written (or non-existent) scientific software manuals, and non-reproducible papers@footnote{Where the authors omit many of the analysis/processing ``details'' from the paper by arguing that they would make the paper too long/unreadable.
+However, software engineers have been dealing with such issues for a long time.
+There are thus software management solutions that allow us to supplement papers with all the details necessary to exactly reproduce the result.
+For example see @url{https://doi.org/10.5281/zenodo.1163746, zenodo.1163746} and @url{https://doi.org/10.5281/zenodo.1164774, zenodo.1164774} and this @url{http://akhlaghi.org/reproducible-science.html, general discussion}.}.
+This approach to scientific software and methods only helps in producing dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in his personal knowledge and authority}''@footnote{Karl Popper. The logic of scientific discovery. 1959.
+Larger quote is given at the start of the PDF (for print) version of this book.}.
 
 @quotation
 @cindex Douglas Rushkoff
-Program or be programmed. Choose the former, and you gain access to
-the control panel of civilization. Choose the latter, and it could be
-the last real choice you get to make.
+Program or be programmed.
+Choose the former, and you gain access to the control panel of civilization.
+Choose the latter, and it could be the last real choice you get to make.
 @author Douglas Rushkoff. Program or be programmed, O/R Books (2010).
 @end quotation
 
-It is obviously impractical for any one human being to gain the intricate
-knowledge explained above for every step of an analysis. On the other hand,
-scientific data can be large and numerous, for example images produced by
-telescopes in astronomy. This requires efficient algorithms. To make things
-worse, natural scientists have generally not been trained in the advanced
-software techniques, paradigms and architecture that are taught in computer
-science or engineering courses and thus used in most software. The GNU
-Astronomy Utilities are an effort to tackle this issue.
-
-Gnuastro is not just a software, this book is as important to the idea
-behind Gnuastro as the source code (software). This book has tried to learn
-from the success of the ``Numerical Recipes'' book in educating those who
-are not software engineers and computer scientists but still heavy users of
-computational algorithms, like astronomers. There are two major
-differences.
-
-The first difference is that Gnuastro's code and the background information
-are segregated: the code is moved within the actual Gnuastro software
-source code and the underlying explanations are given here in this book. In
-the source code, every non-trivial step is heavily commented and correlated
-with this book, it follows the same logic of this book, and all the
-programs follow a similar internal data, function and file structure, see
-@ref{Program source}. Complementing the code, this book focuses on
-thoroughly explaining the concepts behind those codes (history,
-mathematics, science, software and usage advise when necessary) along with
-detailed instructions on how to run the programs. At the expense of
-frustrating ``professionals'' or ``experts'', this book and the comments in
-the code also intentionally avoid jargon and abbreviations. The source code
-and this book are thus intimately linked, and when considered as a single
-entity can be thought of as a real (an actual software accompanying the
-algorithms) ``Numerical Recipes'' for astronomy.
+It is obviously impractical for any one human being to gain the intricate knowledge explained above for every step of an analysis.
+On the other hand, scientific data can be large and numerous, for example images produced by telescopes in astronomy.
+This requires efficient algorithms.
+To make things worse, natural scientists have generally not been trained in the advanced software techniques, paradigms and architecture that are taught in computer science or engineering courses and thus used in most software.
+The GNU Astronomy Utilities are an effort to tackle this issue.
+
+Gnuastro is not just a software, this book is as important to the idea behind Gnuastro as the source code (software).
+This book has tried to learn from the success of the ``Numerical Recipes'' book in educating those who are not software engineers and computer scientists but still heavy users of computational algorithms, like astronomers.
+There are two major differences.
+
+The first difference is that Gnuastro's code and the background information are segregated: the code is moved within the actual Gnuastro software source code and the underlying explanations are given here in this book.
+In the source code, every non-trivial step is heavily commented and correlated with this book, it follows the same logic of this book, and all the programs follow a similar internal data, function and file structure, see @ref{Program source}.
+Complementing the code, this book focuses on thoroughly explaining the concepts behind those codes (history, mathematics, science, software and usage advise when necessary) along with detailed instructions on how to run the programs.
+At the expense of frustrating ``professionals'' or ``experts'', this book and the comments in the code also intentionally avoid jargon and abbreviations.
+The source code and this book are thus intimately linked, and when considered as a single entity can be thought of as a real (an actual software accompanying the algorithms) ``Numerical Recipes'' for astronomy.
 
 @cindex GNU free documentation license
 @cindex GNU General Public License (GPL)
-The second major, and arguably more important, difference is that
-``Numerical Recipes'' does not allow you to distribute any code that you
-have learned from it. In other words, it does not allow you to release your
-software's source code if you have used their codes, you can only publicly
-release binaries (a black box) to the community. Therefore, while it
-empowers the privileged individual who has access to it, it exacerbates
-social ignorance. Exactly at the opposite end of the spectrum, Gnuastro's
-source code is released under the GNU general public license (GPL) and this
-book is released under the GNU free documentation license. You are
-therefore free to distribute any software you create using parts of
-Gnuastro's source code or text, or figures from this book, see @ref{Your
-rights}.
+The second major, and arguably more important, difference is that ``Numerical Recipes'' does not allow you to distribute any code that you have learned from it.
+In other words, it does not allow you to release your software's source code if you have used their codes, you can only publicly release binaries (a black box) to the community.
+Therefore, while it empowers the privileged individual who has access to it, it exacerbates social ignorance.
+Exactly at the opposite end of the spectrum, Gnuastro's source code is released under the GNU general public license (GPL) and this book is released under the GNU free documentation license.
+You are therefore free to distribute any software you create using parts of Gnuastro's source code or text, or figures from this book, see @ref{Your rights}.
 
 With these principles in mind, Gnuastro's developers aim to impose the
 minimum requirements on you (in computer science, engineering and even the
@@ -999,77 +890,41 @@ philosophy}.
 
 @cindex Brahe, Tycho
 @cindex Galileo, Galilei
-Without prior familiarity and experience with optics, it is hard to imagine
-how, Galileo could have come up with the idea of modifying the Dutch
-military telescope optics to use in astronomy. Astronomical objects could
-not be seen with the Dutch military design of the telescope. In other
-words, it is unlikely that Galileo could have asked a random optician to
-make modifications (not understood by Galileo) to the Dutch design, to do
-something no astronomer of the time took seriously. In the paradigm of the
-day, what could be the purpose of enlarging geometric spheres (planets) or
-points (stars)? In that paradigm only the position and movement of the
-heavenly bodies was important, and that had already been accurately studied
-(recently by Tycho Brahe).
-
-In the beginning of his ``The Sidereal Messenger'' (published in 1610) he
-cautions the readers on this issue and @emph{before} describing his
-results/observations, Galileo instructs us on how to build a suitable
-instrument. Without a detailed description of @emph{how} he made his tools
-and done his observations, no reasonable person would believe his
-results. Before he actually saw the moons of Jupiter, the mountains on the
-Moon or the crescent of Venus, Galileo was “evasive”@footnote{Galileo
-G. (Translated by Maurice A. Finocchiaro). @emph{The essential
-Galileo}. Hackett publishing company, first edition, 2008.} to
-Kepler. Science is defined by its tools/methods, @emph{not} its raw
-results@footnote{For example, take the following two results on the age of
-the universe: roughly 14 billion years (suggested by the current consensus
-of the standard model of cosmology) and less than 10,000 years (suggested
-from some interpretations of the Bible). Both these numbers are
-@emph{results}. What distinguishes these two results, is the tools/methods
-that were used to derive them. Therefore, as the term ``Scientific method''
-also signifies, a scientific statement it defined by its @emph{method}, not
-its result.}.
-
-The same is true today: science cannot progress with a black box, or poorly
-released code. The source code of a research is the new (abstractified)
-communication language in science, understandable by humans @emph{and}
-computers. Source code (in any programming language) is a language/notation
-designed to express all the details that would be too
-tedious/long/frustrating to report in spoken languages like English,
-similar to mathematic notation.
-
-Today, the quality of the source code that goes into a scientific result
-(and the distribution of that code) is as critical to scientific vitality
-and integrity, as the quality of its written language/English used in
-publishing/distributing its paper. A scientific paper will not even be
-reviewed by any respectable journal if its written in a poor
-language/English. A similar level of quality assessment is thus
-increasingly becoming necessary regarding the codes/methods used to derive
-the results of a scientific paper.
+Without prior familiarity and experience with optics, it is hard to imagine how, Galileo could have come up with the idea of modifying the Dutch military telescope optics to use in astronomy.
+Astronomical objects could not be seen with the Dutch military design of the telescope.
+In other words, it is unlikely that Galileo could have asked a random optician to make modifications (not understood by Galileo) to the Dutch design, to do something no astronomer of the time took seriously.
+In the paradigm of the day, what could be the purpose of enlarging geometric spheres (planets) or points (stars)? In that paradigm only the position and movement of the heavenly bodies was important, and that had already been accurately studied (recently by Tycho Brahe).
+
+In the beginning of his ``The Sidereal Messenger'' (published in 1610) he cautions the readers on this issue and @emph{before} describing his results/observations, Galileo instructs us on how to build a suitable instrument.
+Without a detailed description of @emph{how} he made his tools and done his observations, no reasonable person would believe his results.
+Before he actually saw the moons of Jupiter, the mountains on the Moon or the crescent of Venus, Galileo was “evasive”@footnote{Galileo G. (Translated by Maurice A. Finocchiaro). @emph{The essential Galileo}. Hackett publishing company, first edition, 2008.} to Kepler.
+Science is defined by its tools/methods, @emph{not} its raw results@footnote{For example, take the following two results on the age of the universe: roughly 14 billion years (suggested by the current consensus of the standard model of cosmology) and less than 10,000 years (suggested from some interpretations of the Bible).
+Both these numbers are @emph{results}.
+What distinguishes these two results, is the tools/methods that were used to derive them.
+Therefore, as the term ``Scientific method'' also signifies, a scientific statement it defined by its @emph{method}, not its result.}.
+
+The same is true today: science cannot progress with a black box, or poorly released code.
+The source code of a research is the new (abstractified) communication language in science, understandable by humans @emph{and} computers.
+Source code (in any programming language) is a language/notation designed to express all the details that would be too tedious/long/frustrating to report in spoken languages like English, similar to mathematic notation.
+
+Today, the quality of the source code that goes into a scientific result (and the distribution of that code) is as critical to scientific vitality and integrity, as the quality of its written language/English used in publishing/distributing its paper.
+A scientific paper will not even be reviewed by any respectable journal if its written in a poor language/English.
+A similar level of quality assessment is thus increasingly becoming necessary regarding the codes/methods used to derive the results of a scientific paper.
 
 @cindex Ken Thomson
 @cindex Stroustrup, Bjarne
-Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without
-understanding software, you are reduced to believing in magic}''.  Ken
-Thomson (the designer or the Unix operating system) says ``@emph{I abhor a
-system designed for the `user' if that word is a coded pejorative meaning
-`stupid and unsophisticated'}.'' Certainly no scientist (user of a
-scientific software) would want to be considered a believer in magic, or
-stupid and unsophisticated.
-
-This can happen when scientists get too distant from the raw data and
-methods, and are mainly discussing results. In other words, when they feel
-they have tamed Nature into their own high-level (abstract) models
-(creations), and are mainly concerned with scaling up, or industrializing
-those results. Roughly five years before special relativity, and about two
-decades before quantum mechanics fundamentally changed Physics, Lord Kelvin
-is quoted as saying:
+Bjarne Stroustrup (creator of the C++ language) says: ``@emph{Without understanding software, you are reduced to believing in magic}''.
+Ken Thomson (the designer or the Unix operating system) says ``@emph{I abhor a system designed for the `user' if that word is a coded pejorative meaning `stupid and unsophisticated'}.'' Certainly no scientist (user of a scientific software) would want to be considered a believer in magic, or stupid and unsophisticated.
+
+This can happen when scientists get too distant from the raw data and methods, and are mainly discussing results.
+In other words, when they feel they have tamed Nature into their own high-level (abstract) models (creations), and are mainly concerned with scaling up, or industrializing those results.
+Roughly five years before special relativity, and about two decades before quantum mechanics fundamentally changed Physics, Lord Kelvin is quoted as saying:
 
 @quotation
 @cindex Lord Kelvin
 @cindex William Thomson
-There is nothing new to be discovered in physics now. All that remains
-is more and more precise measurement.
+There is nothing new to be discovered in physics now.
+All that remains is more and more precise measurement.
 @author William Thomson (Lord Kelvin), 1900
 @end quotation
 
@@ -1079,36 +934,21 @@ A few years earlier Albert. A. Michelson made the following statement:
 @quotation
 @cindex Albert. A. Michelson
 @cindex Michelson, Albert. A.
-The more important fundamental laws and facts of physical science have
-all been discovered, and these are now so firmly established that the
-possibility of their ever being supplanted in consequence of new
-discoveries is exceedingly remote.... Our future discoveries must be
-looked for in the sixth place of decimals.
+The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote....
+Our future discoveries must be looked for in the sixth place of decimals.
 @author Albert. A. Michelson, dedication of Ryerson Physics Lab, U. Chicago 1894
 @end quotation
 
 @cindex Puzzle solving scientist
 @cindex Scientist, puzzle solver
-If scientists are considered to be more than mere ``puzzle''
-solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific
-Revolutions}, University of Chicago Press, 1962.} (simply adding to the
-decimals of existing values or observing a feature in 10, 100, or 100000
-more galaxies or stars, as Kelvin and Michelson clearly believed), they
-cannot just passively sit back and uncritically repeat the previous
-(observational or theoretical) methods/tools on new data. Today there is a
-wealth of raw telescope images ready (mostly for free) at the finger tips
-of anyone who is interested with a fast enough internet connection to
-download them. The only thing lacking is new ways to analyze this data and
-dig out the treasure that is lying hidden in them to existing methods and
-techniques.
+If scientists are considered to be more than mere ``puzzle'' solvers@footnote{Thomas S. Kuhn. @emph{The Structure of Scientific Revolutions}, University of Chicago Press, 1962.} (simply adding to the decimals of existing values or observing a feature in 10, 100, or 100000 more galaxies or stars, as Kelvin and Michelson clearly believed), they cannot just passively sit back and uncritically repeat the previous (observational or theoretical) methods/tools on new data.
+Today there is a wealth of raw telescope images ready (mostly for free) at the finger tips of anyone who is interested with a fast enough internet connection to download them.
+The only thing lacking is new ways to analyze this data and dig out the treasure that is lying hidden in them to existing methods and techniques.
 
 @quotation
 @cindex Jaynes E. T.
-New data that we insist on analyzing in terms of old ideas (that is,
-old models which are not questioned) cannot lead us out of the old
-ideas. However many data we record and analyze, we may just keep
-repeating the same old errors, missing the same crucially important
-things that the experiment was competent to find.
+New data that we insist on analyzing in terms of old ideas (that is, old models which are not questioned) cannot lead us out of the old ideas.
+However many data we record and analyze, we may just keep repeating the same old errors, missing the same crucially important things that the experiment was competent to find.
 @author Jaynes, Probability theory, the logic of science. Cambridge U. Press (2003).
 @end quotation
 
@@ -1119,52 +959,29 @@ things that the experiment was competent to find.
 @section Your rights
 
 @cindex GNU Texinfo
-The paragraphs below, in this section, belong to the GNU
-Texinfo@footnote{Texinfo is the GNU documentation system. It is used
-to create this book in all the various formats.} manual and are not
-written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy
-Utilities'' or ``Gnuastro'' because they are released under the same
-licenses and it is beautifully written to inform you of your rights.
+The paragraphs below, in this section, belong to the GNU Texinfo@footnote{Texinfo is the GNU documentation system.
+It is used to create this book in all the various formats.} manual and are not written by us! The name ``Texinfo'' is just changed to ``GNU Astronomy Utilities'' or ``Gnuastro'' because they are released under the same licenses and it is beautifully written to inform you of your rights.
 
 @cindex Free software
 @cindex Copyright
 @cindex Public domain
-GNU Astronomy Utilities is ``free software''; this means that everyone
-is free to use it and free to redistribute it on certain
-conditions. Gnuastro is not in the public domain; it is copyrighted
-and there are restrictions on its distribution, but these restrictions
-are designed to permit everything that a good cooperating citizen
-would want to do.  What is not allowed is to try to prevent others
-from further sharing any version of Gnuastro that they might get from
-you.
-
-Specifically, we want to make sure that you have the right to give
-away copies of the programs that relate to Gnuastro, that you receive
-the source code or else can get it if you want it, that you can change
-these programs or use pieces of them in new free programs, and that
-you know you can do these things.
-
-To make sure that everyone has such rights, we have to forbid you to
-deprive anyone else of these rights.  For example, if you distribute
-copies of the Gnuastro related programs, you must give the recipients
-all the rights that you have.  You must make sure that they, too,
-receive or can get the source code.  And you must tell them their
-rights.
-
-Also, for our own protection, we must make certain that everyone finds
-out that there is no warranty for the programs that relate to Gnuastro.
-If these programs are modified by someone else and passed on, we want
-their recipients to know that what they have is not what we distributed,
-so that any problems introduced by others will not reflect on our
-reputation.
+GNU Astronomy Utilities is ``free software''; this means that everyone is free to use it and free to redistribute it on certain conditions.
+Gnuastro is not in the public domain; it is copyrighted and there are restrictions on its distribution, but these restrictions are designed to permit everything that a good cooperating citizen would want to do.
+What is not allowed is to try to prevent others from further sharing any version of Gnuastro that they might get from you.
+
+Specifically, we want to make sure that you have the right to give away copies of the programs that relate to Gnuastro, that you receive the source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.
+
+To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights.
+For example, if you distribute copies of the Gnuastro related programs, you must give the recipients all the rights that you have.
+You must make sure that they, too, receive or can get the source code.
+And you must tell them their rights.
+
+Also, for our own protection, we must make certain that everyone finds out that there is no warranty for the programs that relate to Gnuastro.
+If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.
 
 @cindex GNU General Public License (GPL)
 @cindex GNU Free Documentation License
-The full text of the licenses for the Gnuastro book and software can be
-respectively found in @ref{GNU General Public License}@footnote{Also
-available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free
-Doc. License}@footnote{Also available in
-@url{http://www.gnu.org/copyleft/fdl.html}}.
+The full text of the licenses for the Gnuastro book and software can be respectively found in @ref{GNU General Public License}@footnote{Also available in @url{http://www.gnu.org/copyleft/gpl.html}} and @ref{GNU Free Doc. License}@footnote{Also available in @url{http://www.gnu.org/copyleft/fdl.html}}.
 
 
 
@@ -1173,24 +990,17 @@ Doc. License}@footnote{Also available in
 
 @cindex Names, programs
 @cindex Program names
-Gnuastro is a package of independent programs and a collection of
-libraries, here we are mainly concerned with the programs. Each program has
-an official name which consists of one or two words, describing what they
-do. The latter are printed with no space, for example NoiseChisel or
-Crop. On the command-line, you can run them with their executable names
-which start with an @file{ast} and might be an abbreviation of the official
-name, for example @file{astnoisechisel} or @file{astcrop}, see
-@ref{Executable names}.
+Gnuastro is a package of independent programs and a collection of libraries, here we are mainly concerned with the programs.
+Each program has an official name which consists of one or two words, describing what they do.
+The latter are printed with no space, for example NoiseChisel or Crop.
+On the command-line, you can run them with their executable names which start with an @file{ast} and might be an abbreviation of the official name, for example @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
 
 @pindex ProgramName
 @pindex @file{astprogname}
-We will use ``ProgramName'' for a generic official program name and
-@file{astprogname} for a generic executable name. In this book, the
-programs are classified based on what they do and thoroughly explained. An
-alphabetical list of the programs that are installed on your system with
-this installation are given in @ref{Gnuastro programs list}. That list also
-contains the executable names and version numbers along with a one line
-description.
+We will use ``ProgramName'' for a generic official program name and @file{astprogname} for a generic executable name.
+In this book, the programs are classified based on what they do and thoroughly explained.
+An alphabetical list of the programs that are installed on your system with this installation are given in @ref{Gnuastro programs list}.
+That list also contains the executable names and version numbers along with a one line description.
 
 
 
@@ -1202,54 +1012,31 @@ description.
 @cindex Major version number
 @cindex Minor version number
 @cindex Mailing list: info-gnuastro
-Gnuastro can have two formats of version numbers, for official and
-unofficial releases. Official Gnuastro releases are announced on the
-@command{info-gnuastro} mailing list, they have a version control tag in
-Gnuastro's development history, and their version numbers are formatted
-like ``@file{A.B}''. @file{A} is a major version number, marking a
-significant planned achievement (for example see @ref{GNU Astronomy
-Utilities 1.0}), while @file{B} is a minor version number, see below for
-more on the distinction. Note that the numbers are not decimals, so version
-2.34 is much more recent than version 2.5, which is not equal to 2.50.
-
-Gnuastro also allows a unique version number for unofficial
-releases. Unofficial releases can mark any point in Gnuastro's development
-history. This is done to allow astronomers to easily use any point in the
-version controlled history for their data-analysis and research
-publication. See @ref{Version controlled source} for a complete
-introduction. This section is not just for developers and is intended to
-straightforward and easy to read, so please have a look if you are
-interested in the cutting-edge. This unofficial version number is a
-meaningful and easy to read string of characters, unique to that particular
-point of history. With this feature, users can easily stay up to date with
-the most recent bug fixes and additions that are committed between official
-releases.
-
-The unofficial version number is formatted like: @file{A.B.C-D}. @file{A}
-and @file{B} are the most recent official version number. @file{C} is the
-number of commits that have been made after version @file{A.B}. @file{D} is
-the first 4 or 5 characters of the commit hash number@footnote{Each point
-in Gnuastro's history is uniquely identified with a 40 character long hash
-which is created from its contents and previous history for example:
-@code{5b17501d8f29ba3cd610673261e6e2229c846d35}. So the string @file{D} in
-the version for this commit could be @file{5b17}, or
-@file{5b175}.}. Therefore, the unofficial version number
-`@code{3.92.8-29c8}', corresponds to the 8th commit after the official
-version @code{3.92} and its commit hash begins with @code{29c8}. The
-unofficial version number is sort-able (unlike the raw hash) and as shown
-above is descriptive of the state of the unofficial release. Of course an
-official release is preferred for publication (since its tarballs are
-easily available and it has gone through more tests, making it more
-stable), so if an official release is announced prior to your publication's
-final review, please consider updating to the official release.
-
-The major version number is set by a major goal which is defined by the
-developers and user community before hand, for example see @ref{GNU
-Astronomy Utilities 1.0}. The incremental work done in minor releases are
-commonly small steps in achieving the major goal. Therefore, there is no
-limit on the number of minor releases and the difference between the
-(hypothetical) versions 2.927 and 3.0 can be a small (negligible to the
-user) improvement that finalizes the defined goals.
+Gnuastro can have two formats of version numbers, for official and unofficial releases.
+Official Gnuastro releases are announced on the @command{info-gnuastro} mailing list, they have a version control tag in Gnuastro's development history, and their version numbers are formatted like ``@file{A.B}''.
+@file{A} is a major version number, marking a significant planned achievement (for example see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor version number, see below for more on the distinction.
+Note that the numbers are not decimals, so version 2.34 is much more recent than version 2.5, which is not equal to 2.50.
+
+Gnuastro also allows a unique version number for unofficial releases.
+Unofficial releases can mark any point in Gnuastro's development history.
+This is done to allow astronomers to easily use any point in the version controlled history for their data-analysis and research publication.
+See @ref{Version controlled source} for a complete introduction.
+This section is not just for developers and is intended to straightforward and easy to read, so please have a look if you are interested in the cutting-edge.
+This unofficial version number is a meaningful and easy to read string of characters, unique to that particular point of history.
+With this feature, users can easily stay up to date with the most recent bug fixes and additions that are committed between official releases.
+
+The unofficial version number is formatted like: @file{A.B.C-D}.
+@file{A} and @file{B} are the most recent official version number.
+@file{C} is the number of commits that have been made after version @file{A.B}.
+@file{D} is the first 4 or 5 characters of the commit hash number@footnote{Each point in Gnuastro's history is uniquely identified with a 40 character long hash which is created from its contents and previous history for example: @code{5b17501d8f29ba3cd610673261e6e2229c846d35}.
+So the string @file{D} in the version for this commit could be @file{5b17}, or @file{5b175}.}.
+Therefore, the unofficial version number `@code{3.92.8-29c8}', corresponds to the 8th commit after the official version @code{3.92} and its commit hash begins with @code{29c8}.
+The unofficial version number is sort-able (unlike the raw hash) and as shown above is descriptive of the state of the unofficial release.
+Of course an official release is preferred for publication (since its tarballs are easily available and it has gone through more tests, making it more stable), so if an official release is announced prior to your publication's final review, please consider updating to the official release.
+
+The major version number is set by a major goal which is defined by the developers and user community before hand, for example see @ref{GNU Astronomy Utilities 1.0}.
+The incremental work done in minor releases are commonly small steps in achieving the major goal.
+Therefore, there is no limit on the number of minor releases and the difference between the (hypothetical) versions 2.927 and 3.0 can be a small (negligible to the user) improvement that finalizes the defined goals.
 
 @menu
 * GNU Astronomy Utilities 1.0::  Plans for version 1.0 release
@@ -1258,34 +1045,21 @@ user) improvement that finalizes the defined goals.
 @node GNU Astronomy Utilities 1.0,  , Version numbering, Version numbering
 @subsection GNU Astronomy Utilities 1.0
 @cindex Gnuastro major version number
-Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a
-complete system for data manipulation and analysis at least similar to
-IRAF@footnote{@url{http://iraf.noao.edu/}}. So an astronomer can take all
-the standard data analysis steps (starting from raw data to the final
-reduced product and standard post-reduction tools) with the various
-programs in Gnuastro.
+Currently (prior to Gnuastro 1.0), the aim of Gnuastro is to have a complete system for data manipulation and analysis at least similar to IRAF@footnote{@url{http://iraf.noao.edu/}}.
+So an astronomer can take all the standard data analysis steps (starting from raw data to the final reduced product and standard post-reduction tools) with the various programs in Gnuastro.
 
 @cindex Shell script
-The maintainers of each camera or detector on a telescope can provide a
-completely transparent shell script or Makefile to the observer for data
-analysis. This script can set configuration files for all the required
-programs to work with that particular camera. The script can then run the
-proper programs in the proper sequence. The user/observer can easily follow
-the standard shell script to understand (and modify) each step and the
-parameters used easily. Bash (or other modern GNU/Linux shell scripts) is
-powerful and made for this gluing job. This will simultaneously improve
-performance and transparency. Shell scripting (or Makefiles) are also basic
-constructs that are easy to learn and readily available as part of the
-Unix-like operating systems. If there is no program to do a desired step,
-Gnuastro's libraries can be used to build specific programs.
+The maintainers of each camera or detector on a telescope can provide a 
completely transparent shell script or Makefile to the observer for data 
analysis.
+This script can set configuration files for all the required programs to work 
with that particular camera.
+The script can then run the proper programs in the proper sequence.
+The user/observer can easily follow the standard shell script to understand (and modify) each step and the parameters used.
+Bash (or other modern GNU/Linux shells) is powerful and made for this gluing job.
+This will simultaneously improve performance and transparency.
+Shell scripts (or Makefiles) are also basic constructs that are easy to learn and readily available as part of Unix-like operating systems.
+If there is no program to do a desired step, Gnuastro's libraries can be used to build specific programs.
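+
+As a rough sketch of such a camera script (the file names, the @file{camera.conf} configuration file, and the choice and order of programs are only hypothetical assumptions for illustration):
+
+@example
+#!/bin/bash
+## Hypothetical reduction steps for one camera: each program reads
+## its camera-specific options (for example the crop region) from
+## the shared configuration file.
+astcrop        raw.fits      --config=camera.conf --output=trimmed.fits
+astnoisechisel trimmed.fits  --config=camera.conf --output=detected.fits
+astmkcatalog   detected.fits --config=camera.conf --output=catalog.fits
+@end example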
 
-The main factor is that all observatories or projects can freely contribute
-to Gnuastro and all simultaneously benefit from it (since it doesn't belong
-to any particular one of them), much like how for-profit organizations (for
-example RedHat, or Intel and many others) are major contributors to free
-and open source software for their shared benefit. Gnuastro's copyright has
-been fully awarded to GNU, so it doesn't belong to any particular
-astronomer or astronomical facility or project.
+The main factor is that all observatories or projects can freely contribute to Gnuastro and all simultaneously benefit from it (since it doesn't belong to any particular one of them), much like how for-profit organizations (for example Red Hat or Intel, among many others) are major contributors to free and open source software for their shared benefit.
+Gnuastro's copyright has been fully awarded to GNU, so it doesn't belong to 
any particular astronomer or astronomical facility or project.
 
 
 
@@ -1294,24 +1068,15 @@ astronomer or astronomical facility or project.
 @node New to GNU/Linux?, Report a bug, Version numbering, Introduction
 @section New to GNU/Linux?
 
-Some astronomers initially install and use a GNU/Linux operating system
-because their necessary tools can only be installed in this environment.
-However, the transition is not necessarily easy. To encourage you in
-investing the patience and time to make this transition, and actually enjoy
-it, we will first start with a basic introduction to GNU/Linux operating
-systems. Afterwards, in @ref{Command-line interface} we'll discuss the
-wonderful benefits of the command-line interface, how it beautifully
-complements the graphic user interface, and why it is worth the (apparently
-steep) learning curve. Finally a complete chapter (@ref{Tutorials}) is
-devoted to real world scenarios of using Gnuastro (on the
-command-line). Therefore if you don't yet feel comfortable with the
-command-line we strongly recommend going through that chapter after
-finishing this section.
-
-You might have already noticed that we are not using the name ``Linux'',
-but ``GNU/Linux''. Please take the time to have a look at the following
-essays and FAQs for a complete understanding of this very important
-distinction.
+Some astronomers initially install and use a GNU/Linux operating system 
because their necessary tools can only be installed in this environment.
+However, the transition is not necessarily easy.
+To encourage you to invest the patience and time to make this transition, and actually enjoy it, we will start with a basic introduction to GNU/Linux operating systems.
+Afterwards, in @ref{Command-line interface} we'll discuss the wonderful 
benefits of the command-line interface, how it beautifully complements the 
graphic user interface, and why it is worth the (apparently steep) learning 
curve.
+Finally, a complete chapter (@ref{Tutorials}) is devoted to real-world scenarios of using Gnuastro (on the command-line).
+Therefore, if you don't yet feel comfortable with the command-line, we strongly recommend going through that chapter after finishing this section.
+
+You might have already noticed that we are not using the name ``Linux'', but 
``GNU/Linux''.
+Please take the time to have a look at the following essays and FAQs for a 
complete understanding of this very important distinction.
 
 @itemize
 
@@ -1333,20 +1098,12 @@ distinction.
 @cindex GNU/Linux
 @cindex GNU C library
 @cindex GNU Compiler Collection
-In short, the Linux kernel@footnote{In Unix-like operating systems, the
-kernel connects software and hardware worlds.} is built using the GNU C
-library (glibc) and GNU compiler collection (gcc). The Linux kernel
-software alone is just a means for other software to access the hardware
-resources, it is useless alone: to say “running Linux”, is like saying
-“driving your carburetor”.
-
-To have an operating system, you need lower-level (to build the kernel),
-and higher-level (to use it) software packages. The majority of such
-software in most Unix-like operating systems are GNU software: ``the whole
-system is basically GNU with Linux loaded''. Therefore to acknowledge GNU's
-instrumental role in the creation and usage of the Linux kernel and the
-operating systems that use it, we should call these operating systems
-``GNU/Linux''.
+In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel 
connects software and hardware worlds.} is built using the GNU C library 
(glibc) and GNU compiler collection (gcc).
+The Linux kernel software alone is just a means for other software to access the hardware resources; it is useless alone: to say ``running Linux'' is like saying ``driving your carburetor''.
+
+To have an operating system, you need lower-level (to build the kernel), and 
higher-level (to use it) software packages.
+The majority of such software in most Unix-like operating systems is GNU software: ``the whole system is basically GNU with Linux loaded''.
+Therefore, to acknowledge GNU's instrumental role in the creation and usage of the Linux kernel and the operating systems that use it, we should call these operating systems ``GNU/Linux''.
 
 
 @menu
@@ -1360,113 +1117,67 @@ operating systems that use it, we should call these 
operating systems
 @cindex Command-line user interface
 @cindex GUI: graphic user interface
 @cindex CLI: command-line user interface
-One aspect of Gnuastro that might be a little troubling to new GNU/Linux
-users is that (at least for the time being) it only has a command-line user
-interface (CLI). This might be contrary to the mostly graphical user
-interface (GUI) experience with proprietary operating systems. Since the
-various actions available aren't always on the screen, the command-line
-interface can be complicated, intimidating, and frustrating for a
-first-time user. This is understandable and also experienced by anyone who
-started using the computer (from childhood) in a graphical user interface
-(this includes most of Gnuastro's authors). Here we hope to convince you of
-the unique benefits of this interface which can greatly enhance your
-productivity while complementing your GUI experience.
+One aspect of Gnuastro that might be a little troubling to new GNU/Linux users 
is that (at least for the time being) it only has a command-line user interface 
(CLI).
+This might be contrary to the mostly graphical user interface (GUI) experience 
with proprietary operating systems.
+Since the various actions available aren't always on the screen, the 
command-line interface can be complicated, intimidating, and frustrating for a 
first-time user.
+This is understandable and also experienced by anyone who started using the 
computer (from childhood) in a graphical user interface (this includes most of 
Gnuastro's authors).
+Here we hope to convince you of the unique benefits of this interface which 
can greatly enhance your productivity while complementing your GUI experience.
 
 @cindex GNOME 3
-Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based
-operating systems now have an advanced and useful GUI. Since the GUI was
-created long after the command-line, some wrongly consider the command line
-to be obsolete. Both interfaces are useful for different tasks. For example
-you can't view an image, video, pdf document or web page on the
-command-line. On the other hand you can't reproduce your results easily in
-the GUI. Therefore they should not be regarded as rivals but as
-complementary user interfaces, here we will outline how the CLI can be
-useful in scientific programs.
-
-You can think of the GUI as a veneer over the CLI to facilitate a small
-subset of all the possible CLI operations. Each click you do on the GUI,
-can be thought of as internally running a different CLI command. So
-asymptotically (if a good designer can design a GUI which is able to show
-you all the possibilities to click on) the GUI is only as powerful as the
-command-line. In practice, such graphical designers are very hard to find
-for every program, so the GUI operations are always a subset of the
-internal CLI commands. For programs that are only made for the GUI, this
-results in not including lots of potentially useful operations. It also
-results in `interface design' to be a crucially important part of any GUI
-program. Scientists don't usually have enough resources to hire a graphical
-designer, also the complexity of the GUI code is far more than CLI code,
-which is harmful for a scientific software, see @ref{Science and its
-tools}.
+Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based 
operating systems now have an advanced and useful GUI.
+Since the GUI was created long after the command-line, some wrongly consider 
the command line to be obsolete.
+Both interfaces are useful for different tasks.
+For example, you can't view an image, video, PDF document or web page on the command-line.
+On the other hand you can't reproduce your results easily in the GUI.
+Therefore they should not be regarded as rivals, but as complementary user interfaces; here we will outline how the CLI can be useful in scientific programs.
+
+You can think of the GUI as a veneer over the CLI to facilitate a small subset 
of all the possible CLI operations.
+Each click you do on the GUI can be thought of as internally running a different CLI command.
+So asymptotically (if a good designer can design a GUI which is able to show 
you all the possibilities to click on) the GUI is only as powerful as the 
command-line.
+In practice, such graphical designers are very hard to find for every program, 
so the GUI operations are always a subset of the internal CLI commands.
+For programs that are only made for the GUI, this results in not including 
lots of potentially useful operations.
+It also makes `interface design' a crucially important part of any GUI program.
+Scientists don't usually have enough resources to hire a graphical designer; also, the complexity of GUI code is far greater than that of CLI code, which is harmful for scientific software, see @ref{Science and its tools}.
 
 @cindex GUI: repeating operations
-For programs that have a GUI, one action on the GUI (moving and clicking a
-mouse, or tapping a touchscreen) might be more efficient and easier than
-its CLI counterpart (typing the program name and your desired
-configuration). However, if you have to repeat that same action more than
-once, the GUI will soon become frustrating and prone to errors. Unless the
-designers of a particular program decided to design such a system for a
-particular GUI action, there is no general way to run any possible series
-of actions automatically on the GUI.
+For programs that have a GUI, one action on the GUI (moving and clicking a 
mouse, or tapping a touchscreen) might be more efficient and easier than its 
CLI counterpart (typing the program name and your desired configuration).
+However, if you have to repeat that same action more than once, the GUI will 
soon become frustrating and prone to errors.
+Unless the designers of a particular program decided to design such a system 
for a particular GUI action, there is no general way to run any possible series 
of actions automatically on the GUI.
 
 @cindex GNU Bash
 @cindex Reproducible results
 @cindex CLI: repeating operations
-On the command-line, you can run any series of of actions which can come
-from various CLI capable programs you have decided your self in any
-possible permutation with one command@footnote{By writing a shell script
-and running it, for example see the tutorials in @ref{Tutorials}.}. This
-allows for much more creativity and exact reproducibility that is not
-possible to a GUI user. For technical and scientific operations, where the
-same operation (using various programs) has to be done on a large set of
-data files, this is crucially important. It also allows exact
-reproducibility which is a foundation principle for scientific results. The
-most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash,
-we strongly encourage you to put aside several hours and go through this
-beautifully explained web page:
-@url{https://flossmanuals.net/command-line/}. You don't need to read or
-even fully understand the whole thing, only a general knowledge of the
-first few chapters are enough to get you going.
-
-Since the operations in the GUI are limited and they are visible, reading a
-manual is not that important in the GUI (most programs don't even have
-any!). However, to give you the creative power explained above, with a CLI
-program, it is best if you first read the manual of any program you are
-using. You don't need to memorize any details, only an understanding of the
-generalities is needed. Once you start working, there are more easier ways
-to remember a particular option or operation detail, see @ref{Getting
-help}.
+On the command-line, you can run any series of actions that you have decided yourself, coming from various CLI-capable programs, in any possible permutation with one command@footnote{By writing a shell script and running it, for example see the tutorials in @ref{Tutorials}.}.
+This allows for much more creativity and exact reproducibility that is not possible for a GUI user.
+For technical and scientific operations, where the same operation (using 
various programs) has to be done on a large set of data files, this is 
crucially important.
+It also allows exact reproducibility which is a foundation principle for 
scientific results.
+The most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash; we strongly encourage you to put aside several hours and go through this beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
+You don't need to read or even fully understand the whole thing; a general knowledge of the first few chapters is enough to get you going.
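+
+As a minimal sketch of this (the file names and the choice of program are only assumptions for illustration), a single shell command can repeat one operation on a whole set of files:
+
+@example
+## Run NoiseChisel on every FITS file here, writing each
+## result to a file with an `nc-' prefix.
+$ for f in *.fits; do astnoisechisel $f --output=nc-$f; done
+@end example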
+
+Since the operations in the GUI are limited and they are visible, reading a 
manual is not that important in the GUI (most programs don't even have any!).
+However, to give you the creative power explained above, with a CLI program, 
it is best if you first read the manual of any program you are using.
+You don't need to memorize any details, only an understanding of the 
generalities is needed.
+Once you start working, there are easier ways to remember a particular option or operation detail, see @ref{Getting help}.
 
 @cindex GNU Emacs
 @cindex Virtual console
-To experience the command-line in its full glory and not in the GUI
-terminal emulator, press the following keys together:
-@key{CTRL+ALT+F4}@footnote{Instead of @key{F4}, you can use any of the keys
-from @key{F1} to @key{F6} for different virtual consoles depending on your
-GNU/Linux distribution, try them all out. You can also run a separate GUI
-from within this console if you want to.} to access the virtual console. To
-return back to your GUI, press the same keys above replacing @key{F4} with
-@key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux
-distribution). In the virtual console, the GUI, with all its distracting
-colors and information, is gone. Enabling you to focus entirely on your
-actual work.
+To experience the command-line in its full glory and not in the GUI terminal 
emulator, press the following keys together: @key{CTRL+ALT+F4}@footnote{Instead 
of @key{F4}, you can use any of the keys from @key{F1} to @key{F6} for 
different virtual consoles depending on your GNU/Linux distribution, try them 
all out.
+You can also run a separate GUI from within this console if you want to.} to 
access the virtual console.
+To return to your GUI, press the same keys above replacing @key{F4} with @key{F7} (or @key{F1}, or @key{F2}, depending on your GNU/Linux distribution).
+In the virtual console, the GUI, with all its distracting colors and information, is gone, enabling you to focus entirely on your actual work.
 
 @cindex Resource heavy operations
-For operations that use a lot of your system's resources (processing a
-large number of large astronomical images for example), the virtual
-console is the place to run them. This is because the GUI is not
-competing with your research work for your system's RAM and CPU. Since
-the virtual consoles are completely independent, you can even log out
-of your GUI environment to give even more of your hardware resources
-to the programs you are running and thus reduce the operating time.
+For operations that use a lot of your system's resources (processing a large 
number of large astronomical images for example), the virtual console is the 
place to run them.
+This is because the GUI is not competing with your research work for your 
system's RAM and CPU.
+Since the virtual consoles are completely independent, you can even log out of 
your GUI environment to give even more of your hardware resources to the 
programs you are running and thus reduce the operating time.
 
 @cindex Secure shell
 @cindex SSH
 @cindex Remote operation
-Since it uses far less system resources, the CLI is also convenient for
-remote access to your computer. Using secure shell (SSH) you can log in
-securely to your system (similar to the virtual console) from anywhere even
-if the connection speeds are low. There are apps for smart phones and
-tablets which allow you to do this.
+Since it uses far fewer system resources, the CLI is also convenient for remote access to your computer.
+Using secure shell (SSH) you can log in securely to your system (similar to the virtual console) from anywhere, even if the connection speed is low.
+There are apps for smartphones and tablets that allow you to do this.
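+
+For example (the user and host names below are hypothetical; replace them with your own), a remote login is a single command:
+
+@example
+$ ssh yourname@@remote.university.edu
+@end example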
 
 
 
@@ -1484,79 +1195,48 @@ tablets which allow you to do this.
 @cindex Halted program
 @cindex Program crashing
 @cindex Inconsistent results
-According to Wikipedia ``a software bug is an error, flaw, failure, or
-fault in a computer program or system that causes it to produce an
-incorrect or unexpected result, or to behave in unintended ways''. So when
-you see that a program is crashing, not reading your input correctly,
-giving the wrong results, or not writing your output correctly, you have
-found a bug. In such cases, it is best if you report the bug to the
-developers. The programs will also inform you if known impossible
-situations occur (which are caused by something unexpected) and will ask
-the users to report the bug issue.
+According to Wikipedia ``a software bug is an error, flaw, failure, or fault 
in a computer program or system that causes it to produce an incorrect or 
unexpected result, or to behave in unintended ways''.
+So when you see that a program is crashing, not reading your input correctly, 
giving the wrong results, or not writing your output correctly, you have found 
a bug.
+In such cases, it is best if you report the bug to the developers.
+The programs will also inform you if known impossible situations occur (which are caused by something unexpected) and will ask you to report the bug.
 
 @cindex Bug reporting
-Prior to actually filing a bug report, it is best to search previous
-reports. The issue might have already been found and even solved. The best
-place to check if your bug has already been discussed is the bugs tracker
-on @ref{Gnuastro project webpage} at
-@url{https://savannah.gnu.org/bugs/?group=gnuastro}. In the top search
-fields (under ``Display Criteria'') set the ``Open/Closed'' drop-down menu
-to ``Any'' and choose the respective program or general category of the bug
-in ``Category'' and click the ``Apply'' button. The results colored green
-have already been solved and the status of those colored in red is shown in
-the table.
+Prior to actually filing a bug report, it is best to search previous reports.
+The issue might have already been found and even solved.
+The best place to check if your bug has already been discussed is the bugs 
tracker on @ref{Gnuastro project webpage} at 
@url{https://savannah.gnu.org/bugs/?group=gnuastro}.
+In the top search fields (under ``Display Criteria'') set the ``Open/Closed'' 
drop-down menu to ``Any'' and choose the respective program or general category 
of the bug in ``Category'' and click the ``Apply'' button.
+The results colored green have already been solved and the status of those 
colored in red is shown in the table.
 
 @cindex Version control
-Recently corrected bugs are probably not yet publicly released because
-they are scheduled for the next Gnuastro stable release. If the bug is
-solved but not yet released and it is an urgent issue for you, you can
-get the version controlled source and compile that, see @ref{Version
-controlled source}.
-
-To solve the issue as readily as possible, please follow the following to
-guidelines in your bug report. The
-@url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report
-Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html,
-How To Ask Questions The Smart Way} essays also provide some good generic
-advice for all software (don't contact their authors for Gnuastro's
-problems). Mastering the art of giving good bug reports (like asking good
-questions) can greatly enhance your experience with any free and open
-source software. So investing the time to read through these essays will
-greatly reduce your frustration after you see something doesn't work the
-way you feel it is supposed to for a large range of software, not just
-Gnuastro.
+Recently corrected bugs are probably not yet publicly released because they 
are scheduled for the next Gnuastro stable release.
+If the bug is solved but not yet released and it is an urgent issue for you, 
you can get the version controlled source and compile that, see @ref{Version 
controlled source}.
+
+To solve the issue as readily as possible, please follow the following guidelines in your bug report.
+The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report 
Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How 
To Ask Questions The Smart Way} essays also provide some good generic advice 
for all software (don't contact their authors for Gnuastro's problems).
+Mastering the art of giving good bug reports (like asking good questions) can 
greatly enhance your experience with any free and open source software.
+So investing the time to read through these essays will greatly reduce your 
frustration after you see something doesn't work the way you feel it is 
supposed to for a large range of software, not just Gnuastro.
 
 @table @strong
 
 @item Be descriptive
-Please provide as many details as possible and be very descriptive. Explain
-what you expected and what the output was: it might be that your
-expectation was wrong. Also please clearly state which sections of the
-Gnuastro book (this book), or other references you have studied to
-understand the problem. This can be useful in correcting the book (adding
-links to likely places where users will check). But more importantly, it
-will be encouraging for the developers, since you are showing how serious
-you are about the problem and that you have actually put some thought into
-it. ``To be able to ask a question clearly is two-thirds of the way to
-getting it answered.'' -- John Ruskin (1819-1900).
+Please provide as many details as possible and be very descriptive.
+Explain what you expected and what the output was: it might be that your 
expectation was wrong.
+Also please clearly state which sections of the Gnuastro book (this book), or 
other references you have studied to understand the problem.
+This can be useful in correcting the book (adding links to likely places where 
users will check).
+But more importantly, it will be encouraging for the developers, since you are 
showing how serious you are about the problem and that you have actually put 
some thought into it.
+``To be able to ask a question clearly is two-thirds of the way to getting it 
answered.'' -- John Ruskin (1819-1900).
 
 @item Individual and independent bug reports
-If you have found multiple bugs, please send them as separate (and
-independent) bugs (as much as possible). This will significantly help
-us in managing and resolving them sooner.
+If you have found multiple bugs, please send them as separate (and 
independent) bugs (as much as possible).
+This will significantly help us in managing and resolving them sooner.
 
 @cindex Reproducible bug reports
 @item Reproducible bug reports
-If we cannot exactly reproduce your bug, then it is very hard to resolve
-it. So please send us a Minimal working
-example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}}
-along with the description. For example in running a program, please send
-us the full command-line text and the output with the @option{-P} option,
-see @ref{Operating mode options}. If it is caused only for a certain input,
-also send us that input file. In case the input FITS is large, please use
-Crop to only crop the problematic section and make it as small as possible
-so it can easily be uploaded and downloaded and not waste the archive's
-storage, see @ref{Crop}.
+If we cannot exactly reproduce your bug, then it is very hard to resolve it.
+So please send us a Minimal working example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}} along with the description.
+For example, when running a program, please send us the full command-line text and the output with the @option{-P} option, see @ref{Operating mode options}.
+If it occurs only for a certain input, also send us that input file.
+In case the input FITS file is large, please use Crop to only crop the problematic section and make it as small as possible so it can easily be uploaded and downloaded without wasting the archive's storage, see @ref{Crop} and the sketch after this list.
 @end table
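+
+As a rough sketch of preparing such a report (the file names, coordinates and the failing program below are hypothetical), you can crop the problematic region and record the full configuration that was used:
+
+@example
+## Crop the problematic region into a small file.
+$ astcrop large.fits --mode=img --section=100:250,100:250 \
+          --output=bug.fits
+
+## Repeat the failing command on the crop, then print all the
+## parameter values that were used with `-P'.
+$ astnoisechisel bug.fits
+$ astnoisechisel bug.fits -P
+@end example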
 
 @noindent
@@ -1567,33 +1247,28 @@ There are generally two ways to inform us of bugs:
 @cindex Mailing list: bug-gnuastro
 @cindex @code{bug-gnuastro@@gnu.org}
 @item
-Send a mail to @code{bug-gnuastro@@gnu.org}. Any mail you send to this
-address will be distributed through the bug-gnuastro mailing
-list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}. This
-is the simplest way to send us bug reports. The developers will then
-register the bug into the project webpage (next choice) for you.
+Send a mail to @code{bug-gnuastro@@gnu.org}.
+Any mail you send to this address will be distributed through the bug-gnuastro 
mailing 
list@footnote{@url{https://lists.gnu.org/mailman/listinfo/bug-gnuastro}}.
+This is the simplest way to send us bug reports.
+The developers will then register the bug into the project webpage (next 
choice) for you.
 
 @cindex Gnuastro project page
 @cindex Support request manager
 @cindex Submit new tracker item
 @cindex Anonymous bug submission
 @item
-Use the Gnuastro project webpage at
-@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways
-to get to the submission page as listed below. Fill in the form as
-described below and submit it (see @ref{Gnuastro project webpage} for
-more on the project webpage).
+Use the Gnuastro project webpage at 
@url{https://savannah.gnu.org/projects/gnuastro/}: There are two ways to get to 
the submission page as listed below.
+Fill in the form as described below and submit it (see @ref{Gnuastro project 
webpage} for more on the project webpage).
 
 @itemize
 
 @item
-Using the top horizontal menu items, immediately under the top page
-title. Hovering your mouse on ``Support'' will open a drop-down
-list. Select ``Submit new''.
+Using the top horizontal menu items, immediately under the top page title.
+Hovering your mouse on ``Support'' will open a drop-down list.
+Select ``Submit new''.
 
 @item
-In the main body of the page, under the ``Communication tools''
-section, click on ``Submit new item''.
+In the main body of the page, under the ``Communication tools'' section, click 
on ``Submit new item''.
 
 @end itemize
 @end itemize
@@ -1602,15 +1277,10 @@ section, click on ``Submit new item''.
 @cindex Bug tracker
 @cindex Task tracker
 @cindex Viewing trackers
-Once the items have been registered in the mailing list or webpage,
-the developers will add it to either the ``Bug Tracker'' or ``Task
-Manager'' trackers of the Gnuastro project webpage. These two trackers
-can only be edited by the Gnuastro project developers, but they can be
-browsed by anyone, so you can follow the progress on your bug. You are
-most welcome to join us in developing Gnuastro and fixing the bug you
-have found maybe a good starting point. Gnuastro is designed to be
-easy for anyone to develop (see @ref{Science and its tools}) and there
-is a full chapter devoted to developing it: @ref{Developing}.
+Once the items have been registered in the mailing list or webpage, the 
developers will add it to either the ``Bug Tracker'' or ``Task Manager'' 
trackers of the Gnuastro project webpage.
+These two trackers can only be edited by the Gnuastro project developers, but 
they can be browsed by anyone, so you can follow the progress on your bug.
+You are most welcome to join us in developing Gnuastro; fixing the bug you have found may be a good starting point.
+Gnuastro is designed to be easy for anyone to develop (see @ref{Science and 
its tools}) and there is a full chapter devoted to developing it: 
@ref{Developing}.
 
 
 @node Suggest new feature, Announcements, Report a bug, Introduction
@@ -1618,52 +1288,30 @@ is a full chapter devoted to developing it: 
@ref{Developing}.
 
 @cindex Feature requests
 @cindex Additions to Gnuastro
-We would always be happy to hear of suggested new features. For every
-program there are already lists of features that we are planning to
-add. You can see the current list of plans from the Gnuastro project
-webpage at @url{https://savannah.gnu.org/projects/gnuastro/} and following
-@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the
-top of the page immediately under the title, see @ref{Gnuastro project
-webpage}. If you want to request a feature to an existing program, click on
-the ``Display Criteria'' above the list and under ``Category'', choose that
-particular program. Under ``Category'' you can also see the existing
-suggestions for new programs or other cases like installation,
-documentation or libraries. Also be sure to set the ``Open/Closed'' value
-to ``Any''.
-
-If the feature you want to suggest is not already listed in the task
-manager, then follow the steps that are fully described in @ref{Report a
-bug}. Please have in mind that the developers are all busy with their own
-astronomical research, and implementing existing ``task''s to add or
-resolving bugs. Gnuastro is a volunteer effort and none of the developers
-are paid for their hard work. So, although we will try our best, please
-don't not expect that your suggested feature be immediately included (with
-the next release of Gnuastro).
-
-The best person to apply the exciting new feature you have in mind is
-you, since you have the motivation and need. In fact Gnuastro is
-designed for making it as easy as possible for you to hack into it
-(add new features, change existing ones and so on), see @ref{Science
-and its tools}. Please have a look at the chapter devoted to
-developing (@ref{Developing}) and start applying your desired
-feature. Once you have added it, you can use it for your own work and
-if you feel you want others to benefit from your work, you can request
-for it to become part of Gnuastro. You can then join the developers
-and start maintaining your own part of Gnuastro. If you choose to take
-this path of action please contact us before hand (@ref{Report a bug})
-so we can avoid possible duplicate activities and get interested
-people in contact.
+We would always be happy to hear of suggested new features.
+For every program there are already lists of features that we are planning to 
add.
+You can see the current list of plans from the Gnuastro project webpage at 
@url{https://savannah.gnu.org/projects/gnuastro/} and following 
@clicksequence{``Tasks''@click{}``Browse''} on the horizontal menu at the top 
of the page immediately under the title, see @ref{Gnuastro project webpage}.
+If you want to request a feature for an existing program, click on the ``Display Criteria'' above the list and under ``Category'', choose that particular program.
+Under ``Category'' you can also see the existing suggestions for new programs 
or other cases like installation, documentation or libraries.
+Also be sure to set the ``Open/Closed'' value to ``Any''.
+
+If the feature you want to suggest is not already listed in the task manager, 
then follow the steps that are fully described in @ref{Report a bug}.
+Please bear in mind that the developers are all busy with their own astronomical research, and with implementing existing ``task''s or resolving bugs.
+Gnuastro is a volunteer effort and none of the developers are paid for their hard work.
+So, although we will try our best, please don't expect that your suggested feature will be immediately included (with the next release of Gnuastro).
+
+The best person to apply the exciting new feature you have in mind is you, 
since you have the motivation and need.
+In fact, Gnuastro is designed to make it as easy as possible for you to hack into it (add new features, change existing ones and so on), see @ref{Science and its tools}.
+Please have a look at the chapter devoted to developing (@ref{Developing}) and 
start applying your desired feature.
+Once you have added it, you can use it for your own work and if you feel you 
want others to benefit from your work, you can request for it to become part of 
Gnuastro.
+You can then join the developers and start maintaining your own part of 
Gnuastro.
+If you choose to take this path, please contact us beforehand (@ref{Report a bug}) so we can avoid possible duplicate activities and get interested people in contact.
 
 @cartouche
 @noindent
-@strong{Gnuastro is a collection of low level programs:} As described in
-@ref{Program design philosophy}, a founding principle of Gnuastro is that
-each library or program should be basic and low-level. High level jobs
-should be done by running the separate programs or using separate functions
-in succession through a shell script or calling the libraries by higher
-level functions, see the examples in @ref{Tutorials}. So when making the
-suggestions please consider how your desired job can best be broken into
-separate steps and modularized.
+@strong{Gnuastro is a collection of low level programs:} As described in 
@ref{Program design philosophy}, a founding principle of Gnuastro is that each 
library or program should be basic and low-level.
+High level jobs should be done by running the separate programs or using 
separate functions in succession through a shell script or calling the 
libraries by higher level functions, see the examples in @ref{Tutorials}.
+So when making suggestions, please consider how your desired job can best be broken into separate steps and modularized.
 @end cartouche
 
 
@@ -1673,19 +1321,15 @@ separate steps and modularized.
 
 @cindex Announcements
 @cindex Mailing list: info-gnuastro
-Gnuastro has a dedicated mailing list for making announcements
-(@code{info-gnuastro}). Anyone can subscribe to this mailing list. Anytime
-there is a new stable or test release, an email will be circulated
-there. The email contains a summary of the overall changes along with a
-detailed list (from the @file{NEWS} file). This mailing list is thus the
-best way to stay up to date with new releases, easily learn about the
-updated/new features, or dependencies (see @ref{Dependencies}).
+Gnuastro has a dedicated mailing list for making announcements 
(@code{info-gnuastro}).
+Anyone can subscribe to this mailing list.
+Anytime there is a new stable or test release, an email will be circulated 
there.
+The email contains a summary of the overall changes along with a detailed list 
(from the @file{NEWS} file).
+This mailing list is thus the best way to stay up to date with new releases, 
easily learn about the updated/new features, or dependencies (see 
@ref{Dependencies}).
 
-To subscribe to this list, please visit
-@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}. Traffic (number
-of mails per unit time) in this list is designed to be low: only a handful
-of mails per year. Previous announcements are available on
-@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
+To subscribe to this list, please visit 
@url{https://lists.gnu.org/mailman/listinfo/info-gnuastro}.
+Traffic (number of mails per unit time) in this list is designed to be low: 
only a handful of mails per year.
+Previous announcements are available on 
@url{http://lists.gnu.org/archive/html/info-gnuastro/, its archive}.
 
 
 
@@ -1697,29 +1341,19 @@ In this book we have the following conventions:
 @itemize
 
 @item
-All commands that are to be run on the shell (command-line) prompt as the
-user start with a @command{$}. In case they must be run as a super-user or
-system administrator, they will start with a single @command{#}. If the
-command is in a separate line and next line @code{is also in the code type
-face}, but doesn't have any of the @command{$} or @command{#} signs, then
-it is the output of the command after it is run. As a user, you don't need
-to type those lines. A line that starts with @command{##} is just a comment
-for explaining the command to a human reader and must not be typed.
+All commands that are to be run on the shell (command-line) prompt as the user 
start with a @command{$}.
+In case they must be run as a super-user or system administrator, they will 
start with a single @command{#}.
+If the command is in a separate line and the next line @code{is also in the code type face}, but doesn't have any of the @command{$} or @command{#} signs, then it is the output of the command after it is run.
+As a user, you don't need to type those lines.
+A line that starts with @command{##} is just a comment for explaining the 
command to a human reader and must not be typed.
 
 @item
-If the command becomes larger than the page width a @key{\} is
-inserted in the code. If you are typing the code by hand on the
-command-line, you don't need to use multiple lines or add the extra
-space characters, so you can omit them. If you want to copy and paste
-these examples (highly discouraged!) then the @key{\} should stay.
-
-The @key{\} character is a shell escape character which is used
-commonly to make characters which have special meaning for the shell
-loose that special place (the shell will not treat them specially if
-there is a @key{\} behind them). When it is a last character in a line
-(the next character is a new-line character) the new-line character
-looses its meaning an the shell sees it as a simple white-space
-character, enabling you to use multiple lines to write your commands.
+If the command becomes larger than the page width a @key{\} is inserted in the code.
+If you are typing the code by hand on the command-line, you don't need to use multiple lines or add the extra space characters, so you can omit them.
+If you want to copy and paste these examples (highly discouraged!) then the @key{\} should stay.
+
+The @key{\} character is a shell escape character, commonly used to make characters which have special meaning for the shell lose that special meaning (the shell will not treat them specially if there is a @key{\} behind them).
+When it is the last character in a line (the next character is a new-line character) the new-line character loses its meaning and the shell sees it as a simple white-space character, enabling you to use multiple lines to write your commands; see the demonstration after this list.
 
 @end itemize
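+
+As a short demonstration of these conventions (the command itself is arbitrary), note the comment line, the @command{$} prompt, the @key{\} continuation, and the output line that must not be typed:
+
+@example
+## This comment line must not be typed.
+$ echo "This command is split \
+over two lines"
+This command is split over two lines
+@end example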
 
@@ -1728,77 +1362,98 @@ character, enabling you to use multiple lines to write 
your commands.
 @node Acknowledgments,  , Conventions, Introduction
 @section Acknowledgments
 
-Gnuastro would not have been possible without scholarships and grants from
-several funding institutions. We thus ask that if you used Gnuastro in any
-of your papers/reports, please add the proper citation and acknowledge the
-funding agencies/projects. For details of which papers to cite (may be
-different for different programs) and get the acknowledgment statement to
-include in your paper, please run the relevant programs with the common
-@option{--cite} option like the example commands below (for more on
-@option{--cite}, please see @ref{Operating mode options}).
+Gnuastro would not have been possible without scholarships and grants from 
several funding institutions.
+We thus ask that if you used Gnuastro in any of your papers/reports, please 
add the proper citation and acknowledge the funding agencies/projects.
+For details of which papers to cite (they may be different for different programs) and to get the acknowledgment statement to include in your paper, please run the relevant programs with the common @option{--cite} option like the example commands below (for more on @option{--cite}, please see @ref{Operating mode options}).
 
 @example
 $ astnoisechisel --cite
 $ astmkcatalog --cite
 @end example
 
-Here, we'll acknowledge all the institutions (and their grants) along with
-the people who helped make Gnuastro possible. The full list of Gnuastro
-authors is available at the start of this book and the @file{AUTHORS} file
-in the source code (both are generated automatically from the version
-controlled history). The plain text file @file{THANKS}, which is also
-distributed along with the source code, contains the list of people and
-institutions who played an indirect role in Gnuastro (not committed any
-code in the Gnuastro version controlled history).
-
-The Japanese Ministry of Education, Culture, Sports, Science, and
-Technology (MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD
-degree in Tohoku University Astronomical Institute had an instrumental role
-in the long term learning and planning that made the idea of Gnuastro
-possible. The very critical view points of Professor Takashi Ichikawa
-(Mohammad's adviser) were also instrumental in the initial ideas and
-creation of Gnuastro. Afterwards, the European Research Council (ERC)
-advanced grant 339659-MUSICOS (Principal investigator: Roland Bacon) was
-vital in the growth and expansion of Gnuastro. Working with Roland at the
-Centre de Recherche Astrophysique de Lyon (CRAL), enabled a thorough
-re-write of the core functionality of all libraries and programs, turning
-Gnuastro into the large collection of generic programs and libraries it is
-today.  Work on improving Gnuastro and making it mature is now continuing
-primarily in the Instituto de Astrofisica de Canarias (IAC) and in
-particular in collaboration with Johan Knapen and Ignacio Trujillo.
+Here, we'll acknowledge all the institutions (and their grants) along with the 
people who helped make Gnuastro possible.
+The full list of Gnuastro authors is available at the start of this book and 
the @file{AUTHORS} file in the source code (both are generated automatically 
from the version controlled history).
+The plain text file @file{THANKS}, which is also distributed along with the source code, contains the list of people and institutions who played an indirect role in Gnuastro (who have not committed any code in the Gnuastro version controlled history).
+
+The Japanese Ministry of Education, Culture, Sports, Science, and Technology 
(MEXT) scholarship for Mohammad Akhlaghi's Masters and PhD degree in Tohoku 
University Astronomical Institute had an instrumental role in the long term 
learning and planning that made the idea of Gnuastro possible.
+The very critical view points of Professor Takashi Ichikawa (Mohammad's 
adviser) were also instrumental in the initial ideas and creation of Gnuastro.
+Afterwards, the European Research Council (ERC) advanced grant 339659-MUSICOS 
(Principal investigator: Roland Bacon) was vital in the growth and expansion of 
Gnuastro.
+Working with Roland at the Centre de Recherche Astrophysique de Lyon (CRAL) enabled a thorough re-write of the core functionality of all libraries and programs, turning Gnuastro into the large collection of generic programs and libraries it is today.
+Work on improving Gnuastro and making it mature is now continuing primarily in 
the Instituto de Astrofisica de Canarias (IAC) and in particular in 
collaboration with Johan Knapen and Ignacio Trujillo.
 
 @c To the developers: please keep this in the same order as the THANKS file
 @c (alphabetical, except for the names in the paragraph above).
-In general, we would like to gratefully thank the following people for
-their useful and constructive comments and suggestions (in alphabetical
-order by family name): Valentina Abril-melgarejo, Marjan Akbari, Hamed
-Altafi, Roland Bacon, Roberto Baena Gall\'e, Zahra Bagheri, Karl Berry,
-Leindert Boogaard, Nicolas Bouch@'e, Fernando Buitrago, Adrian Bunk, Rosa
-Calvi, Nushkia Chamba, Benjamin Clement, Nima Dehdilani, Antonio Diaz Diaz,
-Pierre-Alain Duc, Elham Eftekhari, Gaspar Galaz, Th@'er@`ese Godefroy,
-Madusha Gunawardhana, Bruno Haible, Stephen Hamer, Takashi Ichikawa, Ra@'ul
-Infante Sainz, Brandon Invergo, Oryna Ivashtenko, Aur@'elien Jarno, Lee
-Kelvin, Brandon Kelly, Mohammad-Reza Khellat, Johan Knapen, Geoffry
-Krouchi, Floriane Leclercq, Alan Lefor, Guillaume Mahler, Juan Molina
-Tobar, Francesco Montanari, Dmitrii Oparin, Bertrand Pain, William Pence,
-Mamta Pommier, Bob Proulx, Teymoor Saifollahi, Elham Saremi, Yahya
-Sefidbakht, Alejandro Serrano Borlaff, Zahra Sharbaf, Jenny Sorce, Lee
-Spitler, Richard Stallman, Michael Stein, Ole Streicher, Alfred M. Szmidt,
-Michel Tallon, Juan C. Tello, @'Eric Thi@'ebaut, Ignacio Trujillo, David
-Valls-Gabaud, Aaron Watkins, Michael H.F. Wilkinson, Christopher Willmer,
-Sara Yousefi Taemeh, Johannes Zabl. The GNU French Translation Team is also
-managing the French version of the top Gnuastro webpage which we highly
-appreciate. Finally we should thank all the (sometimes anonymous) people in
-various online forums which patiently answered all our small (but
-important) technical questions.
-
-All work on Gnuastro has been voluntary, but the authors are most grateful
-to the following institutions (in chronological order) for hosting us in
-our research. Where necessary, these institutions have disclaimed any
-ownership of the parts of Gnuastro that were developed there, thus insuring
-the freedom of Gnuastro for the future (see @ref{Copyright assignment}). We
-highly appreciate their support for free software, and thus free science,
-and therefore a free society.
+In general, we would like to gratefully thank the following people for their 
useful and constructive comments and suggestions (in alphabetical order by 
family name):
+Valentina Abril-melgarejo,
+Marjan Akbari,
+Hamed Altafi,
+Roland Bacon,
+Roberto Baena Gall@'e,
+Zahra Bagheri,
+Karl Berry,
+Leindert Boogaard,
+Nicolas Bouch@'e,
+Fernando Buitrago,
+Adrian Bunk,
+Rosa Calvi,
+Nushkia Chamba,
+Benjamin Clement,
+Nima Dehdilani,
+Antonio Diaz Diaz,
+Pierre-Alain Duc,
+Elham Eftekhari,
+Gaspar Galaz,
+Th@'er@`ese Godefroy,
+Madusha Gunawardhana,
+Bruno Haible,
+Stephen Hamer,
+Takashi Ichikawa,
+Ra@'ul Infante Sainz,
+Brandon Invergo,
+Oryna Ivashtenko,
+Aur@'elien Jarno,
+Lee Kelvin,
+Brandon Kelly,
+Mohammad-Reza Khellat,
+Johan Knapen,
+Geoffry Krouchi,
+Floriane Leclercq,
+Alan Lefor,
+Guillaume Mahler,
+Juan Molina Tobar,
+Francesco Montanari,
+Dmitrii Oparin,
+Bertrand Pain,
+William Pence,
+Mamta Pommier,
+Bob Proulx,
+Teymoor Saifollahi,
+Elham Saremi,
+Yahya Sefidbakht,
+Alejandro Serrano Borlaff,
+Zahra Sharbaf,
+Jenny Sorce,
+Lee Spitler,
+Richard Stallman,
+Michael Stein,
+Ole Streicher,
+Alfred M. Szmidt,
+Michel Tallon,
+Juan C. Tello,
+@'Eric Thi@'ebaut,
+Ignacio Trujillo,
+David Valls-Gabaud,
+Aaron Watkins,
+Michael H.F. Wilkinson,
+Christopher Willmer,
+Sara Yousefi Taemeh,
+Johannes Zabl.
+The GNU French Translation Team is also managing the French version of the top Gnuastro webpage, which we highly appreciate.
+Finally, we should thank all the (sometimes anonymous) people in various online forums who patiently answered all our small (but important) technical questions.
+
+All work on Gnuastro has been voluntary, but the authors are most grateful to 
the following institutions (in chronological order) for hosting us in our 
research.
+Where necessary, these institutions have disclaimed any ownership of the parts of Gnuastro that were developed there, thus ensuring the freedom of Gnuastro for the future (see @ref{Copyright assignment}).
+We highly appreciate their support for free software, and thus free science, 
and therefore a free society.
 
 @quotation
 Tohoku University Astronomical Institute, Sendai, Japan.@*
@@ -1824,79 +1479,41 @@ Instituto de Astrofisica de Canarias (IAC), Tenerife, 
Spain.@*
 
 @cindex Tutorial
 @cindex Cookbook
-To help new users have a smooth and easy start with Gnuastro, in this
-chapter several thoroughly elaborated tutorials, or cookbooks, are
-provided. These tutorials demonstrate the capabilities of different
-Gnuastro programs and libraries, along with tips and guidelines for the
-best practices of using them in various realistic situations.
-
-We strongly recommend going through these tutorials to get a good feeling
-of how the programs are related (built in a modular design to be used
-together in a pipeline), very similar to the core Unix-based programs that
-they were modeled on. Therefore these tutorials will greatly help in
-optimally using Gnuastro's programs (and generally, the Unix-like
-command-line environment) effectively for your research.
-
-In @ref{Sufi simulates a detection}, we'll start with a
-fictional@footnote{The two historically motivated tutorials (@ref{Sufi
-simulates a detection} is not intended to be a historical reference (the
-historical facts of this fictional tutorial used Wikipedia as a
-reference). This form of presenting a tutorial was influenced by the
-PGF/TikZ and Beamer manuals. They are both packages in in @TeX{} and
-@LaTeX{}, the first is a high-level vector graphic programming environment,
-while with the second you can make presentation slides. On a similar topic,
-there are also some nice words of wisdom for Unix-like systems called
-@url{http://catb.org/esr/writings/unix-koans, Rootless Root}. These also
-have a similar style but they use a mythical figure named Master Foo. If
-you already have some experience in Unix-like systems, you will definitely
-find these Unix Koans entertaining/educative.} tutorial explaining how Abd
-al-rahman Sufi (903 -- 986 A.D., the first recorded description of
-``nebulous'' objects in the heavens is attributed to him) could have used
-some of Gnuastro's programs for a realistic simulation of his observations
-and see if his detection of nebulous objects was trust-able. Because all
-conditions are under control in a simulated/mock environment/dataset, mock
-datasets can be a valuable tool to inspect the limitations of your data
-analysis and processing. But they need to be as realistic as possible, so
-the first tutorial is dedicated to this important step of an analysis.
-
-The next two tutorials (@ref{General program usage tutorial} and
-@ref{Detecting large extended targets}) use real input datasets from some
-of the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky
-Survey (SDSS) respectively. Their aim is to demonstrate some real-world
-problems that many astronomers often face and how they can be be solved
-with Gnuastro's programs.
-
-The ultimate aim of @ref{General program usage tutorial} is to detect
-galaxies in a deep HST image, measure their positions and brightness and
-select those with the strongest colors. In the process, it takes many
-detours to introduce you to the useful capabilities of many of the
-programs. So please be patient in reading it. If you don't have much time
-and can only try one of the tutorials, we recommend this one.
+To help new users have a smooth and easy start with Gnuastro, in this chapter 
several thoroughly elaborated tutorials, or cookbooks, are provided.
+These tutorials demonstrate the capabilities of different Gnuastro programs 
and libraries, along with tips and guidelines for the best practices of using 
them in various realistic situations.
+
+We strongly recommend going through these tutorials to get a good feeling of 
how the programs are related (built in a modular design to be used together in 
a pipeline), very similar to the core Unix-based programs that they were 
modeled on.
+Therefore these tutorials will greatly help in optimally using Gnuastro's 
programs (and generally, the Unix-like command-line environment) effectively 
for your research.
+
+In @ref{Sufi simulates a detection}, we'll start with a fictional@footnote{This historically motivated tutorial is not intended to be a historical reference (the historical facts of this fictional tutorial used Wikipedia as a reference).
+This form of presenting a tutorial was influenced by the PGF/TikZ and Beamer 
manuals.
+They are both packages in @TeX{} and @LaTeX{}; the first is a high-level vector graphic programming environment, while with the second you can make presentation slides.
+On a similar topic, there are also some nice words of wisdom for Unix-like 
systems called @url{http://catb.org/esr/writings/unix-koans, Rootless Root}.
+These also have a similar style but they use a mythical figure named Master 
Foo.
+If you already have some experience in Unix-like systems, you will definitely find these Unix Koans entertaining/educative.} tutorial explaining how Abd al-rahman Sufi (903 -- 986 A.D., the first recorded description of ``nebulous'' objects in the heavens is attributed to him) could have used some of Gnuastro's programs for a realistic simulation of his observations, to see if his detection of nebulous objects was trustworthy.
+Because all conditions are under control in a simulated/mock 
environment/dataset, mock datasets can be a valuable tool to inspect the 
limitations of your data analysis and processing.
+But they need to be as realistic as possible, so the first tutorial is 
dedicated to this important step of an analysis.
+
+The next two tutorials (@ref{General program usage tutorial} and 
@ref{Detecting large extended targets}) use real input datasets from some of 
the deep Hubble Space Telescope (HST) images and the Sloan Digital Sky Survey 
(SDSS) respectively.
+Their aim is to demonstrate some real-world problems that many astronomers often face and how they can be solved with Gnuastro's programs.
+
+The ultimate aim of @ref{General program usage tutorial} is to detect galaxies 
in a deep HST image, measure their positions and brightness and select those 
with the strongest colors.
+In the process, it takes many detours to introduce you to the useful 
capabilities of many of the programs.
+So please be patient in reading it.
+If you don't have much time and can only try one of the tutorials, we 
recommend this one.
 
 @cindex PSF
 @cindex Point spread function
-@ref{Detecting large extended targets} deals with a major problem in
-astronomy: effectively detecting the faint outer wings of bright (and
-large) nearby galaxies to extremely low surface brightness levels (roughly
-one quarter of the local noise level in the example discussed). Besides the
-interesting scientific questions in these low-surface brightness features,
-failure to properly detect them will bias the measurements of the
-background objects and the survey's noise estimates. This is an important
-issue, especially in wide surveys. Because bright/large galaxies and
-stars@footnote{Stars also have similarly large and extended wings due to
-the point spread function, see @ref{PSF}.}, cover a significant fraction of
-the survey area.
-
-In these tutorials, we have intentionally avoided too many cross references
-to make it more easy to read. For more information about a particular
-program, you can visit the section with the same name as the program in
-this book. Each program section in the subsequent chapters starts by
-explaining the general concepts behind what it does, for example see
-@ref{Convolve}. If you only want practical information on running a
-program, for example its options/configuration, input(s) and output(s),
-please consult the subsection titled ``Invoking ProgramName'', for example
-see @ref{Invoking astnoisechisel}. For an explanation of the conventions we
-use in the example codes through the book, please see @ref{Conventions}.
+@ref{Detecting large extended targets} deals with a major problem in 
astronomy: effectively detecting the faint outer wings of bright (and large) 
nearby galaxies to extremely low surface brightness levels (roughly one quarter 
of the local noise level in the example discussed).
+Besides the interesting scientific questions in these low-surface brightness 
features, failure to properly detect them will bias the measurements of the 
background objects and the survey's noise estimates.
+This is an important issue, especially in wide surveys, because bright/large galaxies and stars@footnote{Stars also have similarly large and extended wings due to the point spread function, see @ref{PSF}.} cover a significant fraction of the survey area.
+
+In these tutorials, we have intentionally avoided too many cross references to make them easier to read.
+For more information about a particular program, you can visit the section 
with the same name as the program in this book.
+Each program section in the subsequent chapters starts by explaining the 
general concepts behind what it does, for example see @ref{Convolve}.
+If you only want practical information on running a program, for example its 
options/configuration, input(s) and output(s), please consult the subsection 
titled ``Invoking ProgramName'', for example see @ref{Invoking astnoisechisel}.
+For an explanation of the conventions we use in the example codes throughout the book, please see @ref{Conventions}.
 
 @menu
 * Sufi simulates a detection::  Simulating a detection.
@@ -1911,82 +1528,56 @@ use in the example codes through the book, please see 
@ref{Conventions}.
 @cindex Azophi
 @cindex Abd al-rahman Sufi
 @cindex Sufi, Abd al-rahman
-It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986
-A.D.)@footnote{In Latin Sufi is known as Azophi. He was an Iranian
-astronomer. His manuscript ``Book of fixed stars'' contains the first
-recorded observations of the Andromeda galaxy, the Large Magellanic Cloud
-and seven other non-stellar or `nebulous' objects.}  is in Shiraz as a
-guest astronomer. He had come there to use the advanced 123 centimeter
-astrolabe for his studies on the Ecliptic. However, something was bothering
-him for a long time. While mapping the constellations, there were several
-non-stellar objects that he had detected in the sky, one of them was in the
-Andromeda constellation. During a trip he had to Yemen, Sufi had seen
-another such object in the southern skies looking over the Indian ocean. He
-wasn't sure if such cloud-like non-stellar objects (which he was the first
-to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical
-objects or if they were only the result of some bias in his
-observations. Could such diffuse objects actually be detected at all with
-his detection technique?
+It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In Latin, Sufi is known as Azophi.
+He was an Iranian astronomer.
+His manuscript ``Book of fixed stars'' contains the first recorded observations of the Andromeda galaxy, the Large Magellanic Cloud and seven other non-stellar or `nebulous' objects.} is in Shiraz as a guest astronomer.
+He had come there to use the advanced 123 centimeter astrolabe for his studies on the Ecliptic.
+However, something had been bothering him for a long time.
+While mapping the constellations, he had detected several non-stellar objects in the sky; one of them was in the Andromeda constellation.
+During a trip to Yemen, Sufi had seen another such object in the southern skies looking over the Indian ocean.
+He wasn't sure if such cloud-like non-stellar objects (which he was the first to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or if they were only the result of some bias in his observations.
+Could such diffuse objects actually be detected at all with his detection technique?
 
 @cindex Almagest
 @cindex Claudius Ptolemy
 @cindex Ptolemy, Claudius
-He still had a few hours left until nightfall (when he would continue
-his studies on the ecliptic) so he decided to find an answer to this
-question. He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D)
-Almagest and had made lots of corrections to it, in particular in
-measuring the brightness. Using his same experience, he was able to
-measure a magnitude for the objects and wanted to simulate his
-observation to see if a simulated object with the same brightness and
-size could be detected in a simulated noise with the same detection
-technique.  The general outline of the steps he wants to take are:
+He still had a few hours left until nightfall (when he would continue his studies on the ecliptic), so he decided to find an answer to this question.
+He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D.) Almagest and had made lots of corrections to it, in particular in measuring the brightness.
+Using that same experience, he was able to measure a magnitude for the objects, and he wanted to simulate his observation to see if a simulated object with the same brightness and size could be detected in simulated noise with the same detection technique.
+The general outline of the steps he wants to take is:
 
 @enumerate
 
 @item
-Make some mock profiles in an over-sampled image. The initial mock
-image has to be over-sampled prior to convolution or other forms of
-transformation in the image. Through his experiences, Sufi knew that
-this is because the image of heavenly bodies is actually transformed
-by the atmosphere or other sources outside the atmosphere (for example
-gravitational lenses) prior to being sampled on an image. Since that
-transformation occurs on a continuous grid, to best approximate it, he
-should do all the work on a finer pixel grid. In the end he can
-re-sample the result to the initially desired grid size.
+Make some mock profiles in an over-sampled image.
+The initial mock image has to be over-sampled prior to convolution or other 
forms of transformation in the image.
+Through his experiences, Sufi knew that this is because the image of heavenly 
bodies is actually transformed by the atmosphere or other sources outside the 
atmosphere (for example gravitational lenses) prior to being sampled on an 
image.
+Since that transformation occurs on a continuous grid, to best approximate it, 
he should do all the work on a finer pixel grid.
+In the end he can re-sample the result to the initially desired grid size.
 
 @item
 @cindex PSF
-Convolve the image with a point spread function (PSF, see @ref{PSF}) that
-is over-sampled to the same resolution as the mock image. Since he wants to
-finish in a reasonable time and the PSF kernel will be very large due to
-oversampling, he has to use frequency domain convolution which has the side
-effect of dimming the edges of the image. So in the first step above he
-also has to build the image to be larger by at least half the width of the
-PSF convolution kernel on each edge.
+Convolve the image with a point spread function (PSF, see @ref{PSF}) that is 
over-sampled to the same resolution as the mock image.
+Since he wants to finish in a reasonable time and the PSF kernel will be very 
large due to oversampling, he has to use frequency domain convolution which has 
the side effect of dimming the edges of the image.
+So in the first step above he also has to build the image to be larger by at 
least half the width of the PSF convolution kernel on each edge.
 
 @item
-With all the transformations complete, the image should be re-sampled
-to the same size of the pixels in his detector.
+With all the transformations complete, the image should be re-sampled to the same size as the pixels of his detector.
 
 @item
-He should remove those extra pixels on all edges to remove frequency
-domain convolution artifacts in the final product.
+He should crop those extra pixels from all the edges to remove the frequency domain convolution artifacts in the final product.
 
 @item
-He should add noise to the (until now, noise-less) mock image. After
-all, all observations have noise associated with them.
+He should add noise to the (until now, noise-less) mock image.
+After all, all observations have noise associated with them.
 
 @end enumerate
 
-Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in
-Isfahan (where he worked) and had installed it on his computer a year
-before. It had tools to do all the steps above. He had used MakeProfiles
-before, but wasn't sure which columns he had chosen in his user or system
-wide configuration files for which parameters, see @ref{Configuration
-files}. So to start his simulation, Sufi runs MakeProfiles with the
-@option{-P} option to make sure what columns in a catalog MakeProfiles
-currently recognizes and the output image parameters. In particular, Sufi
-is interested in the recognized columns (shown below).
+Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in 
Isfahan (where he worked) and had installed it on his computer a year before.
+It had tools to do all the steps above.
+He had used MakeProfiles before, but wasn't sure which columns he had chosen 
in his user or system wide configuration files for which parameters, see 
@ref{Configuration files}.
+So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option to check which catalog columns MakeProfiles currently recognizes, along with the output image parameters.
+In particular, Sufi is interested in the recognized columns (shown below).
 
 @example
 $ astmkprof -P
@@ -2017,46 +1608,23 @@ $ astmkprof -P
 @end example
 
 @noindent
-In Gnuastro, column counting starts from 1, so the columns are ordered such
-that the first column (number 1) can be an ID he specifies for each object
-(and MakeProfiles ignores), each subsequent column is used for another
-property of the profile. It is also possible to use column names for the
-values of these options and change these defaults, but Sufi preferred to
-stick to the defaults. Fortunately MakeProfiles has the capability to also
-make the PSF which is to be used on the mock image and using the
-@option{--prepforconv} option, he can also make the mock image to be larger
-by the correct amount and all the sources to be shifted by the correct
-amount.
-
-For his initial check he decides to simulate the nebula in the Andromeda
-constellation. The night he was observing, the PSF had roughly a FWHM of
-about 5 pixels, so as the first row (profile), he defines the PSF
-parameters and sets the radius column (@code{rcol} above, fifth column) to
-@code{5.000}, he also chooses a Moffat function for its functional
-form. Remembering how diffuse the nebula in the Andromeda constellation
-was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile. He
-wants the output to be 499 pixels by 499 pixels, so he can put the center
-of the mock profile in the central pixel of the image (note that an even
-number doesn't have a central element).
-
-Looking at his drawings of it, he decides a reasonable effective radius for
-it would be 40 pixels on this image pixel scale, he sets the axis ratio and
-position angle to approximately correct values too and finally he sets the
-total magnitude of the profile to 3.44 which he had accurately
-measured. Sufi also decides to truncate both the mock profile and PSF at 5
-times the respective radius parameters. In the end he decides to put four
-stars on the four corners of the image at very low magnitudes as a visual
-scale.
-
-Using all the information above, he creates the catalog of mock profiles he
-wants in a file named @file{cat.txt} (short for catalog) using his favorite
-text editor and stores it in a directory named @file{simulationtest} in his
-home directory. [The @command{cat} command prints the contents of a file,
-short for ``concatenation''. So please copy-paste the lines after
-``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the
-steps above it, note that there are 7 lines, first one starting with
-@key{#}. Also be careful when copying from the PDF format, the Info, web,
-or text formats shouldn't have any problem]:
+In Gnuastro, column counting starts from 1, so the columns are ordered such that the first column (number 1) can be an ID he specifies for each object (which MakeProfiles ignores); each subsequent column is used for another property of the profile.
+It is also possible to use column names for the values of these options and change these defaults, but Sufi preferred to stick to the defaults.
+Fortunately, MakeProfiles can also make the PSF which is to be used on the mock image; with the @option{--prepforconv} option, he can also make the mock image larger by the correct amount and have all the sources shifted by the correct amount.
+
+For his initial check he decides to simulate the nebula in the Andromeda constellation.
+The night he was observing, the PSF had a FWHM of about 5 pixels, so as the first row (profile), he defines the PSF parameters and sets the radius column (@code{rcol} above, fifth column) to @code{5.000}; he also chooses a Moffat function for its functional form.
+Remembering how diffuse the nebula in the Andromeda constellation was, he decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
+He wants the output to be 499 pixels by 499 pixels, so he can put the center of the mock profile in the central pixel of the image (note that an even number doesn't have a central element).
+
+Looking at his drawings of it, he decides that a reasonable effective radius for it would be 40 pixels on this image pixel scale.
+He sets the axis ratio and position angle to approximately correct values too, and finally he sets the total magnitude of the profile to 3.44, which he had accurately measured.
+Sufi also decides to truncate both the mock profile and PSF at 5 times the respective radius parameters.
+In the end he decides to put four stars on the four corners of the image at very low magnitudes as a visual scale.
+
+Using all the information above, he creates the catalog of mock profiles he wants in a file named @file{cat.txt} (short for catalog) using his favorite text editor and stores it in a directory named @file{simulationtest} in his home directory.
+[The @command{cat} command, short for ``concatenation'', prints the contents of a file.
+So please copy-paste the lines after ``@command{cat cat.txt}'' into @file{cat.txt} when the editor opens in the steps above it; note that there are 7 lines, the first one starting with @key{#}.
+Also be careful when copying from the PDF format; the Info, web, or text formats shouldn't have any problem.]
 
 @example
 $ mkdir ~/simulationtest
@@ -2077,8 +1645,8 @@ $ cat cat.txt
 @end example
 
 @noindent
-The zero-point magnitude for his observation was 18. Now he has all the
-necessary parameters and runs MakeProfiles with the following command:
+The zero-point magnitude for his observation was 18.
+Now he has all the necessary parameters and runs MakeProfiles with the 
following command:
 
 @example
 
@@ -2103,28 +1671,21 @@ $ls
 
 @cindex Oversample
 @noindent
-The file @file{0_cat.fits} is the PSF Sufi had asked for and
-@file{cat.fits} is the image containing the other 5 objects. The PSF is now
-available to him as a separate file for the convolution step. While he was
-preparing the catalog, one of his students approached him and was also
-following the steps. When he opened the image, the student was surprised to
-see that all the stars are only one pixel and not in the shape of the PSF
-as we see when we image the sky at night. So Sufi explained to him that the
-stars will take the shape of the PSF after convolution and this is how they
-would look if we didn't have an atmosphere or an aperture when we took the
-image. The size of the image was also surprising for the student, instead
-of 499 by 499, it was 2615 by 2615 pixels (from the command below):
+The file @file{0_cat.fits} is the PSF Sufi had asked for and @file{cat.fits} 
is the image containing the other 5 objects.
+The PSF is now available to him as a separate file for the convolution step.
+While he was preparing the catalog, one of his students approached him and was 
also following the steps.
+When he opened the image, the student was surprised to see that all the stars occupy only one pixel each, and are not in the shape of the PSF as we see when we image the sky at night.
+So Sufi explained to him that the stars will take the shape of the PSF after 
convolution and this is how they would look if we didn't have an atmosphere or 
an aperture when we took the image.
+The size of the image was also surprising for the student, instead of 499 by 
499, it was 2615 by 2615 pixels (from the command below):
 
 @example
 $ astfits cat.fits -h1 | grep NAXIS
 @end example
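+
+As a quick consistency check (using the 12-pixel margin that will be explained below), the student could have verified this number himself: @option{--prepforconv} made the image 12 pixels larger on each side before the 5-times over-sampling, so @mymath{2615=5\times(499+12+12)}.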
 
 @noindent
-So Sufi explained why oversampling is important for parts of the image
-where the flux change is significant over a pixel. Sufi then explained to
-him that after convolving we will re-sample the image to get our originally
-desired size/resolution. To convolve the image, Sufi ran the following
-command:
+So Sufi explained why oversampling is important for parts of the image where 
the flux change is significant over a pixel.
+Sufi then explained to him that after convolving we will re-sample the image 
to get our originally desired size/resolution.
+To convolve the image, Sufi ran the following command:
 
 @example
 $ astconvolve --kernel=0_cat.fits cat.fits
@@ -2145,14 +1706,9 @@ $ls
 @end example
 
 @noindent
-When convolution finished, Sufi opened the @file{cat_convolved.fits} file
-and showed the effect of convolution to his student and explained to him
-how a PSF with a larger FWHM would make the points even wider. With the
-convolved image ready, they were prepared to re-sample it to the original
-pixel scale Sufi had planned [from the @command{$ astmkprof -P} command
-above, recall that MakeProfiles had over-sampled the image by 5
-times]. Sufi explained the basic concepts of warping the image to his
-student and ran Warp with the following command:
+When convolution finished, Sufi opened the @file{cat_convolved.fits} file and 
showed the effect of convolution to his student and explained to him how a PSF 
with a larger FWHM would make the points even wider.
+With the convolved image ready, they were prepared to re-sample it to the 
original pixel scale Sufi had planned [from the @command{$ astmkprof -P} 
command above, recall that MakeProfiles had over-sampled the image by 5 times].
+Sufi explained the basic concepts of warping the image to his student and ran 
Warp with the following command:
 
 @example
 $ astwarp --scale=1/5 --centeroncorner cat_convolved.fits
@@ -2175,10 +1731,9 @@ NAXIS2  =                  523 / length of data axis 2
 @end example
 
 @noindent
-@file{cat_convolved_scaled.fits} now has the correct pixel scale. However,
-the image is still larger than what we had wanted, it is 523
-(@mymath{499+12+12}) by 523 pixels. The student is slightly confused, so
-Sufi also re-samples the PSF with the same scale by running
+@file{cat_convolved_scaled.fits} now has the correct pixel scale.
+However, the image is still larger than what we had wanted: it is 523 (@mymath{499+12+12}) by 523 pixels.
+The student is slightly confused, so Sufi also re-samples the PSF with the same scale by running
 
 @example
 $ astwarp --scale=1/5 --centeroncorner 0_cat.fits
@@ -2189,13 +1744,9 @@ NAXIS2  =                   25 / length of data axis 2
 @end example
 
 @noindent
-Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how
-frequency space convolution will dim the edges and that is why he added the
-@option{--prepforconv} option to MakeProfiles, see @ref{If convolving
-afterwards}. Now that convolution is done, Sufi can remove those extra
-pixels using Crop with the command below. Crop's @option{--section} option
-accepts coordinates inclusively and counting from 1 (according to the FITS
-standard), so the crop region's first pixel has to be 13, not 12.
+Sufi notes that @mymath{25=(2\times12)+1} and goes on to explain how frequency 
space convolution will dim the edges and that is why he added the 
@option{--prepforconv} option to MakeProfiles, see @ref{If convolving 
afterwards}.
+Now that convolution is done, Sufi can remove those extra pixels using Crop 
with the command below.
+Crop's @option{--section} option accepts coordinates inclusively and counting 
from 1 (according to the FITS standard), so the crop region's first pixel has 
to be 13, not 12.
 
 @example
 $ astcrop cat_convolved_scaled.fits --section=13:*-12,13:*-12    \
@@ -2211,19 +1762,13 @@ cat_convolved.fits  cat_convolved_scaled.fits          
cat.txt
 @end example
 
 @noindent
-Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499}
-pixels and the mock Andromeda galaxy is centered on the central pixel (open
-the image in a FITS viewer and confirm this by zooming into the center,
-note that an even-width image wouldn't have a central pixel). This is the
-same dimensions as Sufi had desired in the beginning. All this trouble was
-certainly worth it because now there is no dimming on the edges of the
-image and the profile centers are more accurately sampled.
+Finally, @file{cat_convolved_scaled_cropped.fits} is @mymath{499\times499} pixels and the mock Andromeda galaxy is centered on the central pixel (open the image in a FITS viewer and confirm this by zooming into the center; note that an even-width image wouldn't have a central pixel).
+These are the same dimensions as Sufi had desired in the beginning.
+All this trouble was certainly worth it because now there is no dimming on the edges of the image and the profile centers are more accurately sampled.
 
-The final step to simulate a real observation would be to add noise to the
-image. Sufi set the zeropoint magnitude to the same value that he set when
-making the mock profiles and looking again at his observation log, he had
-measured the background flux near the nebula had a magnitude of 7 that
-night. So using these values he ran MakeNoise:
+The final step to simulate a real observation would be to add noise to the image.
+Sufi set the zeropoint magnitude to the same value that he had set when making the mock profiles.
+Looking again at his observation log, he saw that the background flux near the nebula had a magnitude of 7 that night.
+So using these values he ran MakeNoise:
 
 @example
 $ astmknoise --zeropoint=18 --background=7 --output=out.fits    \
@@ -2239,26 +1784,17 @@ cat_convolved.fits cat_convolved_scaled.fits         
cat.txt
 @end example
 
 @noindent
-The @file{out.fits} file now contains the noised image of the mock catalog
-Sufi had asked for. Seeing how the @option{--output} option allows the user
-to specify the name of the output file, the student was confused and wanted
-to know why Sufi hadn't used it before? Sufi then explained to him that for
-intermediate steps it is best to rely on the automatic output, see
-@ref{Automatic output}. Doing so will give all the intermediate files the
-same basic name structure, so in the end you can simply remove them all
-with the Shell's capabilities. So Sufi decided to show this to the student
-by making a shell script from the commands he had used before.
+The @file{out.fits} file now contains the noised image of the mock catalog Sufi had asked for.
+Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi hadn't used it before.
+Sufi then explained to him that for intermediate steps it is best to rely on the automatic output, see @ref{Automatic output}.
+Doing so will give all the intermediate files the same basic name structure, so in the end you can simply remove them all with the shell's capabilities.
+So Sufi decided to show this to the student by making a shell script from the commands he had used before.
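+For example, since the automatic outputs above all share the input's base name, a single command like the one below can remove all the intermediate files at once (a sketch of the cleanup step that appears at the end of the script below; only run it when no other needed files match these wildcards):
+
+@example
+$ rm 0*.fits cat*.fits      # Keeps cat.txt and out.fits.
+@end example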
 
-The command-line shell has the capability to read all the separate input
-commands from a file. This is useful when you want to do the same thing
-multiple times, with only the names of the files or minor parameters
-changing between the different instances. Using the shell's history (by
-pressing the up keyboard key) Sufi reviewed all the commands and then he
-retrieved the last 5 commands with the @command{$ history 5} command. He
-selected all those lines he had input and put them in a text file named
-@file{mymock.sh}. Then he defined the @code{edge} and @code{base} shell
-variables for easier customization later. Finally, before every command, he
-added some comments (lines starting with @key{#}) for future readability.
+The command-line shell has the capability to read all the separate input 
commands from a file.
+This is useful when you want to do the same thing multiple times, with only 
the names of the files or minor parameters changing between the different 
instances.
+Using the shell's history (by pressing the ``up'' arrow key), Sufi reviewed all the commands and then retrieved the last 5 commands with the @command{$ history 5} command.
+He selected all those lines he had input and put them in a text file named 
@file{mymock.sh}.
+Then he defined the @code{edge} and @code{base} shell variables for easier 
customization later.
+Finally, before every command, he added some comments (lines starting with 
@key{#}) for future readability.
 
 @example
 edge=12
@@ -2297,31 +1833,19 @@ rm 0*.fits "$base"*.fits
 @end example
 
 @cindex Comments
-He used this chance to remind the student of the importance of comments in
-code or shell scripts: when writing the code, you have a good mental
-picture of what you are doing, so writing comments might seem superfluous
-and excessive. However, in one month when you want to re-use the script,
-you have lost that mental picture and remembering it can be time-consuming
-and frustrating. The importance of comments is further amplified when you
-want to share the script with a friend/colleague. So it is good to
-accompany any script/code with useful comments while you are writing it
-(create a good mental picture of what/why you are doing something).
+He used this chance to remind the student of the importance of comments in code or shell scripts: when writing the code, you have a good mental picture of what you are doing, so writing comments might seem superfluous and excessive.
+However, in one month, when you want to re-use the script, you will have lost that mental picture and remembering it can be time-consuming and frustrating.
+The importance of comments is further amplified when you want to share the script with a friend/colleague.
+So it is good to accompany any script/code with useful comments while you are writing it (while you still have a good mental picture of what/why you are doing).
 
 @cindex Gedit
 @cindex GNU Emacs
-Sufi then explained to the eager student that you define a variable by
-giving it a name, followed by an @code{=} sign and the value you
-want. Then you can reference that variable from anywhere in the script
-by calling its name with a @code{$} prefix. So in the script whenever
-you see @code{$base}, the value we defined for it above is used. If
-you use advanced editors like GNU Emacs or even simpler ones like
-Gedit (part of the GNOME graphical user interface) the variables will
-become a different color which can really help in understanding the
-script. We have put all the @code{$base} variables in double quotation
-marks (@code{"}) so the variable name and the following text do not
-get mixed, the shell is going to ignore the @code{"} after replacing
-the variable value. To make the script executable, Sufi ran the
-following command:
+Sufi then explained to the eager student that you define a variable by giving it a name, followed by an @code{=} sign and the value you want.
+Then you can reference that variable from anywhere in the script by calling its name with a @code{$} prefix.
+So in the script, whenever you see @code{$base}, the value we defined for it above is used.
+If you use advanced editors like GNU Emacs, or even simpler ones like Gedit (part of the GNOME graphical user interface), the variables will be shown in a different color, which can really help in understanding the script.
+We have put all the @code{$base} variables in double quotation marks (@code{"}) so the variable name and the following text do not get mixed; the shell will ignore the @code{"} after replacing the variable's value.
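+As a minimal illustration of why this quoting matters (the variable value below is only for demonstration and is not part of the script), the student could try the following directly on the command-line:
+
+@example
+$ base=cat
+$ echo "$base"_convolved.fits    # Prints `cat_convolved.fits'.
+$ echo $base_convolved.fits      # Prints only `.fits': the shell looks
+                                 # for a variable named `base_convolved',
+                                 # which isn't defined.
+@end example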
+To make the script executable, Sufi ran the following command:
 
 @example
 $ chmod +x mymock.sh
@@ -2334,40 +1858,21 @@ Then finally, Sufi ran the script, simply by calling 
its file name:
 $ ./mymock.sh
 @end example
 
-After the script finished, the only file remaining is the @file{out.fits}
-file that Sufi had wanted in the beginning.  Sufi then explained to the
-student how he could run this script anywhere that he has a catalog if the
-script is in the same directory. The only thing the student had to modify
-in the script was the name of the catalog (the value of the @code{base}
-variable in the start of the script) and the value to the @code{edge}
-variable if he changed the PSF size. The student was also happy to hear
-that he won't need to make it executable again when he makes changes later,
-it will remain executable unless he explicitly changes the executable flag
-with @command{chmod}.
-
-The student was really excited, since now, through simple shell
-scripting, he could really speed up his work and run any command in
-any fashion he likes allowing him to be much more creative in his
-works. Until now he was using the graphical user interface which
-doesn't have such a facility and doing repetitive things on it was
-really frustrating and some times he would make mistakes. So he left
-to go and try scripting on his own computer.
-
-Sufi could now get back to his own work and see if the simulated
-nebula which resembled the one in the Andromeda constellation could be
-detected or not. Although it was extremely faint@footnote{The
-brightness of a diffuse object is added over all its pixels to give
-its final magnitude, see @ref{Flux Brightness and magnitude}. So
-although the magnitude 3.44 (of the mock nebula) is orders of
-magnitude brighter than 6 (of the stars), the central galaxy is much
-fainter. Put another way, the brightness is distributed over a large
-area in the case of a nebula.}, fortunately it passed his detection
-tests and he wrote it in the draft manuscript that would later become
-``Book of fixed stars''. He still had to check the other nebula he saw
-from Yemen and several other such objects, but they could wait until
-tomorrow (thanks to the shell script, he only has to define a new
-catalog). It was nearly sunset and they had to begin preparing for the
-night's measurements on the ecliptic.
+After the script finished, the only file remaining is the @file{out.fits} file that Sufi had wanted in the beginning.
+Sufi then explained to the student how he could run this script anywhere that he has a catalog, if the script is in the same directory.
+The only thing the student had to modify in the script was the name of the catalog (the value of the @code{base} variable at the start of the script) and the value of the @code{edge} variable if he changed the PSF size.
+The student was also happy to hear that he won't need to make it executable again when he makes changes later; it will remain executable unless he explicitly changes the executable flag with @command{chmod}.
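+For example, to repeat the simulation on a hypothetical new catalog named @file{andromeda2.txt} with a wider PSF, only the first two lines of @file{mymock.sh} would need to change (the values below are purely illustrative):
+
+@example
+edge=20            # Assuming a wider PSF kernel for the new catalog.
+base=andromeda2    # The new catalog is `andromeda2.txt'.
+@end example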
+
+The student was really excited, since now, through simple shell scripting, he could really speed up his work and run any command in any fashion he likes, allowing him to be much more creative in his work.
+Until now he was using the graphical user interface, which doesn't have such a facility, and doing repetitive things on it was really frustrating and sometimes he would make mistakes.
+So he left to go and try scripting on his own computer.
+
+Sufi could now get back to his own work and see if the simulated nebula which 
resembled the one in the Andromeda constellation could be detected or not.
+Although it was extremely faint@footnote{The brightness of a diffuse object is 
added over all its pixels to give its final magnitude, see @ref{Flux Brightness 
and magnitude}.
+So although the magnitude 3.44 (of the mock nebula) is orders of magnitude 
brighter than 6 (of the stars), the central galaxy is much fainter.
+Put another way, the brightness is distributed over a large area in the case 
of a nebula.}, fortunately it passed his detection tests and he wrote it in the 
draft manuscript that would later become ``Book of fixed stars''.
+He still had to check the other nebula he had seen from Yemen and several other such objects, but they could wait until tomorrow (thanks to the shell script, he only had to define a new catalog).
+It was nearly sunset and they had to begin preparing for the night's measurements on the ecliptic.
 
 
 @menu
@@ -2379,51 +1884,29 @@ night's measurements on the ecliptic.
 
 @cindex Hubble Space Telescope (HST)
 @cindex Colors, broad-band photometry
-Measuring colors of astronomical objects in broad-band or narrow-band
-images is one of the most basic and common steps in astronomical
-analysis. Here, we will use Gnuastro's programs to get a physical scale
-(area at certain redshifts) of the field we are studying, detect objects in
-a Hubble Space Telescope (HST) image, measure their colors and identify the
-ones with the strongest colors, do a visual inspection of these objects and
-inspect spatial position in the image. After this tutorial, you can also
-try the @ref{Detecting large extended targets} tutorial which goes into a
-little more detail on detecting very low surface brightness signal.
-
-During the tutorial, we will take many detours to explain, and practically
-demonstrate, the many capabilities of Gnuastro's programs. In the end you
-will see that the things you learned during this tutorial are much more
-generic than this particular problem and can be used in solving a wide
-variety of problems involving the analysis of data (images or tables). So
-please don't rush, and go through the steps patiently to optimally master
-Gnuastro.
+Measuring colors of astronomical objects in broad-band or narrow-band images 
is one of the most basic and common steps in astronomical analysis.
+Here, we will use Gnuastro's programs to get a physical scale (area at certain redshifts) of the field we are studying, detect objects in a Hubble Space Telescope (HST) image, measure their colors, identify the ones with the strongest colors, do a visual inspection of these objects, and inspect their spatial positions in the image.
+After this tutorial, you can also try the @ref{Detecting large extended 
targets} tutorial which goes into a little more detail on detecting very low 
surface brightness signal.
+
+During the tutorial, we will take many detours to explain, and practically 
demonstrate, the many capabilities of Gnuastro's programs.
+In the end you will see that the things you learned during this tutorial are 
much more generic than this particular problem and can be used in solving a 
wide variety of problems involving the analysis of data (images or tables).
+So please don't rush, and go through the steps patiently to optimally master 
Gnuastro.
 
 @cindex XDF survey
 @cindex eXtreme Deep Field (XDF) survey
-In this tutorial, we'll use the HST
-@url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field}
-dataset. Like almost all astronomical surveys, this dataset is free for
-download and usable by the public. You will need the following tools in
-this tutorial: Gnuastro, SAO DS9 @footnote{See @ref{SAO ds9}, available at
-@url{http://ds9.si.edu/site/Home.html}}, GNU
-Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most
-common implementation is GNU
-AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
-
-This tutorial was first prepared for the ``Exploring the Ultra-Low Surface
-Brightness Universe'' workshop (November 2017) at the ISSI in Bern,
-Switzerland. It was further extended in the ``4th Indo-French Astronomy
-School'' (July 2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA
-in Lyon, France. We are very grateful to the organizers of these workshops
-and the attendees for the very fruitful discussions and suggestions that
-made this tutorial possible.
+In this tutorial, we'll use the HST @url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field} dataset.
+Like almost all astronomical surveys, this dataset is free for download and 
usable by the public.
+You will need the following tools in this tutorial: Gnuastro, SAO DS9 
@footnote{See @ref{SAO ds9}, available at 
@url{http://ds9.si.edu/site/Home.html}}, GNU 
Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most common 
implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
+
+This tutorial was first prepared for the ``Exploring the Ultra-Low Surface 
Brightness Universe'' workshop (November 2017) at the ISSI in Bern, Switzerland.
+It was further extended in the ``4th Indo-French Astronomy School'' (July 
2018) organized by LIO, CRAL CNRS UMR5574, UCBL, and IUCAA in Lyon, France.
+We are very grateful to the organizers of these workshops and the attendees 
for the very fruitful discussions and suggestions that made this tutorial 
possible.
 
 @cartouche
 @noindent
-@strong{Write the example commands manually:} Try to type the example
-commands on your terminal manually and use the history feature of your
-command-line (by pressing the ``up'' button to retrieve previous
-commands). Don't simply copy and paste the commands shown here. This will
-help simulate future situations when you are processing your own datasets.
+@strong{Write the example commands manually:} Try to type the example commands 
on your terminal manually and use the history feature of your command-line (by 
pressing the ``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
 
@@ -2449,47 +1932,33 @@ help simulate future situations when you are processing 
your own datasets.
 
 @node Calling Gnuastro's programs, Accessing documentation, General program 
usage tutorial, General program usage tutorial
 @subsection Calling Gnuastro's programs
-A handy feature of Gnuastro is that all program names start with
-@code{ast}. This will allow your command-line processor to easily list and
-auto-complete Gnuastro's programs for you.  Try typing the following
-command (press @key{TAB} key when you see @code{<TAB>}) to see the list:
+A handy feature of Gnuastro is that all program names start with @code{ast}.
+This will allow your command-line processor to easily list and auto-complete 
Gnuastro's programs for you.
+Try typing the following command (press @key{TAB} key when you see 
@code{<TAB>}) to see the list:
 
 @example
 $ ast<TAB><TAB>
 @end example
 
 @noindent
-Any program that starts with @code{ast} (including all Gnuastro programs)
-will be shown. By choosing the subsequent characters of your desired
-program and pressing @key{<TAB><TAB>} again, the list will narrow down and
-the program name will auto-complete once your input characters are
-unambiguous. In short, you often don't need to type the full name of the
-program you want to run.
+Any program that starts with @code{ast} (including all Gnuastro programs) will 
be shown.
+By choosing the subsequent characters of your desired program and pressing 
@key{<TAB><TAB>} again, the list will narrow down and the program name will 
auto-complete once your input characters are unambiguous.
+In short, you often don't need to type the full name of the program you want 
to run.
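+For example, if NoiseChisel is the only installed program whose name starts with @code{astn} (an assumption that holds for a default Gnuastro installation with no other @code{astn...} programs), a single @key{TAB} is enough to complete its full name:
+
+@example
+$ astn<TAB>                    # Auto-completes to `astnoisechisel'.
+@end example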
 
 @node Accessing documentation, Setup and data download, Calling Gnuastro's 
programs, General program usage tutorial
 @subsection Accessing documentation
 
-Gnuastro contains a large number of programs and it is natural to forget
-the details of each program's options or inputs and outputs. Therefore,
-before starting the analysis steps of this tutorial, let's review how you
-can access this book to refresh your memory any time you want, without
-having to take your hands off the keyboard.
+Gnuastro contains a large number of programs and it is natural to forget the 
details of each program's options or inputs and outputs.
+Therefore, before starting the analysis steps of this tutorial, let's review 
how you can access this book to refresh your memory any time you want, without 
having to take your hands off the keyboard.
 
-When you install Gnuastro, this book is also installed on your system along
-with all the programs and libraries, so you don't need an internet
-connection to to access/read it. Also, by accessing this book as described
-below, you can be sure that it corresponds to your installed version of
-Gnuastro.
+When you install Gnuastro, this book is also installed on your system along with all the programs and libraries, so you don't need an internet connection to access/read it.
+Also, by accessing this book as described below, you can be sure that it 
corresponds to your installed version of Gnuastro.
 
 @cindex GNU Info
-GNU Info@footnote{GNU Info is already available on almost all Unix-like
-operating systems.} is the program in charge of displaying the manual on
-the command-line (for more, see @ref{Info}). To see this whole book on your
-command-line, please run the following command and press subsequent
-keys. Info has its own mini-environment, therefore we'll show the keys that
-must be pressed in the mini-environment after a @code{->} sign. You can
-also ignore anything after the @code{#} sign in the middle of the line,
-they are only for your information.
+GNU Info@footnote{GNU Info is already available on almost all Unix-like 
operating systems.} is the program in charge of displaying the manual on the 
command-line (for more, see @ref{Info}).
+To see this whole book on your command-line, please run the following command 
and press subsequent keys.
+Info has its own mini-environment, therefore we'll show the keys that must be 
pressed in the mini-environment after a @code{->} sign.
+You can also ignore anything after the @code{#} sign in the middle of the 
line, they are only for your information.
 
 @example
 $ info gnuastro                # Open the top of the manual.
@@ -2499,62 +1968,47 @@ $ info gnuastro                # Open the top of the 
manual.
 -> q                           # Quit Info, return to the command-line.
 @end example
 
-The thing that greatly simplifies navigation in Info is the links (regions
-with an underline). You can immediately go to the next link in the page
-with the @key{<TAB>} key and press @key{<ENTER>} on it to go into that part
-of the manual. Try the commands above again, but this time also use
-@key{<TAB>} to go to the links and press @key{<ENTER>} on them to go to the
-respective section of the book. Then follow a few more links and go deeper
-into the book. To return to the previous page, press @key{l} (small L). If
-you are searching for a specific phrase in the whole book (for example an
-option name), press @key{s} and type your search phrase and end it with an
-@key{<ENTER>}.
+The thing that greatly simplifies navigation in Info is the links (regions 
with an underline).
+You can immediately go to the next link in the page with the @key{<TAB>} key 
and press @key{<ENTER>} on it to go into that part of the manual.
+Try the commands above again, but this time also use @key{<TAB>} to go to the 
links and press @key{<ENTER>} on them to go to the respective section of the 
book.
+Then follow a few more links and go deeper into the book.
+To return to the previous page, press @key{l} (small L).
+If you are searching for a specific phrase in the whole book (for example an 
option name), press @key{s} and type your search phrase and end it with an 
@key{<ENTER>}.
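+For example, the key presses below (an illustrative search; the phrase is only an example) will take you to the first occurrence of ``point spread function'' in the book:
+
+@example
+$ info gnuastro                  # Open the top of the manual.
+-> s                             # Start a search.
+-> point spread function<ENTER>  # The phrase to search for.
+-> l                             # Return to the previous page.
+@end example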
 
-You don't need to start from the top of the manual every time. For example,
-to get to @ref{Invoking astnoisechisel}, run the following command. In
-general, all programs have such an ``Invoking ProgramName'' section in this
-book. These sections are specifically for the description of inputs,
-outputs and configuration options of each program. You can access them
-directly for each program by giving its executable name to Info.
+You don't need to start from the top of the manual every time.
+For example, to get to @ref{Invoking astnoisechisel}, run the following 
command.
+In general, all programs have such an ``Invoking ProgramName'' section in this 
book.
+These sections are specifically for the description of inputs, outputs and 
configuration options of each program.
+You can access them directly for each program by giving its executable name to 
Info.
 
 @example
 $ info astnoisechisel
 @end example
 
-The other sections don't have such shortcuts. To directly access them from
-the command-line, you need to tell Info to look into Gnuastro's manual,
-then look for the specific section (an unambiguous title is necessary). For
-example, if you only want to review/remember NoiseChisel's @ref{Detection
-options}), just run the following command. Note how case is irrelevant for
-Info when calling a title in this manner.
+The other sections don't have such shortcuts.
+To directly access them from the command-line, you need to tell Info to look 
into Gnuastro's manual, then look for the specific section (an unambiguous 
title is necessary).
+For example, if you only want to review/remember NoiseChisel's @ref{Detection options}, just run the following command.
+Note how case is irrelevant for Info when calling a title in this manner.
 
 @example
 $ info gnuastro "Detection options"
 @end example
 
-In general, Info is a powerful and convenient way to access this whole book
-with detailed information about the programs you are running. If you are
-not already familiar with it, please run the following command and just
-read along and do what it says to learn it. Don't stop until you feel
-sufficiently fluent in it. Please invest the half an hour's time necessary
-to start using Info comfortably. It will greatly improve your productivity
-and you will start reaping the rewards of this investment very soon.
+In general, Info is a powerful and convenient way to access this whole book 
with detailed information about the programs you are running.
+If you are not already familiar with it, please run the following command and 
just read along and do what it says to learn it.
+Don't stop until you feel sufficiently fluent in it.
+Please invest the half hour necessary to start using Info comfortably.
+It will greatly improve your productivity and you will start reaping the 
rewards of this investment very soon.
 
 @example
 $ info info
 @end example
 
-As a good scientist you need to feel comfortable to play with the
-features/options and avoid (be critical to) using default values as much as
-possible. On the other hand, our human memory is limited, so it is
-important to be able to easily access any part of this book fast and
-remember the option names, what they do and their acceptable values.
+As a good scientist you need to feel comfortable playing with the features/options and to avoid (be critical of) using default values as much as possible.
+On the other hand, our human memory is limited, so it is important to be able to quickly access any part of this book and remember the option names, what they do and their acceptable values.
 
-If you just want the option names and a short description, calling the
-program with the @option{--help} option might also be a good solution like
-the first example below. If you know a few characters of the option name,
-you can feed the output to @command{grep} like the second or third example
-commands.
+If you just want the option names and a short description, calling the program with the @option{--help} option might also be a good solution, like the first example below.
+If you know a few characters of the option name, you can feed the output to 
@command{grep} like the second or third example commands.
 
 @example
 $ astnoisechisel --help
@@ -2565,28 +2019,23 @@ $ astnoisechisel --help | grep check
 @node Setup and data download, Dataset inspection and cropping, Accessing 
documentation, General program usage tutorial
 @subsection Setup and data download
 
-The first step in the analysis of the tutorial is to download the necessary
-input datasets. First, to keep things clean, let's create a
-@file{gnuastro-tutorial} directory and continue all future steps in it:
+The first step in the analysis of the tutorial is to download the necessary 
input datasets.
+First, to keep things clean, let's create a @file{gnuastro-tutorial} directory 
and continue all future steps in it:
 
 @example
 $ mkdir gnuastro-tutorial
 $ cd gnuastro-tutorial
 @end example
 
-We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3,
-Wide Field Camera} dataset. If you already have them in another directory
-(for example @file{XDFDIR}, with the same FITS file names), you can set the
-@file{download} directory to be a symbolic link to @file{XDFDIR} with a
-command like this:
+We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide 
Field Camera} dataset.
+If you already have them in another directory (for example @file{XDFDIR}, with 
the same FITS file names), you can set the @file{download} directory to be a 
symbolic link to @file{XDFDIR} with a command like this:
 
 @example
 $ ln -s XDFDIR download
 @end example
 
 @noindent
-Otherwise, when the following images aren't already present on your system,
-you can make a @file{download} directory and download them there.
+Otherwise, when the following images aren't already present on your system, 
you can make a @file{download} directory and download them there.
 
 @example
 $ mkdir download
@@ -2598,15 +2047,11 @@ $ cd ..
 @end example
 
 @noindent
-In this tutorial, we'll just use these two filters. Later, you may need to
-download more filters. To do that, you can use the shell's @code{for} loop
-to download them all in series (one after the other@footnote{Note that you
-only have one port to the internet, so downloading in parallel will
-actually be slower than downloading in series.}) with one command like the
-one below for the WFC3 filters. Put this command instead of the two
-@code{wget} commands above. Recall that all the extra spaces, back-slashes
-(@code{\}), and new lines can be ignored if you are typing on the lines on
-the terminal.
+In this tutorial, we'll just use these two filters.
+Later, you may need to download more filters.
+To do that, you can use the shell's @code{for} loop to download them all in 
series (one after the other@footnote{Note that you only have one port to the 
internet, so downloading in parallel will actually be slower than downloading 
in series.}) with one command like the one below for the WFC3 filters.
+Put this command instead of the two @code{wget} commands above.
+Recall that all the extra spaces, back-slashes (@code{\}), and new lines can be ignored if you are typing the lines manually on the terminal.
 
 @example
 $ for f in f105w f125w f140w f160w; do                              \
@@ -2618,49 +2063,33 @@ $ for f in f105w f125w f140w f160w; do                  
            \
 @node Dataset inspection and cropping, Angular coverage on the sky, Setup and 
data download, General program usage tutorial
 @subsection Dataset inspection and cropping
 
-First, let's visually inspect the datasets we downloaded in @ref{Setup and
-data download}. Let's take F160W image as an example. Do the steps below
-with the other image(s) too (and later with any dataset that you want to
-work on). It is very important to get a good visual feeling of the dataset
-you intend to use. Also, note how SAO DS9 (used here for visual inspection
-of FITS images) doesn't follow the GNU style of options where ``long'' and
-``short'' options are preceded by @option{--} and @option{-} respectively
-(for example @option{--width} and @option{-w}, see @ref{Options}).
-
-Run the command below to see the F160W image with DS9. Ds9's
-@option{-zscale} scaling is good to visually highlight the low surface
-brightness regions, and as the name suggests, @option{-zoom to fit} will
-fit the whole dataset in the window. If the window is too small, expand it
-with your mouse, then press the ``zoom'' button on the top row of buttons
-above the image. Afterwards, in the bottom row of buttons, press ``zoom
-fit''. You can also zoom in and out by scrolling your mouse or the
-respective operation on your touch-pad when your cursor/pointer is over the
-image.
+First, let's visually inspect the datasets we downloaded in @ref{Setup and 
data download}.
+Let's take the F160W image as an example.
+Do the steps below with the other image(s) too (and later with any dataset 
that you want to work on).
+It is very important to get a good visual feeling of the dataset you intend to 
use.
+Also, note how SAO DS9 (used here for visual inspection of FITS images) 
doesn't follow the GNU style of options where ``long'' and ``short'' options 
are preceded by @option{--} and @option{-} respectively (for example 
@option{--width} and @option{-w}, see @ref{Options}).
+
+Run the command below to see the F160W image with DS9.
+DS9's @option{-zscale} scaling is good for visually highlighting the low surface brightness regions, and as the name suggests, @option{-zoom to fit} will fit the whole dataset in the window.
+If the window is too small, expand it with your mouse, then press the ``zoom'' 
button on the top row of buttons above the image.
+Afterwards, in the bottom row of buttons, press ``zoom fit''.
+You can also zoom in and out by scrolling your mouse or the respective 
operation on your touch-pad when your cursor/pointer is over the image.
 
 @example
 $ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits     \
       -zscale -zoom to fit
 @end example
 
-As you hover your mouse over the image, notice how the ``Value'' and
-positional fields on the top of the ds9 window get updated. The first thing
-you might notice is that when you hover the mouse over the regions with no
-data, they have a value of zero. The next thing might be that the dataset
-actually has two ``depth''s (see @ref{Quantifying measurement
-limits}). Recall that this is a combined/reduced image of many exposures,
-and the parts that have more exposures are deeper. In particular, the
-exposure time of the deep inner region is larger than 4 times of the outer
-(more shallower) parts.
+As you hover your mouse over the image, notice how the ``Value'' and 
positional fields on the top of the ds9 window get updated.
+The first thing you might notice is that when you hover the mouse over the 
regions with no data, they have a value of zero.
+The next thing might be that the dataset actually has two ``depth''s (see 
@ref{Quantifying measurement limits}).
+Recall that this is a combined/reduced image of many exposures, and the parts 
that have more exposures are deeper.
+In particular, the exposure time of the deep inner region is more than 4 times that of the outer (shallower) parts.
 
-To simplify the analysis in this tutorial, we'll only be working on the
-deep field, so let's crop it out of the full dataset. Fortunately the XDF
-survey webpage (above) contains the vertices of the deep flat WFC3-IR
-field. With Gnuastro's Crop program@footnote{To learn more about the crop
-program see @ref{Crop}.}, you can use those vertices to cutout this deep
-region from the larger image. But before that, to keep things organized,
-let's make a directory called @file{flat-ir} and keep the flat
-(single-depth) regions in that directory (with a `@file{xdf-}' suffix for a
-shorter and easier filename).
+To simplify the analysis in this tutorial, we'll only be working on the deep 
field, so let's crop it out of the full dataset.
+Fortunately the XDF survey webpage (above) contains the vertices of the deep 
flat WFC3-IR field.
+With Gnuastro's Crop program@footnote{To learn more about the Crop program see @ref{Crop}.}, you can use those vertices to cut out this deep region from the larger image.
+But before that, to keep things organized, let's make a directory called @file{flat-ir} and keep the flat (single-depth) regions in that directory (with an `@file{xdf-}' prefix for a shorter and easier filename).
 
 @example
 $ mkdir flat-ir
@@ -2674,15 +2103,11 @@ $ astcrop --mode=wcs -h0 
--output=flat-ir/xdf-f160w.fits              \
           download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
 @end example
 
-The only thing varying in the two calls to Gnuastro's Crop program is the
-filter name. Therefore, to simplify the command, and later allow work on
-more filters, we can use the shell's @code{for} loop. Notice how the two
-places where the filter names (@file{f105w} and @file{f160w}) are used
-above have been replaced with @file{$f} (the shell variable that @code{for}
-will update in every loop) below. In such cases, you should generally avoid
-repeating a command manually and use loops like below. To generalize this
-for more filters later, you can simply add the other filter names in the
-first line before the semi-colon (@code{;}).
+The only thing varying in the two calls to Gnuastro's Crop program is the 
filter name.
+Therefore, to simplify the command, and later allow work on more filters, we 
can use the shell's @code{for} loop.
+Notice how the two places where the filter names (@file{f105w} and 
@file{f160w}) are used above have been replaced with @file{$f} (the shell 
variable that @code{for} will update in every loop) below.
+In such cases, you should generally avoid repeating a command manually and use 
loops like below.
+To generalize this for more filters later, you can simply add the other filter 
names in the first line before the semi-colon (@code{;}).
 
 @example
 $ rm flat-ir/*.fits
@@ -2694,22 +2119,14 @@ $ for f in f105w f160w; do                              
              \
   done
 @end example
 
-Please open these images and inspect them with the same @command{ds9}
-command you used above. You will see how it is nicely flat now and doesn't
-have varying depths. Another important result of this crop is that regions
-with no data now have a NaN (Not-a-Number, or a blank value) value, not
-zero. Zero is a number, and is thus meaningful, especially when you later
-want to NoiseChisel@footnote{As you will see below, unlike most other
-detection algorithms, NoiseChisel detects the objects from their faintest
-parts, it doesn't start with their high signal-to-noise ratio peaks. Since
-the Sky is already subtracted in many images and noise fluctuates around
-zero, zero is commonly higher than the initial threshold applied. Therefore
-not ignoring zero-valued pixels in this image, will cause them to part of
-the detections!}. Generally, when you want to ignore some pixels in a
-dataset, and avoid higher-level ambiguities or complications, it is always
-best to give them blank values (not zero, or some other absurdly large or
-small number). Gnuastro has the Arithmetic program for such cases, and
-we'll introduce it during this tutorial.
+Please open these images and inspect them with the same @command{ds9} command 
you used above.
+You will see how they are nicely flat now and don't have varying depths.
+Another important result of this crop is that regions with no data now have a 
NaN (Not-a-Number, or a blank value) value, not zero.
+Zero is a number, and is thus meaningful, especially when you later want to run NoiseChisel@footnote{As you will see below, unlike most other detection algorithms, NoiseChisel detects the objects from their faintest parts: it doesn't start with their high signal-to-noise ratio peaks.
+Since the Sky is already subtracted in many images and noise fluctuates around zero, zero is commonly higher than the initial threshold applied.
+Therefore not ignoring zero-valued pixels in this image will cause them to be part of the detections!}.
+Generally, when you want to ignore some pixels in a dataset, and avoid 
higher-level ambiguities or complications, it is always best to give them blank 
values (not zero, or some other absurdly large or small number).
+Gnuastro has the Arithmetic program for such cases, and we'll introduce it 
during this tutorial.
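+
+As a quick preview of Arithmetic's reverse-Polish syntax (just a sketch: @file{image.fits} is a hypothetical single-image file, and @option{-g1} assumes its data are in HDU 1), a command like the one below would replace all zero-valued pixels with blank/NaN values:
+
+@example
+$ astarithmetic image.fits image.fits 0 eq nan where -g1    \
+                --output=masked.fits
+@end example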
 
 @node Angular coverage on the sky, Cosmological coverage, Dataset inspection 
and cropping, General program usage tutorial
 @subsection Angular coverage on the sky
@@ -2717,44 +2134,29 @@ we'll introduce it during this tutorial.
 @cindex @code{CDELT}
 @cindex Coordinate scales
 @cindex Scales, coordinate
-This is the deepest image we currently have of the sky. The first thing
-that comes to mind may be this: ``How large is this field on the
-sky?''. The FITS world coordinate system (WCS) meta data standard contains
-the key to answering this question: the @code{CDELT} keyword@footnote{In
-the FITS standard, the @code{CDELT} keywords (@code{CDELT1} and
-@code{CDELT2} in a 2D image) specify the scales of each coordinate. In the
-case of this image it is in units of degrees-per-pixel. See Section 8 of
-the @url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf,
-FITS standard} for more. In short, with the @code{CDELT} convention,
-rotation (@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are
-separated. In the FITS standard the @code{CDELT} keywords are
-optional. When @code{CDELT} keywords aren't present, the @code{PC} matrix
-is assumed to contain @emph{both} the coordinate rotation and scales. Note
-that not all FITS writers use the @code{CDELT} convention. So you might not
-find the @code{CDELT} keywords in the WCS meta data of some FITS
-files. However, all Gnuastro programs (which use the default FITS keyword
-writing format of WCSLIB) write their output WCS with the @code{CDELT}
-convention, even if the input doesn't have it. If your dataset doesn't use
-the @code{CDELT} convention, you can feed it to any (simple) Gnuastro
-program (for example Arithmetic) and the output will have the @code{CDELT}
-keyword.}.
-
-With the commands below, we'll use @code{CDELT} (along with the image size)
-to find the answer. The lines starting with @code{##} are just comments for
-you to read and understand each command. Don't type them on the
-terminal. The commands are intentionally repetitive in some places to
-better understand each step and also to demonstrate the beauty of
-command-line features like history, variables, pipes and loops (which you
-will commonly use as you master the command-line).
+This is the deepest image we currently have of the sky.
+The first thing that comes to mind may be this: ``How large is this field on 
the sky?''.
+The FITS world coordinate system (WCS) meta data standard contains the key to 
answering this question: the @code{CDELT} keyword@footnote{In the FITS 
standard, the @code{CDELT} keywords (@code{CDELT1} and @code{CDELT2} in a 2D 
image) specify the scales of each coordinate.
+In the case of this image it is in units of degrees-per-pixel.
+See Section 8 of the 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS 
standard} for more.
+In short, with the @code{CDELT} convention, rotation (@code{PC} or @code{CD} 
keywords) and scales (@code{CDELT}) are separated.
+In the FITS standard the @code{CDELT} keywords are optional.
+When @code{CDELT} keywords aren't present, the @code{PC} matrix is assumed to 
contain @emph{both} the coordinate rotation and scales.
+Note that not all FITS writers use the @code{CDELT} convention.
+So you might not find the @code{CDELT} keywords in the WCS meta data of some 
FITS files.
+However, all Gnuastro programs (which use the default FITS keyword writing 
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even 
if the input doesn't have it.
+If your dataset doesn't use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example Arithmetic) and the output will have 
the @code{CDELT} keyword.}.
+
+With the commands below, we'll use @code{CDELT} (along with the image size) to 
find the answer.
+The lines starting with @code{##} are just comments for you to read and 
understand each command.
+Don't type them on the terminal.
+The commands are intentionally repetitive in some places, so that you can better understand each step, and also to demonstrate the beauty of command-line features like history, variables, pipes and loops (which you will commonly use as you master the command-line).
 
 @cartouche
 @noindent
-@strong{Use shell history:} Don't forget to make effective use of your
-shell's history: you don't have to re-type previous command to add
-something to them. This is especially convenient when you just want to make
-a small change to your previous command. Press the ``up'' key on your
-keyboard (possibly multiple times) to see your previous command(s) and
-modify them accordingly.
+@strong{Use shell history:} Don't forget to make effective use of your shell's history: you don't have to re-type previous commands to add something to them.
+This is especially convenient when you just want to make a small change to 
your previous command.
+Press the ``up'' key on your keyboard (possibly multiple times) to see your 
previous command(s) and modify them accordingly.
 @end cartouche
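+
+For example, you can print an extension's header keywords with Gnuastro's Fits program and filter out the @code{CDELT} values with Grep (a small sketch, assuming the cropped image is in HDU 1):
+
+@example
+$ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT
+@end example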
 
 @example
@@ -2800,35 +2202,26 @@ $ echo $n $r
 $ echo $n $r | awk '@{print $1 * ($2^2) * 3600@}'
 @end example
 
-The output of the last command (area of this field) is 4.03817 (or
-approximately 4.04) arc-minutes squared. Just for comparison, this is
-roughly 175 times smaller than the average moon's angular area (with a
-diameter of 30arc-minutes or half a degree).
+The output of the last command (area of this field) is 4.03817 (or 
approximately 4.04) arc-minutes squared.
+Just for comparison, this is roughly 175 times smaller than the Moon's average angular area (with a diameter of 30 arc-minutes or half a degree).
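+
+If you would like to check that factor yourself, you can let AWK do the arithmetic (the Moon's angular area is that of a circle with a 15 arc-minute radius, and 4.04 is the field's area from above):
+
+@example
+$ echo 4.04 | awk '@{print 3.14159 * 15^2 / $1@}'
+@end example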
 
 @cindex GNU AWK
 @cartouche
 @noindent
-@strong{AWK for table/value processing:} As you saw above AWK is a powerful
-and simple tool for text processing. You will see it often in shell
-scripts. GNU AWK (the most common implementation) comes with a free and
-wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same
-format as this book which will allow you to master it nicely. Just like
-this manual, you can also access GNU AWK's manual on the command-line
-whenever necessary without taking your hands off the keyboard. Just run
-@code{info awk}.
+@strong{AWK for table/value processing:} As you saw above, AWK is a powerful and simple tool for text processing.
+You will see it often in shell scripts.
+GNU AWK (the most common implementation) comes with a free and wonderful @url{https://www.gnu.org/software/gawk/manual/, book} in the same format as this book, which will allow you to master it nicely.
+Just like this manual, you can also access GNU AWK's manual on the 
command-line whenever necessary without taking your hands off the keyboard.
+Just run @code{info awk}.
 @end cartouche
 
 
 @node Cosmological coverage, Building custom programs with the library, 
Angular coverage on the sky, General program usage tutorial
 @subsection Cosmological coverage
-Having found the angular coverage of the dataset in @ref{Angular coverage
-on the sky}, we can now use Gnuastro to answer a more physically motivated
-question: ``How large is this area at different redshifts?''. To get a
-feeling of the tangential area that this field covers at redshift 2, you
-can use Gnuastro's CosmicCalcular program (@ref{CosmicCalculator}). In
-particular, you need the tangential distance covered by 1 arc-second as raw
-output. Combined with the field's area that was measured before, we can
-calculate the tangential distance in Mega Parsecs squared (@mymath{Mpc^2}).
+Having found the angular coverage of the dataset in @ref{Angular coverage on 
the sky}, we can now use Gnuastro to answer a more physically motivated 
question: ``How large is this area at different redshifts?''.
+To get a feeling of the tangential area that this field covers at redshift 2, you can use Gnuastro's CosmicCalculator program (@ref{CosmicCalculator}).
+In particular, you need the tangential distance covered by 1 arc-second as raw output.
+Combined with the field's area that was measured before, we can calculate the tangential area covered in mega-parsecs squared (@mymath{Mpc^2}).
 
 @example
 ## Print general cosmological properties at redshift 2 (for example).
@@ -2864,9 +2257,8 @@ $ echo $k $a | awk '@{print $1 * $2 / 1e6@}'
 @end example
 
 @noindent
-At redshift 2, this field therefore covers approximately 1.07
-@mymath{Mpc^2}. If you would like to see how this tangential area changes
-with redshift, you can use a shell loop like below.
+At redshift 2, this field therefore covers approximately 1.07 @mymath{Mpc^2}.
+If you would like to see how this tangential area changes with redshift, you 
can use a shell loop like below.
 
 @example
 $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do        \
@@ -2876,11 +2268,9 @@ $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do   
     \
 @end example
 
 @noindent
-Fortunately, the shell has a useful tool/program to print a sequence of
-numbers that is nicely called @code{seq}. You can use it instead of typing
-all the different redshifts in this example. For example the loop below
-will calculate and print the tangential coverage of this field across a
-larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
+Fortunately, the shell has a useful tool/program for printing a sequence of numbers, which is aptly named @code{seq}.
+You can use it instead of typing all the different redshifts in this example.
+For example, the loop below will calculate and print the tangential coverage of this field across a larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
 
 @example
 $ for z in $(seq 0.1 0.1 5); do                                  \
@@ -2892,30 +2282,18 @@ $ for z in $(seq 0.1 0.1 5); do                         
         \
 
 @node Building custom programs with the library, Option management and 
configuration files, Cosmological coverage, General program usage tutorial
 @subsection Building custom programs with the library
-In @ref{Cosmological coverage}, we repeated a certain calculation/output of
-a program multiple times using the shell's @code{for} loop. This simple way
-repeating a calculation is great when it is only necessary once. However,
-if you commonly need this calculation and possibly for a larger number of
-redshifts at higher precision, the command above can be slow (try it out to
-see).
-
-This slowness of the repeated calls to a generic program (like
-CosmicCalculator), is because it can have a lot of overhead on each
-call. To be generic and easy to operate, it has to parse the command-line
-and all configuration files (see @ref{Option management and configuration
-files}) which contain human-readable characters and need a lot of
-pre-processing to be ready for processing by the computer. Afterwards,
-CosmicCalculator has to check the sanity of its inputs and check which of
-its many options you have asked for. All the this pre-processing takes as
-much time as the high-level calculation you are requesting, and it has to
-re-do all of these for every redshift in your loop.
-
-To greatly speed up the processing, you can directly access the core
-work-horse of CosmicCalculator without all that overhead by designing your
-custom program for this job. Using Gnuastro's library, you can write your
-own tiny program particularly designed for this exact calculation (and
-nothing else!). To do that, copy and paste the following C program in a
-file called @file{myprogram.c}.
+In @ref{Cosmological coverage}, we repeated a certain calculation/output of a 
program multiple times using the shell's @code{for} loop.
+This simple way of repeating a calculation is great when it is only necessary once.
+However, if you commonly need this calculation, possibly for a larger number of redshifts and at higher precision, the command above can be slow (try it out to see).
+
+This slowness of the repeated calls to a generic program (like CosmicCalculator) is because it can have a lot of overhead on each call.
+To be generic and easy to operate, it has to parse the command-line and all 
configuration files (see @ref{Option management and configuration files}) which 
contain human-readable characters and need a lot of pre-processing to be ready 
for processing by the computer.
+Afterwards, CosmicCalculator has to check the sanity of its inputs and check 
which of its many options you have asked for.
+All this pre-processing takes as much time as the high-level calculation you are requesting, and it has to be re-done for every redshift in your loop.
+
+To greatly speed up the processing, you can directly access the core 
work-horse of CosmicCalculator without all that overhead by designing your 
custom program for this job.
+Using Gnuastro's library, you can write your own tiny program particularly 
designed for this exact calculation (and nothing else!).
+To do that, copy and paste the following C program in a file called 
@file{myprogram.c}.
 
 @example
 #include <math.h>
@@ -2959,46 +2337,31 @@ $ astbuildprog myprogram.c
 @end example
 
 @noindent
-In the command above, you used Gnuastro's BuildProgram program. Its job is
-to greatly simplify the compilation, linking and running of simple C
-programs that use Gnuastro's library (like this one). BuildProgram is
-designed to manage Gnuastro's dependencies, compile and link your custom
-program and then run it.
+In the command above, you used Gnuastro's BuildProgram program.
+Its job is to greatly simplify the compilation, linking and running of simple 
C programs that use Gnuastro's library (like this one).
+BuildProgram is designed to manage Gnuastro's dependencies, compile and link 
your custom program and then run it.
 
-Did you notice how your custom program was much faster than the repeated
-calls to CosmicCalculator in the previous section? You might have noticed
-that a new file called @file{myprogram} is also created in the
-directory. This is the compiled program that was created and run by the
-command above (its in binary machine code format, not human-readable any
-more). You can run it again to get the same results with a command like
-this:
+Did you notice how your custom program was much faster than the repeated calls to CosmicCalculator in the previous section?
+You might have noticed that a new file called @file{myprogram} is also created in the directory.
+This is the compiled program that was created and run by the command above (it's in binary machine code format, not human-readable any more).
+You can run it again to get the same results with a command like this:
 
 @example
 $ ./myprogram
 @end example
 
-The efficiency of your custom @file{myprogram} compared to repeated calls
-to CosmicCalculator is because in the latter, the requested processing is
-comparable to the necessary overheads. For other programs that take large
-input datasets and do complicated processing on them, the overhead is
-usually negligible compared to the processing. In such cases, the libraries
-are only useful if you want a different/new processing compared to the
-functionalities in Gnuastro's existing programs.
+Your custom @file{myprogram} is more efficient than repeated calls to CosmicCalculator because in the latter, the requested processing is comparable to the necessary overheads.
+For other programs that take large input datasets and do complicated 
processing on them, the overhead is usually negligible compared to the 
processing.
+In such cases, the libraries are only useful if you want a different/new 
processing compared to the functionalities in Gnuastro's existing programs.
 
-Gnuastro has a large library which is used extensively by all the
-programs. In other words, the library is like the skeleton of Gnuastro. For
-the full list of available functions classified by context, please see
-@ref{Gnuastro library}. Gnuastro's library and BuildProgram are created to
-make it easy for you to use these powerful features as you like. This gives
-you a high level of creativity, while also providing efficiency and
-robustness. Several other complete working examples (involving images and
-tables) of Gnuastro's libraries can be see in @ref{Library demo
-programs}.
+Gnuastro has a large library which is used extensively by all the programs.
+In other words, the library is like the skeleton of Gnuastro.
+For the full list of available functions classified by context, please see 
@ref{Gnuastro library}.
+Gnuastro's library and BuildProgram are created to make it easy for you to use 
these powerful features as you like.
+This gives you a high level of creativity, while also providing efficiency and 
robustness.
+Several other complete working examples (involving images and tables) of Gnuastro's libraries can be seen in @ref{Library demo programs}.
 
-But for this tutorial, let's stop discussing the libraries at this point in
-and get back to Gnuastro's already built programs which don't need any
-programming. But before continuing, let's clean up the files we don't need
-any more:
+But for this tutorial, let's stop discussing the libraries at this point and get back to Gnuastro's already built programs, which don't need any programming.
+Before continuing, let's clean up the files we don't need any more:
 
 @example
 $ rm myprogram*
@@ -3007,61 +2370,43 @@ $ rm myprogram*
 
 @node Option management and configuration files, Warping to a new pixel grid, 
Building custom programs with the library, General program usage tutorial
 @subsection Option management and configuration files
-None of Gnuastro's programs keep a default value internally within their
-code. However, when you ran CosmicCalculator only with the @option{-z2}
-option (not specifying the cosmological parameters) in @ref{Cosmological
-coverage}, it completed its processing and printed results. Where did the
-necessary cosmological parameters (like the matter density, etc) that
-are necessary for its calculations come from? Fast reply: the values come
-from a configuration file (see @ref{Configuration file precedence}).
-
-CosmicCalculator is a small program with a limited set of
-parameters/options. Therefore, let's use it to discuss configuration files
-in Gnuastro (for more, you can always see @ref{Configuration
-files}). Configuration files are an important part of all Gnuastro's
-programs, especially the ones with a large number of options, so its
-important to understand this part well .
-
-Once you get comfortable with configuration files here, you can make good
-use of them in all Gnuastro programs (for example, NoiseChisel). For
-example, to do optimal detection on various datasets, you can have
-configuration files for different noise properties. The configuration of
-each program (besides its version) is vital for the reproducibility of your
-results, so it is important to manage them properly.
-
-As we saw above, the full list of the options in all Gnuastro programs can
-be seen with the @option{--help} option. Try calling it with
-CosmicCalculator as shown below. Note how options are grouped by context to
-make it easier to find your desired option. However, in each group, options
-are ordered alphabetically.
+None of Gnuastro's programs keep a default value internally within their code.
+However, when you ran CosmicCalculator only with the @option{-z2} option (not 
specifying the cosmological parameters) in @ref{Cosmological coverage}, it 
completed its processing and printed results.
+Where did the cosmological parameters (like the matter density, etc) that are necessary for its calculations come from?
+Fast reply: the values come from a configuration file (see @ref{Configuration file precedence}).
+
+CosmicCalculator is a small program with a limited set of parameters/options.
+Therefore, let's use it to discuss configuration files in Gnuastro (for more, 
you can always see @ref{Configuration files}).
+Configuration files are an important part of all Gnuastro's programs, especially the ones with a large number of options, so it's important to understand this part well.
+
+Once you get comfortable with configuration files here, you can make good use 
of them in all Gnuastro programs (for example, NoiseChisel).
+For example, to do optimal detection on various datasets, you can have 
configuration files for different noise properties.
+The configuration of each program (besides its version) is vital for the 
reproducibility of your results, so it is important to manage them properly.
+
+As we saw above, the full list of the options in all Gnuastro programs can be 
seen with the @option{--help} option.
+Try calling it with CosmicCalculator as shown below.
+Note how options are grouped by context to make it easier to find your desired 
option.
+Within each group, options are ordered alphabetically.
 
 @example
 $ astcosmiccal --help
 @end example
 
 @noindent
-The options that need a value have an @key{=} sign after their long version
-and @code{FLT}, @code{INT} or @code{STR} for floating point numbers,
-integer numbers, and strings (filenames for example) respectively. All
-options have a long format and some have a short format (a single
-character), for more see @ref{Options}.
+The options that need a value have an @key{=} sign after their long version and @code{FLT}, @code{INT} or @code{STR} for floating point numbers, integer numbers, and strings (filenames for example) respectively.
+All options have a long format and some have a short format (a single character); for more, see @ref{Options}.
 
-When you are using a program, it is often necessary to check the value the
-option has just before the program starts its processing. In other words,
-after it has parsed the command-line options and all configuration
-files. You can see the values of all options that need one with the
-@option{--printparams} or @code{-P} option. @option{--printparams} is
-common to all programs (see @ref{Common options}). In the command below,
-try replacing @code{-P} with @option{--printparams} to see how both do the
-same operation.
+When you are using a program, it is often necessary to check the value the 
option has just before the program starts its processing.
+In other words, after it has parsed the command-line options and all 
configuration files.
+You can see the values of all options that need one with the 
@option{--printparams} or @code{-P} option.
+@option{--printparams} is common to all programs (see @ref{Common options}).
+In the command below, try replacing @code{-P} with @option{--printparams} to 
see how both do the same operation.
 
 @example
 $ astcosmiccal -P
 @end example
 
-Let's say you want a different Hubble constant. Try running the following
-command (just adding @option{--H0=70} after the command above) to see how
-the Hubble constant in the output of the command above has changed.
+Let's say you want a different Hubble constant.
+Try running the following command (just adding @option{--H0=70} to the previous command) to see how the Hubble constant in the output has changed.
 
 @example
 $ astcosmiccal -P --H0=70
@@ -3075,12 +2420,9 @@ calculations with the new cosmology (or configuration).
 $ astcosmiccal --H0=70 -z2
 @end example
 
-From the output of the @code{--help} option, note how the option for Hubble
-constant has both short (@code{-H}) and long (@code{--H0}) formats. One
-final note is that the equal (@key{=}) sign is not mandatory. In the short
-format, the value can stick to the actual option (the short option name is
-just one character after-all, thus easily identifiable) and in the long
-format, a white-space character is also enough.
+From the output of the @code{--help} option, note how the option for the Hubble constant has both short (@code{-H}) and long (@code{--H0}) formats.
+One final note is that the equal (@key{=}) sign is not mandatory.
+In the short format, the value can stick to the actual option (the short option name is just one character after all, thus easily identifiable) and in the long format, a white-space character is also enough.
 
 @example
 $ astcosmiccal -H70    -z2
@@ -3088,35 +2430,28 @@ $ astcosmiccal --H0 70 -z2 --arcsectandist
 @end example
 
 @noindent
-When an option doesn't need a value, and has a short format (like
-@option{--arcsectandist}), you can easily append it @emph{before} other
-short options. So the last command above can also be written as:
+When an option doesn't need a value, and has a short format (like 
@option{--arcsectandist}), you can easily append it @emph{before} other short 
options.
+So the last command above can also be written as:
 
 @example
 $ astcosmiccal --H0 70 -sz2
 @end example
 
-Let's assume that in one project, you want to only use rounded cosmological
-parameters (H0 of 70km/s/Mpc and matter density of 0.3). You should
-therefore run CosmicCalculator like this:
+Let's assume that in one project, you want to only use rounded cosmological parameters (an H0 of 70 km/s/Mpc and a matter density of 0.3).
+You should therefore run CosmicCalculator like this:
 
 @example
 $ astcosmiccal --H0=70 --olambda=0.7 --omatter=0.3 -z2
 @end example
 
-But having to type these extra options every time you run CosmicCalculator
-will be prone to errors (typos in particular), frustrating and
-slow. Therefore in Gnuastro, you can put all the options and their values
-in a ``Configuration file'' and tell the programs to read the option values
-from there.
+But having to type these extra options every time you run CosmicCalculator 
will be prone to errors (typos in particular), frustrating and slow.
+Therefore in Gnuastro, you can put all the options and their values in a 
``Configuration file'' and tell the programs to read the option values from 
there.
 
-Let's create a configuration file... With your favorite text editor, make a
-file named @file{my-cosmology.conf} (or @file{my-cosmology.txt}, the suffix
-doesn't matter, but a more descriptive suffix like @file{.conf} is
-recommended). Then put the following lines inside of it. One space between
-the option value and name is enough, the values are just under each other
-to help in readability. Also note that you can only use long option names
-in configuration files.
+Let's create a configuration file...
+With your favorite text editor, make a file named @file{my-cosmology.conf} (or 
@file{my-cosmology.txt}, the suffix doesn't matter, but a more descriptive 
suffix like @file{.conf} is recommended).
+Then put the following lines inside of it.
+One space between the option name and value is enough; the values are simply aligned under each other to help readability.
+Also note that you can only use long option names in configuration files.
 
 @example
 H0       70
@@ -3125,31 +2460,23 @@ omatter  0.3
 @end example
 
 @noindent
-You can now tell CosmicCalculator to read this file for option values
-immediately using the @option{--config} option as shown below. Do you see
-how the output of the following command corresponds to the option values in
-@file{my-cosmology.conf}, and is therefore identical to the previous
-command?
+You can now tell CosmicCalculator to read this file for option values 
immediately using the @option{--config} option as shown below.
+Do you see how the output of the following command corresponds to the option 
values in @file{my-cosmology.conf}, and is therefore identical to the previous 
command?
 
 @example
 $ astcosmiccal --config=my-cosmology.conf -z2
 @end example
 
-But still, having to type @option{--config=my-cosmology.conf} every time is
-annoying, isn't it? If you need this cosmology every time you are working
-in a specific directory, you can use Gnuastro's default configuration file
-names and avoid having to type it manually.
+But still, having to type @option{--config=my-cosmology.conf} every time is 
annoying, isn't it?
+If you need this cosmology every time you are working in a specific directory, 
you can use Gnuastro's default configuration file names and avoid having to 
type it manually.
 
-The default configuration files (that are checked if they exist) must be
-placed in the hidden @file{.gnuastro} sub-directory (in the same directory
-you are running the program). Their file name (within @file{.gnuastro})
-must also be the same as the program's executable name. So in the case of
-CosmicCalculator, the default configuration file in a given directory is
-@file{.gnuastro/astcosmiccal.conf}.
+The default configuration files (that are checked if they exist) must be 
placed in the hidden @file{.gnuastro} sub-directory (in the same directory you 
are running the program).
+Their file name (within @file{.gnuastro}) must also be the same as the 
program's executable name.
+So in the case of CosmicCalculator, the default configuration file in a given 
directory is @file{.gnuastro/astcosmiccal.conf}.
 
-Let's do this. We'll first make a directory for our custom cosmology, then
-build a @file{.gnuastro} within it. Finally, we'll copy the custom
-configuration file there:
+Let's do this.
+We'll first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
+Finally, we'll copy the custom configuration file there:
 
 @example
 $ mkdir my-cosmology
@@ -3157,9 +2484,7 @@ $ mkdir my-cosmology/.gnuastro
 $ mv my-cosmology.conf my-cosmology/.gnuastro/astcosmiccal.conf
 @end example
 
-Once you run CosmicCalculator within @file{my-cosmology} (as shown below),
-you will see how your custom cosmology has been implemented without having
-to type anything extra on the command-line.
+Once you run CosmicCalculator within @file{my-cosmology} (as shown below), you 
will see how your custom cosmology has been implemented without having to type 
anything extra on the command-line.
 
 @example
 $ cd my-cosmology
@@ -3167,11 +2492,9 @@ $ astcosmiccal -P
 $ cd ..
 @end example
 
-To further simplify the process, you can use the @option{--setdirconf}
-option. If you are already in your desired working directory, calling this
-option with the others will automatically write the final values (along
-with descriptions) in @file{.gnuastro/astcosmiccal.conf}. For example try
-the commands below:
+To further simplify the process, you can use the @option{--setdirconf} option.
+If you are already in your desired working directory, calling this option with 
the others will automatically write the final values (along with descriptions) 
in @file{.gnuastro/astcosmiccal.conf}.
+For example try the commands below:
 
 @example
 $ mkdir my-cosmology2
@@ -3182,18 +2505,14 @@ $ astcosmiccal -P
 $ cd ..
 @end example
 
-Gnuastro's programs also have default configuration files for a specific
-user (when run in any directory). This allows you to set a special behavior
-every time a program is run by a specific user. Only the directory and
-filename differ from the above, the rest of the process is similar to
-before. Finally, there are also system-wide configuration files that can be
-used to define the option values for all users on a system. See
-@ref{Configuration file precedence} for a more detailed discussion.
+Gnuastro's programs also have default configuration files for a specific user 
(when run in any directory).
+This allows you to set a special behavior every time a program is run by a 
specific user.
+Only the directory and filename differ from the above, the rest of the process 
is similar to before.
+Finally, there are also system-wide configuration files that can be used to 
define the option values for all users on a system.
+See @ref{Configuration file precedence} for a more detailed discussion.
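+
+For example (just a sketch; @option{--setusrconf} is the user-wide counterpart of @option{--setdirconf}), you could save the rounded cosmology from before as your personal default with:
+
+@example
+$ astcosmiccal --setusrconf --H0=70 --olambda=0.7 --omatter=0.3
+@end example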
 
-We'll stop the discussion on configuration files here, but you can always
-read about them in @ref{Configuration files}. Before continuing the
-tutorial, let's delete the two extra directories that we don't need any
-more:
+We'll stop the discussion on configuration files here, but you can always read 
about them in @ref{Configuration files}.
+Before continuing the tutorial, let's delete the two extra directories that we 
don't need any more:
 
 @example
 $ rm -rf my-cosmology*
@@ -3202,38 +2521,30 @@ $ rm -rf my-cosmology*
 
 @node Warping to a new pixel grid, Multiextension FITS files NoiseChisel's 
output, Option management and configuration files, General program usage 
tutorial
 @subsection Warping to a new pixel grid
-We are now ready to start processing the downloaded images. The XDF
-datasets we are using here are already aligned to the same pixel
-grid. However, warping to a different/matched pixel grid is commonly needed
-before higher-level analysis when you are using datasets from different
-instruments. So let's have a look at Gnuastro's features warping features
-here.
+We are now ready to start processing the downloaded images.
+The XDF datasets we are using here are already aligned to the same pixel grid.
+However, warping to a different/matched pixel grid is commonly needed before 
higher-level analysis when you are using datasets from different instruments.
+So let's have a look at Gnuastro's warping features here.
 
-Gnuastro's Warp program should be used for warping the pixel-grid (see
-@ref{Warp}). For example, try rotating one of the images by 20 degrees:
+Gnuastro's Warp program should be used for warping the pixel-grid (see 
@ref{Warp}).
+For example, try rotating one of the images by 20 degrees:
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --rotate=20
 @end example
 
 @noindent
-Open the output (@file{xdf-f160w_rotated.fits}) and see how it is
-rotated. If your final image is already aligned with RA and Dec, you can
-simply use the @option{--align} option and let Warp calculate the necessary
-rotation and apply it. For example, try aligning the rotated image back to
-the standard orientation (just note that because of the two rotations, the
-NaN parts of the image are larger now):
+Open the output (@file{xdf-f160w_rotated.fits}) and see how it is rotated.
+If your final image is already aligned with RA and Dec, you can simply use the 
@option{--align} option and let Warp calculate the necessary rotation and apply 
it.
+For example, try aligning the rotated image back to the standard orientation 
(just note that because of the two rotations, the NaN parts of the image are 
larger now):
 
 @example
 $ astwarp xdf-f160w_rotated.fits --align
 @end example
 
-Warp can generally be used for many kinds of pixel grid manipulation
-(warping), not just rotations. For example the outputs of the commands
-below will respectively have larger pixels (new resolution being one
-quarter the original resolution), get shifted by 2.8 (by sub-pixel), get a
-shear of 2, and be tilted (projected). Run each of them and open the output
-file to see the effect, they will become handy for you in the future.
+Warp can generally be used for many kinds of pixel grid manipulation 
(warping), not just rotations.
+For example, the outputs of the commands below will respectively have larger pixels (the new resolution being one quarter of the original), be shifted by 2.8 pixels (a sub-pixel shift), get a shear of 2, and be tilted (projected).
+Run each of them and open the output file to see the effect; they will come in handy for you in the future.
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --scale=0.25
@@ -3243,34 +2554,24 @@ $ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
 @end example
 
 @noindent
-If you need to do multiple warps, you can combine them in one call to
-Warp. For example to first rotate the image, then scale it, run this
-command:
+If you need to do multiple warps, you can combine them in one call to Warp.
+For example, to first rotate the image and then scale it, run this command:
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
 @end example
 
-If you have multiple warps, do them all in one command. Don't warp them in
-separate commands because the correlated noise will become too strong. As
-you see in the matrix that is printed when you run Warp, it merges all the
-warps into a single warping matrix (see @ref{Merging multiple warpings})
-and simply applies that (mixes the pixel values) just once. However, if you
-run Warp multiple times, the pixels will be mixed multiple times, creating
-a strong artificial blur/smoothing, or stronger correlated noise.
+If you have multiple warps, do them all in one command.
+Don't warp them in separate commands because the correlated noise will become 
too strong.
+As you see in the matrix that is printed when you run Warp, it merges all the 
warps into a single warping matrix (see @ref{Merging multiple warpings}) and 
simply applies that (mixes the pixel values) just once.
+However, if you run Warp multiple times, the pixels will be mixed multiple 
times, creating a strong artificial blur/smoothing, or stronger correlated 
noise.
 
-Recall that the merging of multiple warps is done through matrix
-multiplication, therefore order matters in the separate operations. At a
-lower level, through Warp's @option{--matrix} option, you can directly
-request your desired final warp and don't have to break it up into
-different warps like above (see @ref{Invoking astwarp}).
+Recall that the merging of multiple warps is done through matrix 
multiplication, therefore order matters in the separate operations.
+At a lower level, through Warp's @option{--matrix} option, you can directly 
request your desired final warp and don't have to break it up into different 
warps like above (see @ref{Invoking astwarp}).
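+
+As a small demonstration of this ordering (a sketch re-using the warps from above, with arbitrary output names), the two commands below apply the same rotation and shear in opposite orders, and will generally not produce identical outputs:
+
+@example
+$ astwarp flat-ir/xdf-f160w.fits --rotate=20 --shear=2      \
+          --output=rot-shear.fits
+$ astwarp flat-ir/xdf-f160w.fits --shear=2 --rotate=20      \
+          --output=shear-rot.fits
+@end example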
 
-Fortunately these datasets are already aligned to the same pixel grid, so
-you don't actually need the files that were just generated. You can safely
-delete them all with the following command. Here, you see why we put the
-processed outputs that we need later into a separate directory. In this
-way, the top directory can be used for temporary files for testing that you
-can simply delete with a generic command like below.
+Fortunately these datasets are already aligned to the same pixel grid, so you don't actually need the files that were just generated.
+You can safely delete them all with the following command.
+Here, you see why we put the processed outputs that we need later into a 
separate directory.
+In this way, the top directory can be used for temporary files for testing 
that you can simply delete with a generic command like below.
 
 @example
 $ rm *.fits
@@ -3279,80 +2580,51 @@ $ rm *.fits
 
 @node Multiextension FITS files NoiseChisel's output, NoiseChisel optimization 
for detection, Warping to a new pixel grid, General program usage tutorial
 @subsection Multiextension FITS files (NoiseChisel's output)
-Having completed a review of the basics in the previous sections, we are
-now ready to separate the signal (galaxies or stars) from the background
-noise in the image. We will be using the results of @ref{Dataset inspection
-and cropping}, so be sure you already have them. Gnuastro has NoiseChisel
-for this job. But NoiseChisel's output is a multi-extension FITS file,
-therefore to better understand how to use NoiseChisel, let's take a look at
-multi-extension FITS files and how you can interact with them.
+Having completed a review of the basics in the previous sections, we are now 
ready to separate the signal (galaxies or stars) from the background noise in 
the image.
+We will be using the results of @ref{Dataset inspection and cropping}, so be 
sure you already have them.
+Gnuastro has NoiseChisel for this job.
+But NoiseChisel's output is a multi-extension FITS file; therefore, to better understand how to use NoiseChisel, let's take a look at multi-extension FITS files and how you can interact with them.
 
-In the FITS format, each extension contains a separate dataset (image in
-this case). You can get basic information about the extensions in a FITS
-file with Gnuastro's Fits program (see @ref{Fits}). To start with, let's
-run NoiseChisel without any options, then use Gnuastro's FITS program to
-inspect the number of extensions in this file.
+In the FITS format, each extension contains a separate dataset (image in this 
case).
+You can get basic information about the extensions in a FITS file with 
Gnuastro's Fits program (see @ref{Fits}).
+To start with, let's run NoiseChisel without any options, then use Gnuastro's 
FITS program to inspect the number of extensions in this file.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits
 $ astfits xdf-f160w_detected.fits
 @end example
 
-From the output list, we see that NoiseChisel's output contains 5
-extensions and the first (counting from zero, with name
-@code{NOISECHISEL-CONFIG}) is empty: it has value of @code{0} in the last
-column (which shows its size). The first extension in all the outputs of
-Gnuastro's programs only contains meta-data: data about/describing the
-datasets within (all) the output's extensions. This is recommended by the
-FITS standard, see @ref{Fits} for more. In the case of Gnuastro's programs,
-this generic zero-th/meta-data extension (for the whole file) contains all
-the configuration options of the program that created the file.
-
-The second extension of NoiseChisel's output (numbered 1, named
-@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided. The
-third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary
-image with only two possible values for all pixels: 0 for noise and 1 for
-signal. Since it only has two values, to avoid taking too much space on
-your computer, its numeric datatype an unsigned 8-bit integer (or
-@code{uint8})@footnote{To learn more about numeric data types see
-@ref{Numeric data types}.}. The fourth and fifth (@code{SKY} and
-@code{SKY_STD}) extensions, have the Sky and its standard deviation values
-for the input on a tile grid and were calculated over the undetected
-regions (for more on the importance of the Sky value, see @ref{Sky value}).
-
-Metadata regarding how the analysis was done (or a dataset was created) is
-very important for higher-level analysis and reproducibility. Therefore,
-Let's first take a closer look at the @code{NOISECHISEL-CONFIG}
-extension. If you specify a special header in the FITS file, Gnuastro's
-Fits program will print the header keywords (metadata) of that
-extension. You can either specify the HDU/extension counter (starting from
-0), or name. Therefore, the two commands below are identical for this file:
+From the output list, we see that NoiseChisel's output contains 5 extensions and that the first (counting from zero, with the name @code{NOISECHISEL-CONFIG}) is empty: it has a value of @code{0} in the last column (which shows its size).
+The first extension in all the outputs of Gnuastro's programs only contains 
meta-data: data about/describing the datasets within (all) the output's 
extensions.
+This is recommended by the FITS standard; see @ref{Fits} for more.
+In the case of Gnuastro's programs, this generic zero-th/meta-data extension 
(for the whole file) contains all the configuration options of the program that 
created the file.
+
+The second extension of NoiseChisel's output (numbered 1, named 
@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
+The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary 
image with only two possible values for all pixels: 0 for noise and 1 for 
signal.
+Since it only has two values, to avoid taking too much space on your computer, its numeric datatype is an unsigned 8-bit integer (or @code{uint8})@footnote{To learn more about numeric data types see @ref{Numeric data types}.}.
+The fourth and fifth (@code{SKY} and @code{SKY_STD}) extensions have the Sky and its standard deviation values for the input on a tile grid, and were calculated over the undetected regions (for more on the importance of the Sky value, see @ref{Sky value}).
+
+Metadata regarding how the analysis was done (or a dataset was created) is very important for higher-level analysis and reproducibility.
+Therefore, let's first take a closer look at the @code{NOISECHISEL-CONFIG} extension.
+If you specify a particular header/extension in the FITS file, Gnuastro's Fits program will print the header keywords (metadata) of that extension.
+You can either specify the HDU/extension counter (starting from 0), or its name.
+Therefore, the two commands below are identical for this file:
 
 @example
 $ astfits xdf-f160w_detected.fits -h0
 $ astfits xdf-f160w_detected.fits -hNOISECHISEL-CONFIG
 @end example
 
-The first group of FITS header keywords are standard keywords (containing
-the @code{SIMPLE} and @code{BITPIX} keywords the first empty line). They
-are required by the FITS standard and must be present in any FITS
-extension. The second group contains the input file and all the options
-with their values in that run of NoiseChisel. Finally, the last group
-contains the date and version information of Gnuastro and its
-dependencies. The ``versions and date'' group of keywords are present in
-all Gnuastro's FITS extension outputs, for more see @ref{Output FITS
-files}.
-
-Note that if a keyword name is larger than 8 characters, it is preceded by
-a @code{HIERARCH} keyword and that all keyword names are in capital
-letters. Therefore, if you want to see only one keyword's value by feeding
-the output to Grep, you should ask Grep to ignore case with its @option{-i}
-option (short name for @option{--ignore-case}). For example, below we'll
-check the value to the @option{--snminarea} option, note how we don't need
-Grep's @option{-i} option when it is fed with @command{astnoisechisel -P}
-since it is already in small-caps there. The extra white spaces in the
-first command are only to help in readability, you can ignore them when
-typing.
+The first group of FITS header keywords are standard keywords (containing the @code{SIMPLE} and @code{BITPIX} keywords, up to the first empty line).
+They are required by the FITS standard and must be present in any FITS 
extension.
+The second group contains the input file and all the options with their values 
in that run of NoiseChisel.
+Finally, the last group contains the date and version information of Gnuastro 
and its dependencies.
+The ``versions and date'' group of keywords are present in all Gnuastro's FITS 
extension outputs, for more see @ref{Output FITS files}.
+
+Note that if a keyword name is longer than 8 characters, it is preceded by a @code{HIERARCH} keyword, and that all keyword names are in capital letters.
+Therefore, if you want to see only one keyword's value by feeding the output 
to Grep, you should ask Grep to ignore case with its @option{-i} option (short 
name for @option{--ignore-case}).
+For example, below we'll check the value given to the @option{--snminarea} option; note how we don't need Grep's @option{-i} option when it is fed with @command{astnoisechisel -P}, since the option name is already in lower-case there.
+The extra white spaces in the first command are only to help readability; you can ignore them when typing.
 
 @example
 $ astnoisechisel -P                   | grep    snminarea
@@ -3360,78 +2632,51 @@ $ astfits xdf-f160w_detected.fits -h0 | grep -i 
snminarea
 @end example
 
 @noindent
-The metadata (that is stored in the output) can later be used to exactly
-reproduce/understand your result, even if you have lost/forgot the command
-you used to create the file. This feature is present in all of Gnuastro's
-programs, not just NoiseChisel.
+The metadata (that is stored in the output) can later be used to exactly reproduce/understand your result, even if you have lost/forgotten the command you used to create the file.
+This feature is present in all of Gnuastro's programs, not just NoiseChisel.
 
 @cindex DS9
 @cindex GNOME
 @cindex SAO DS9
-Let's continue with the extensions in NoiseChisel's output that contain a
-dataset by visually inspecting them (here, we'll use SAO DS9). Since the
-file contains multiple related extensions, the easiest way to view all of
-them in DS9 is to open the file as a ``Multi-extension data cube'' with the
-@option{-mecube} option as shown below@footnote{You can configure your
-graphic user interface to open DS9 in multi-extension cube mode by default
-when using the GUI (double clicking on the file). If your graphic user
-interface is GNOME (another GNU software, it is most common in GNU/Linux
-operating systems), a full description is given in @ref{Viewing
-multiextension FITS images}}.
+Let's continue with the extensions in NoiseChisel's output that contain a 
dataset by visually inspecting them (here, we'll use SAO DS9).
+Since the file contains multiple related extensions, the easiest way to view all of them in DS9 is to open the file as a ``Multi-extension data cube'' with the @option{-mecube} option as shown below@footnote{You can configure your graphical user interface to open DS9 in multi-extension cube mode by default when using the GUI (double clicking on the file).
+If your graphical user interface is GNOME (another GNU software, it is most common in GNU/Linux operating systems), a full description is given in @ref{Viewing multiextension FITS images}}.
 
 @example
 $ ds9 -mecube xdf-f160w_detected.fits -zscale -zoom to fit
 @end example
 
-A ``cube'' window opens along with DS9's main window. The buttons and
-horizontal scroll bar in this small new window can be used to navigate
-between the extensions. In this mode, all DS9's settings (for example zoom
-or color-bar) will be identical between the extensions. Try zooming into to
-one part and flipping through the extensions to see how the galaxies were
-detected along with the Sky and Sky standard deviation values for that
-region. Just have in mind that NoiseChisel's job is @emph{only} detection
-(separating signal from noise), We'll do segmentation on this result later
-to find the individual galaxies/peaks over the detected pixels.
+A ``cube'' window opens along with DS9's main window.
+The buttons and horizontal scroll bar in this small new window can be used to 
navigate between the extensions.
+In this mode, all DS9's settings (for example zoom or color-bar) will be 
identical between the extensions.
+Try zooming in to one part and flipping through the extensions to see how the galaxies were detected along with the Sky and Sky standard deviation values for that region.
+Just keep in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise); we'll do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.
 
-Each HDU/extension in a FITS file is an independent dataset (image or
-table) which you can delete from the FITS file, or copy/cut to another
-file. For example, with the command below, you can copy NoiseChisel's
-@code{DETECTIONS} HDU/extension to another file:
+Each HDU/extension in a FITS file is an independent dataset (image or table) 
which you can delete from the FITS file, or copy/cut to another file.
+For example, with the command below, you can copy NoiseChisel's 
@code{DETECTIONS} HDU/extension to another file:
 
 @example
 $ astfits xdf-f160w_detected.fits --copy=DETECTIONS -odetections.fits
 @end example
 
-There are similar options to conveniently cut (@option{--cut}, copy, then
-remove from the input) or delete (@option{--remove}) HDUs from a FITS file
-also. See @ref{HDU manipulation} for more.
+There are also similar options to conveniently cut (@option{--cut}: copy, then remove from the input) or delete (@option{--remove}) HDUs from a FITS file.
+See @ref{HDU manipulation} for more.
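+
+For example, assuming the Sky and its standard deviation are kept in extensions named @code{SKY} and @code{SKY_STD} (check with @command{astfits} as above), commands like these should cut the first into its own file and delete the second (try them on a copy if you want to keep the original intact):
+
+@example
+$ astfits xdf-f160w_detected.fits --cut=SKY -osky.fits
+$ astfits xdf-f160w_detected.fits --remove=SKY_STD
+@end example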
 
 
 
 @node NoiseChisel optimization for detection, NoiseChisel optimization for 
storage, Multiextension FITS files NoiseChisel's output, General program usage 
tutorial
 @subsection NoiseChisel optimization for detection
-In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel
-and reviewed NoiseChisel's output format. Now that you have a better
-feeling for multi-extension FITS files, let's optimize NoiseChisel for this
-particular dataset.
-
-One good way to see if you have missed any signal (small galaxies, or the
-wings of brighter galaxies) is to mask all the detected pixels and inspect
-the noise pixels. For this, you can use Gnuastro's Arithmetic program (in
-particular its @code{where} operator, see @ref{Arithmetic operators}). The
-command below will produce @file{mask-det.fits}. In it, all the pixels in
-the @code{INPUT-NO-SKY} extension that are flagged 1 in the
-@code{DETECTIONS} extension (dominated by signal, not noise) will be set to
-NaN.
-
-Since the various extensions are in the same file, for each dataset we need
-the file and extension name. To make the command easier to
-read/write/understand, let's use shell variables: `@code{in}' will be used
-for the Sky-subtracted input image and `@code{det}' will be used for the
-detection map. Recall that a shell variable's value can be retrieved by
-adding a @code{$} before its name, also note that the double quotations are
-necessary when we have white-space characters in a variable name (like this
-case).
+In @ref{Multiextension FITS files NoiseChisel's output}, we ran NoiseChisel 
and reviewed NoiseChisel's output format.
+Now that you have a better feeling for multi-extension FITS files, let's 
optimize NoiseChisel for this particular dataset.
+
+One good way to see if you have missed any signal (small galaxies, or the 
wings of brighter galaxies) is to mask all the detected pixels and inspect the 
noise pixels.
+For this, you can use Gnuastro's Arithmetic program (in particular its 
@code{where} operator, see @ref{Arithmetic operators}).
+The command below will produce @file{mask-det.fits}.
+In it, all the pixels in the @code{INPUT-NO-SKY} extension that are flagged 1 
in the @code{DETECTIONS} extension (dominated by signal, not noise) will be set 
to NaN.
+
+Since the various extensions are in the same file, for each dataset we need 
the file and extension name.
+To make the command easier to read/write/understand, let's use shell 
variables: `@code{in}' will be used for the Sky-subtracted input image and 
`@code{det}' will be used for the detection map.
+Recall that a shell variable's value can be retrieved by adding a @code{$} before its name.
+Also note that the double quotations are necessary when the variable's value contains white-space characters (as in this case).
 
 @example
 $ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
@@ -3440,187 +2685,130 @@ $ astarithmetic $in $det nan where 
--output=mask-det.fits
 @end example
 
 @noindent
-To invert the result (only keep the detected pixels), you can flip the
-detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after
-the second @code{$det}:
+To invert the result (only keep the detected pixels), you can flip the 
detection map (from 0 to 1 and vice-versa) by adding a `@code{not}' after the 
second @code{$det}:
 
 @example
 $ astarithmetic $in $det not nan where --output=mask-sky.fits
 @end example
 
-Looking again at the detected pixels, we see that there are thin
-connections between many of the smaller objects or extending from larger
-objects. This shows that we have dug in too deep, and that we are following
-correlated noise.
-
-Correlated noise is created when we warp datasets from individual exposures
-(that are each slightly offset compared to each other) into the same pixel
-grid, then add them to form the final result. Because it mixes nearby pixel
-values, correlated noise is a form of convolution and it smooths the
-image. In terms of the number of exposures (and thus correlated noise), the
-XDF dataset is by no means an ordinary dataset. It is the result of warping
-and adding roughly 80 separate exposures which can create strong correlated
-noise/smoothing. In common surveys the number of exposures is usually 10 or
-less.
-
-Let's tweak NoiseChisel's configuration a little to get a better result on
-this dataset. Don't forget that ``@emph{Good statistical analysis is not a
-purely routine matter, and generally calls for more than one pass through
-the computer}'' (Anscombe 1973, see @ref{Science and its tools}). A good
-scientist must have a good understanding of her tools to make a meaningful
-analysis. So don't hesitate in playing with the default configuration and
-reviewing the manual when you have a new dataset in front of you. Robust
-data analysis is an art, therefore a good scientist must first be a good
-artist.
-
-NoiseChisel can produce ``Check images'' to help you visualize and inspect
-how each step is done. You can see all the check images it can produce with
-this command.
+Looking again at the detected pixels, we see that there are thin connections 
between many of the smaller objects or extending from larger objects.
+This shows that we have dug in too deep, and that we are following correlated 
noise.
+
+Correlated noise is created when we warp datasets from individual exposures 
(that are each slightly offset compared to each other) into the same pixel 
grid, then add them to form the final result.
+Because it mixes nearby pixel values, correlated noise is a form of 
convolution and it smooths the image.
+In terms of the number of exposures (and thus correlated noise), the XDF 
dataset is by no means an ordinary dataset.
+It is the result of warping and adding roughly 80 separate exposures, which can create strong correlated noise/smoothing.
+In common surveys, the number of exposures is usually 10 or less.
+
+Let's tweak NoiseChisel's configuration a little to get a better result on 
this dataset.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
+So don't hesitate to play with the default configuration and review the manual when you have a new dataset in front of you.
+Robust data analysis is an art; therefore, a good scientist must first be a good artist.
+
+NoiseChisel can produce ``Check images'' to help you visualize and inspect how 
each step is done.
+You can see all the check images it can produce with this command:
 
 @example
 $ astnoisechisel --help | grep check
 @end example
 
-Let's check the overall detection process to get a better feeling of what
-NoiseChisel is doing with the following command. To learn the details of
-NoiseChisel in more detail, please see
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}. Also
-see @ref{NoiseChisel changes after publication}.
+Let's check the overall detection process to get a better feeling of what 
NoiseChisel is doing with the following command.
+For the details of how NoiseChisel works, please see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+Also see @ref{NoiseChisel changes after publication}.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
 @end example
 
-The check images/tables are also multi-extension FITS files.  As you saw
-from the command above, when check datasets are requested, NoiseChisel
-won't go to the end. It will abort as soon as all the extensions of the
-check image are ready. Please list the extensions of the output with
-@command{astfits} and then opening it with @command{ds9} as we done
-above. If you have read the paper, you will see why there are so many
-extensions in the check image.
+The check images/tables are also multi-extension FITS files.
+As you saw from the command above, when check datasets are requested, NoiseChisel won't run to completion.
+It will abort as soon as all the extensions of the check image are ready.
+Please list the extensions of the output with @command{astfits} and then open it with @command{ds9}, as we did above.
+If you have read the paper, you will see why there are so many extensions in 
the check image.
 
 @example
 $ astfits xdf-f160w_detcheck.fits
 $ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit
 @end example
 
-In order to understand the parameters and their biases (especially as you
-are starting to use Gnuastro, or running it a new dataset), it is
-@emph{strongly} encouraged to play with the different parameters and use
-the respective check images to see which step is affected by your changes
-and how, for example see @ref{Detecting large extended targets}.
+In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), it is @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how; for example, see @ref{Detecting large extended targets}.
 
 @cindex FWHM
-The @code{OPENED_AND_LABELED} extension shows the initial detection step of
-NoiseChisel. We see these thin connections between smaller points are
-already present here (a relatively early stage in the processing). Such
-connections at the lowest surface brightness limits usually occur when the
-dataset is too smoothed. Because of correlated noise, the dataset is
-already artificially smoothed, therefore further smoothing it with the
-default kernel may be the problem. One solution is thus to use a sharper
-kernel (NoiseChisel's first step in its processing).
-
-By default NoiseChisel uses a Gaussian with full-width-half-maximum (FWHM)
-of 2 pixels. We can use Gnuastro's MakeProfiles to build a kernel with FWHM
-of 1.5 pixel (truncated at 5 times the FWHM, like the default) using the
-following command. MakeProfiles is a powerful tool to build any number of
-mock profiles on one image or independently, to learn more of its features
-and capabilities, see @ref{MakeProfiles}.
+The @code{OPENED_AND_LABELED} extension shows the initial detection step of 
NoiseChisel.
+We see that these thin connections between smaller points are already present here (a relatively early stage in the processing).
+Such connections at the lowest surface brightness limits usually occur when the dataset has been smoothed too much.
+Because of correlated noise, the dataset is already artificially smoothed, so further smoothing it with the default kernel may be the problem.
+One solution is thus to use a sharper kernel (NoiseChisel's first step in its 
processing).
+
+By default, NoiseChisel uses a Gaussian with a full width at half maximum (FWHM) of 2 pixels.
+We can use Gnuastro's MakeProfiles to build a kernel with a FWHM of 1.5 pixels (truncated at 5 times the FWHM, like the default) using the following command.
+MakeProfiles is a powerful tool for building any number of mock profiles, on one image or independently; to learn more about its features and capabilities, see @ref{MakeProfiles}.
 
 @example
 $ astmkprof --kernel=gaussian,1.5,5 --oversample=1
 @end example
 
 @noindent
-Please open the output @file{kernel.fits} and have a look (it is very small
-and sharp). We can now tell NoiseChisel to use this instead of the default
-kernel with the following command (we'll keep checking the detection steps)
+Please open the output @file{kernel.fits} and have a look (it is very small 
and sharp).
+We can now tell NoiseChisel to use this instead of the default kernel with the following command (we'll keep checking the detection steps):
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                  --checkdetection
 @end example
 
-Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin
-connections between smaller peaks has now significantly decreased. Going
-two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see
-that during the process of finding false pseudo-detections, too many holes
-have been filled: do you see how the many of the brighter galaxies are
-connected? At this stage all holes are filled, irrespective of their size.
+Looking at the @code{OPENED_AND_LABELED} extension, we see that the thin connections between smaller peaks have now significantly decreased.
+Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can see that during the process of finding false pseudo-detections, too many holes have been filled: do you see how many of the brighter galaxies are connected?
+At this stage, all holes are filled irrespective of their size.
 
-Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you
-can see that there aren't too many pseudo-detections because of all those
-extended filled holes. If you look closely, you can see the number of
-pseudo-detections in the result NoiseChisel prints (around 5000). This is
-another side-effect of correlated noise. To address it, we should slightly
-increase the pseudo-detection threshold (before changing
-@option{--dthresh}, run with @option{-P} to see the default value):
+Looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you can see that there aren't too many pseudo-detections because of all those extended filled holes.
+If you look closely, you can see the number of pseudo-detections that NoiseChisel prints in its output (around 5000).
+This is another side-effect of correlated noise.
+To address it, we should slightly increase the pseudo-detection threshold with @option{--dthresh}.
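+Before changing it, let's see its default value; since @option{-P} prints the final value of every option, a command like this should show it:
+
+@example
+$ astnoisechisel -P | grep dthresh
+@end example
+
+We can now call NoiseChisel with a slightly higher threshold: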
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits \
                  --dthresh=0.1 --checkdetection
 @end example
 
-Before visually inspecting the check image, you can already see the effect
-of this change in NoiseChisel's command-line output: notice how the number
-of pseudos has increased to more than 6000. Open the check image now and
-have a look, you can see how the pseudo-detections are distributed much
-more evenly in the image.
+Before visually inspecting the check image, you can already see the effect of 
this change in NoiseChisel's command-line output: notice how the number of 
pseudos has increased to more than 6000.
+Open the check image now and have a look; you can see how the pseudo-detections are distributed much more evenly in the image.
 
 @cartouche
 @noindent
-@strong{Maximize the number of pseudo-detections:} For a new noise-pattern
-(different instrument), play with @code{--dthresh} until you get a maximal
-number of pseudo-detections (the total number of pseudo-detections is
-printed on the command-line when you run NoiseChisel).
+@strong{Maximize the number of pseudo-detections:} For a new noise-pattern (different instrument), play with @option{--dthresh} until you get a maximal number of pseudo-detections (the total number of pseudo-detections is printed on the command-line when you run NoiseChisel).
 @end cartouche
 
-The signal-to-noise ratio of pseudo-detections define NoiseChisel's
-reference for removing false detections, so they are very important to get
-right. Let's have a look at their signal-to-noise distribution with
-@option{--checksn}.
+The signal-to-noise ratio of pseudo-detections defines NoiseChisel's reference for removing false detections, so it is very important to get right.
+Let's have a look at their signal-to-noise distribution with @option{--checksn}.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
                  --dthresh=0.1 --checkdetection --checksn
 @end example
 
-The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the
-pseudo-detections over the undetected (sky) regions and those over
-detections. The first column is the pseudo-detection label which you can
-see in the respective@footnote{The first @code{PSEUDOS-FOR-SN} in
-@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the
-undetected regions and the second is for those over detected regions.}
-@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}. You can
-see the table columns with the first command below and get a feeling for
-its distribution with the second command (the two Table and Statistics
-programs will be discussed later in the tutorial)
+The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the 
pseudo-detections over the undetected (sky) regions and those over detections.
+The first column is the pseudo-detection label which you can see in the 
respective@footnote{The first @code{PSEUDOS-FOR-SN} in 
@file{xdf-f160w_detsn.fits} is for the pseudo-detections over the undetected 
regions and the second is for those over detected regions.} 
@code{PSEUDOS-FOR-SN} extension of @file{xdf-f160w_detcheck.fits}.
+You can see the table columns with the first command below and get a feeling for its distribution with the second (the Table and Statistics programs will be discussed later in this tutorial):
 
 @example
 $ asttable xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN
 $ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2
 @end example
 
-The correlated noise is again visible in this pseudo-detection
-signal-to-noise distribution: it is highly skewed. A small change in the
-quantile will translate into a big change in the S/N value. For example see
-the difference between the three 0.99, 0.95 and 0.90 quantiles with this
-command:
+The correlated noise is again visible in this pseudo-detection signal-to-noise 
distribution: it is highly skewed.
+A small change in the quantile will translate into a big change in the S/N 
value.
+For example, see the difference between the 0.99, 0.95 and 0.90 quantiles with this command:
 
 @example
 $ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2      \
                 --quantile=0.99 --quantile=0.95 --quantile=0.90
 @end example
 
-If you run NoiseChisel with @option{-P}, you'll see the default
-signal-to-noise quantile @option{--snquant} is 0.99. In effect with this
-option you specify the purity level you want (contamination by false
-detections). With the @command{aststatistics} command above, you see that a
-small number of extra false detections (impurity) in the final result
-causes a big change in completeness (you can detect more lower
-signal-to-noise true detections). So let's loosen-up our desired purity
-level, remove the check-image options, and then mask the detected pixels
-like before to see if we have missed anything.
+If you run NoiseChisel with @option{-P}, you'll see that the default signal-to-noise quantile @option{--snquant} is 0.99.
+In effect, with this option you specify the purity level you want (the level of contamination by false detections).
+With the @command{aststatistics} command above, you see that a small number of extra false detections (impurity) in the final result causes a big change in completeness (you can detect more true detections with lower signal-to-noise).
+So let's loosen up our desired purity level, remove the check-image options, and then mask the detected pixels like before to see if we have missed anything.
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
@@ -3630,34 +2818,26 @@ $ det="xdf-f160w_detected.fits -hDETECTIONS"
 $ astarithmetic $in $det nan where --output=mask-det.fits
 @end example
 
-Overall it seems good, but if you play a little with the color-bar and look
-closer in the noise, you'll see a few very sharp, but faint, objects that
-have not been detected. This only happens for under-sampled datasets like
-HST (where the pixel size is larger than the point spread function
-FWHM). So this won't happen on ground-based images. Because of this, sharp
-and faint objects will be very small and eroded too easily during
-NoiseChisel's erosion step.
+Overall it seems good, but if you play a little with the color-bar and look closer at the noise, you'll see a few very sharp, but faint, objects that have not been detected.
+This only happens for under-sampled datasets like those of HST (where the pixel size is larger than the FWHM of the point spread function), so it won't happen on ground-based images.
+In such data, sharp and faint objects cover only a few pixels, and are therefore eroded too easily during NoiseChisel's erosion step.
 
-To address this problem of sharp objects, we can use NoiseChisel's
-@option{--noerodequant} option. All pixels above this quantile will not be
-eroded, thus allowing us to preserve faint and sharp objects. Check its
-default value, then run NoiseChisel like below and make the mask again. You
-will see many of those sharp objects are now detected.
+To address this problem of sharp objects, we can use NoiseChisel's 
@option{--noerodequant} option.
+Pixels above this quantile will not be eroded, thus allowing us to preserve faint and sharp objects.
+Check its default value, then run NoiseChisel as below and make the mask again.
+You will see that many of those sharp objects are now detected.
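+As before, a quick way to inspect the default value is a command like this:
+
+@example
+$ astnoisechisel -P | grep noerodequant
+@end example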
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits     \
                  --noerodequant=0.95 --dthresh=0.1 --snquant=0.95
 @end example
 
-This seems to be fine and we can continue with our analysis. To avoid
-having to write these options on every call to NoiseChisel, we'll just make
-a configuration file in a visible @file{config} directory. Then we'll
-define the hidden @file{.gnuastro} directory (that all Gnuastro's programs
-will look into for configuration files) as a symbolic link to the
-@file{config} directory. Finally, we'll write the finalized values of the
-options into NoiseChisel's standard configuration file within that
-directory. We'll also put the kernel in a separate directory to keep the
-top directory clean of any files we later need.
+This seems to be fine and we can continue with our analysis.
+To avoid having to write these options on every call to NoiseChisel, we'll 
just make a configuration file in a visible @file{config} directory.
+Then we'll define the hidden @file{.gnuastro} directory (that all Gnuastro's 
programs will look into for configuration files) as a symbolic link to the 
@file{config} directory.
+Finally, we'll write the finalized values of the options into NoiseChisel's 
standard configuration file within that directory.
+We'll also put the kernel in a separate directory to keep the top directory clean of any files we may need later.
 
 @example
 $ mkdir kernel config
@@ -3683,93 +2863,68 @@ $ astnoisechisel flat-ir/xdf-f105w.fits 
--output=nc/xdf-f105w.fits
 @node NoiseChisel optimization for storage, Segmentation and making a catalog, 
NoiseChisel optimization for detection, General program usage tutorial
 @subsection NoiseChisel optimization for storage
 
-As we showed before (in @ref{Multiextension FITS files NoiseChisel's
-output}), NoiseChisel's output is a multi-extension FITS file with several
-images the same size as the input. As the input datasets get larger this
-output can become hard to manage and waste a lot of storage
-space. Fortunately there is a solution to this problem (which is also
-useful for Segment's outputs). But first, let's have a look at the volume
-of NoiseChisel's output from @ref{NoiseChisel optimization for detection}
-(fast answer, its larger than 100 mega-bytes):
+As we showed before (in @ref{Multiextension FITS files NoiseChisel's output}), NoiseChisel's output is a multi-extension FITS file with several images of the same size as the input.
+As the input datasets get larger, this output can become hard to manage and waste a lot of storage space.
+Fortunately there is a solution to this problem (which is also useful for 
Segment's outputs).
+But first, let's have a look at the volume of NoiseChisel's output from @ref{NoiseChisel optimization for detection} (short answer: it's larger than 100 megabytes):
 
 @example
 $ ls -lh nc/xdf-f160w.fits
 @end example
 
-Two options can drastically decrease NoiseChisel's output file size: 1)
-With the @option{--rawoutput} option, NoiseChisel won't create a
-Sky-subtracted input. After all, it is redundant: you can always generate
-it by subtracting the Sky from the input image (which you have in your
-database) using the Arithmetic program. 2) With the
-@option{--oneelempertile}, you can tell NoiseChisel to store its Sky and
-Sky standard deviation results with one pixel per tile (instead of many
-pixels per tile).
+Two options can drastically decrease NoiseChisel's output file size: 1) With 
the @option{--rawoutput} option, NoiseChisel won't create a Sky-subtracted 
input.
+After all, it is redundant: you can always generate it by subtracting the Sky 
from the input image (which you have in your database) using the Arithmetic 
program.
+2) With @option{--oneelempertile}, you can tell NoiseChisel to store its Sky and Sky standard deviation results with one pixel per tile (instead of many pixels per tile).
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput
 @end example
 
 @noindent
-The output is now just under 8 mega byes! But you can even be more
-efficient in space by compressing it. Try the command below to see how
-NoiseChisel's output has now shrunk to about 250 kilo-byes while keeping
-all the necessary information as the original 100 mega-byte output.
+The output is now just under 8 megabytes!
+But you can be even more efficient in storage by compressing it.
+Try the command below to see how NoiseChisel's output has now shrunk to about 250 kilobytes, while keeping all the necessary information of the original 100-megabyte output.
 
 @example
 $ gzip --best xdf-f160w_detected.fits
 $ ls -lh xdf-f160w_detected.fits.gz
 @end example
 
-We can get this wonderful level of compression because NoiseChisel's output
-is binary with only two values: 0 and 1. Compression algorithms are highly
-optimized in such scenarios.
+We can get this wonderful level of compression because NoiseChisel's output is 
binary with only two values: 0 and 1.
+Compression algorithms are highly optimized in such scenarios.
 
-You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed
-it to any of Gnuastro's programs without having to uncompress
-it. Higher-level programs that take NoiseChisel's output can also deal with
-this compressed image where the Sky and its Standard deviation are one
-pixel-per-tile.
+You can open @file{xdf-f160w_detected.fits.gz} directly in SAO DS9 or feed it 
to any of Gnuastro's programs without having to uncompress it.
+Higher-level programs that take NoiseChisel's output can also deal with this compressed image, where the Sky and its standard deviation are one pixel per tile.
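+
+For example, assuming the compressed file from above, both of these should work on it directly:
+
+@example
+$ astfits xdf-f160w_detected.fits.gz
+$ ds9 -mecube xdf-f160w_detected.fits.gz -zscale
+@end example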
 
 
 
 @node Segmentation and making a catalog, Working with catalogs estimating 
colors, NoiseChisel optimization for storage, General program usage tutorial
 @subsection Segmentation and making a catalog
-The main output of NoiseChisel is the binary detection map
-(@code{DETECTIONS} extension, see @ref{NoiseChisel optimization for
-detection}). which only has two values of 1 or 0. This is useful when
-studying the noise, but hardly of any use when you actually want to study
-the targets/galaxies in the image, especially in such a deep field where
-the detection map of almost everything is connected. To find the galaxies
-over the detections, we'll use Gnuastro's @ref{Segment} program:
+The main output of NoiseChisel is the binary detection map (@code{DETECTIONS} extension, see @ref{NoiseChisel optimization for detection}), which only has two values: 1 or 0.
+This is useful when studying the noise, but hardly of any use when you 
actually want to study the targets/galaxies in the image, especially in such a 
deep field where the detection map of almost everything is connected.
+To find the galaxies over the detections, we'll use Gnuastro's @ref{Segment} 
program:
 
 @example
 $ mkdir seg
 $ astsegment nc/xdf-f160w.fits -oseg/xdf-f160w.fits
 @end example
 
-Segment's operation is very much like NoiseChisel (in fact, prior to
-version 0.6, it was part of NoiseChisel). For example the output is a
-multi-extension FITS file, it has check images and uses the undetected
-regions as a reference. Please have a look at Segment's multi-extension
-output with @command{ds9} to get a good feeling of what it has done.
+Segment's operation is very much like NoiseChisel's (in fact, prior to version 0.6, it was part of NoiseChisel).
+For example, the output is a multi-extension FITS file, it has check images, and it uses the undetected regions as a reference.
+Please have a look at Segment's multi-extension output with @command{ds9} to 
get a good feeling of what it has done.
 
 @example
 $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit
 @end example
 
-Like NoiseChisel, the first extension is the input. The @code{CLUMPS}
-extension shows the true ``clumps'' with values that are @mymath{\ge1}, and
-the diffuse regions labeled as @mymath{-1}. In the @code{OBJECTS}
-extension, we see that the large detections of NoiseChisel (that may have
-contained many galaxies) are now broken up into separate labels. see
-@ref{Segment} for more.
+Like NoiseChisel's output, the first extension is the input.
+The @code{CLUMPS} extension shows the true ``clumps'' with values that are 
@mymath{\ge1}, and the diffuse regions labeled as @mymath{-1}.
+In the @code{OBJECTS} extension, we see that the large detections of 
NoiseChisel (that may have contained many galaxies) are now broken up into 
separate labels.
+See @ref{Segment} for more.
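+Since object labels in the @code{OBJECTS} extension are sequential integers starting from 1 (with 0 for undetected pixels), the maximum value should tell you how many objects were found; for example, with Statistics' @option{--maximum} option:
+
+@example
+$ aststatistics seg/xdf-f160w.fits -hOBJECTS --maximum
+@end example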
 
-Having localized the regions of interest in the dataset, we are ready to do
-measurements on them with @ref{MakeCatalog}. Besides the IDs, we want to
-measure (in this order) the Right Ascension (with @option{--ra}),
-Declination (@option{--dec}), magnitude (@option{--magnitude}), and
-signal-to-noise ratio (@option{--sn}) of the objects and clumps. The
-following command will make these measurements on Segment's F160W output:
+Having localized the regions of interest in the dataset, we are ready to do 
measurements on them with @ref{MakeCatalog}.
+Besides the IDs, we want to measure (in this order) the Right Ascension (with 
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), 
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
+The following command will make these measurements on Segment's F160W output:
 
 @c Keep the `--zeropoint' on a single line, because later, we'll add
 @c `--valuesfile' in that line also, and it would be more clear if both
@@ -3782,30 +2937,18 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec 
--magnitude --sn \
 @end example
 
 @noindent
-From the printed statements on the command-line, you see that MakeCatalog
-read all the extensions in Segment's output for the various measurements it
-needed.
+From the printed statements on the command-line, you see that MakeCatalog read 
all the extensions in Segment's output for the various measurements it needed.
 
-To calculate colors, we also need magnitude measurements on the F105W
-filter. However, the galaxy properties might differ between the filters
-(which is the whole purpose behind measuring colors). Also, the noise
-properties and depth of the datasets differ. Therefore, if we simply follow
-the same Segment and MakeCatalog calls above for the F105W filter, we are
-going to get a different number of objects and clumps. Matching the two
-catalogs is possible (for example with @ref{Match}), but the fact that the
-measurements will be done on different pixels, can bias the result. Since
-the Point spread function (PSF) of both images is very similar, an accurate
-color calculation can only be done when magnitudes are measured from the
-same pixels on both images.
+To calculate colors, we also need magnitude measurements on the F105W filter.
+However, the galaxy properties might differ between the filters (which is the 
whole purpose behind measuring colors).
+Also, the noise properties and depth of the datasets differ.
+Therefore, if we simply follow the same Segment and MakeCatalog calls above 
for the F105W filter, we are going to get a different number of objects and 
clumps.
+Matching the two catalogs is possible (for example with @ref{Match}), but the fact that the measurements will be done on different pixels can bias the result.
+Since the point spread function (PSF) of both images is very similar, an accurate color calculation can only be done when magnitudes are measured from the same pixels on both images.
 
-The F160W image is deeper, thus providing better detection/segmentation,
-and redder, thus observing smaller/older stars and representing more of the
-mass in the galaxies. To generate the F105W catalog, we will thus use the
-pixel labels generated on the F160W filter, but do the measurements on the
-F105W filter (using MakeCatalog's @option{--valuesfile} option). Notice how
-the only difference between this call to MakeCatalog and the previous one
-is @option{--valuesfile}, the value given to @code{--zeropoint} and the
-output name.
+The F160W image is deeper, thus providing better detection/segmentation, and 
redder, thus observing smaller/older stars and representing more of the mass in 
the galaxies.
+To generate the F105W catalog, we will thus use the pixel labels generated on 
the F160W filter, but do the measurements on the F105W filter (using 
MakeCatalog's @option{--valuesfile} option).
+Notice how the only differences between this call to MakeCatalog and the previous one are @option{--valuesfile}, the value given to @option{--zeropoint}, and the output name.
 
 @example
 $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
@@ -3813,22 +2956,15 @@ $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec 
--magnitude --sn \
                --clumpscat --output=cat/xdf-f105w.fits
 @end example
 
-Look into what MakeCatalog printed on the command-line. You can see that
-(as requested) the object and clump labels were taken from the respective
-extensions in @file{seg/xdf-f160w.fits}, while the values and Sky standard
-deviation were done on @file{nc/xdf-f105w.fits}.
+Look into what MakeCatalog printed on the command-line.
+You can see that (as requested) the object and clump labels were taken from the respective extensions in @file{seg/xdf-f160w.fits}, while the values and Sky standard deviation were taken from @file{nc/xdf-f105w.fits}.
 
-Since we used the same labeled image on both filters, the number of rows in
-both catalogs are the same. The clumps are not affected by the
-hard-to-deblend and low signal-to-noise diffuse regions, they are more
-robust for calculating the colors (compared to objects). Therefore from
-this step onward, we'll continue with clumps.
+Since we used the same labeled image on both filters, the number of rows in both catalogs is the same.
+The clumps are not affected by the hard-to-deblend and low signal-to-noise diffuse regions, so they are more robust for calculating the colors (compared to objects).
+Therefore, from this step onward, we'll continue with clumps.
 
-Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in
-the FITS headers, or lines starting with @code{#} in plain text) contain
-some important information about the input datasets and other useful info
-(for example pixel area or per-pixel surface brightness limit). You can see
-them with this command:
+Finally, the comments in MakeCatalog's output (@code{COMMENT} keywords in the FITS headers, or lines starting with @code{#} in plain text) contain some important information about the input datasets and other useful details (for example, the pixel area or per-pixel surface brightness limit).
+You can see them with this command:
 
 @example
 $ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
@@ -3837,23 +2973,18 @@ $ astfits cat/xdf-f160w.fits -h1 | grep COMMENT
 
 @node Working with catalogs estimating colors, Aperture photometry, 
Segmentation and making a catalog, General program usage tutorial
 @subsection Working with catalogs (estimating colors)
-The output of the MakeCatalog command above is a FITS table (see
-@ref{Segmentation and making a catalog}). The two clump and object catalogs
-are available in the two extensions of the single FITS
-file@footnote{MakeCatalog can also output plain text tables. However, in
-the plain text format you can only have one table per file. Therefore, if
-you also request measurements on clumps, two plain text tables will be
-created (suffixed with @file{_o.txt} and @file{_c.txt}).}. Let's see the
-extensions and their basic properties with the Fits program:
+The output of the MakeCatalog command above is a FITS table (see 
@ref{Segmentation and making a catalog}).
+The two clump and object catalogs are available in the two extensions of the 
single FITS file@footnote{MakeCatalog can also output plain text tables.
+However, in the plain text format you can only have one table per file.
+Therefore, if you also request measurements on clumps, two plain text tables 
will be created (suffixed with @file{_o.txt} and @file{_c.txt}).}.
+Let's see the extensions and their basic properties with the Fits program:
 
 @example
 $ astfits  cat/xdf-f160w.fits              # Extension information
 @end example
 
-Now, let's inspect the table in each extension with Gnuastro's Table
-program (see @ref{Table}). Note that we could have used @option{-hOBJECTS}
-and @option{-hCLUMPS} instead of @option{-h1} and @option{-h2}
-respectively.
+Now, let's inspect the table in each extension with Gnuastro's Table program 
(see @ref{Table}).
+Note that we could have used @option{-hOBJECTS} and @option{-hCLUMPS} instead 
of @option{-h1} and @option{-h2} respectively.
 
 @example
 $ asttable cat/xdf-f160w.fits -h1 --info   # Objects catalog info.
@@ -3862,16 +2993,11 @@ $ asttable cat/xdf-f160w.fits -h2 -i       # Clumps 
catalog info.
 $ asttable cat/xdf-f160w.fits -h2          # Clumps catalog columns.
 @end example
 
-As you see above, when given a specific table (file name and extension),
-Table will print the full contents of all the columns. To see the basic
-metadata about each column (for example name, units and comments), simply
-append a @option{--info} (or @option{-i}) to the command.
+As you see above, when given a specific table (file name and extension), Table 
will print the full contents of all the columns.
+To see the basic metadata about each column (for example name, units and 
comments), simply append a @option{--info} (or @option{-i}) to the command.
 
-To print the contents of special column(s), just specify the column
-number(s) (counting from @code{1}) or the column name(s) (if they have
-one). For example, if you just want the magnitude and signal-to-noise ratio
-of the clumps (in @option{-h2}), you can get it with any of the following
-commands
+To print the contents of specific column(s), just specify the column number(s) (counting from @code{1}) or the column name(s) (if they have one).
+For example, if you just want the magnitude and signal-to-noise ratio of the clumps (in @option{-h2}), you can get them with any of the following commands:
 
 @example
 $ asttable cat/xdf-f160w.fits -h2 -c5,6
@@ -3880,41 +3006,25 @@ $ asttable cat/xdf-f160w.fits -h2 -c5         -c6
 $ asttable cat/xdf-f160w.fits -h2 -cMAGNITUDE -cSN
 @end example
 
-Using column names instead of numbers has many advantages: 1) you don't
-have to worry about the order of columns in the table. 2) It acts as a
-documentation in the script. Column meta-data (including a name) aren't
-just limited to FITS tables and can also be used in plain text tables, see
-@ref{Gnuastro text table format}.
-
-We can finally calculate the colors of the objects from these two
-datasets. If you inspect the contents of the two catalogs, you'll notice
-that because they were both derived from the same segmentation maps, the
-rows are ordered identically (they correspond to the same object/clump in
-both filters). But to be generic (usable even when the rows aren't ordered
-similarly) and display another useful program in Gnuastro, we'll use
-@ref{Match}.
-
-As the name suggests, Gnuastro's Match program will match rows based on
-distance (or aperture in 2D) in one (or two) columns. In the command below,
-the options relating to each catalog are placed under it for easy
-understanding. You give Match two catalogs (from the two different filters
-we derived above) as argument, and the HDUs containing them (if they are
-FITS files) with the @option{--hdu} and @option{--hdu2} options. The
-@option{--ccol1} and @option{--ccol2} options specify the
-coordinate-columns which should be matched with which in the two
-catalogs. With @option{--aperture} you specify the acceptable error (radius
-in 2D), in the same units as the columns (see below for why we have
-requested an aperture of 0.35 arcseconds, or less than 6 HST pixels).
-
-The @option{--outcols} of Match is a very convenient feature in Match: you
-can use it to specify which columns from the two catalogs you want in the
-output (merge two input catalogs into one). If the first character is an
-`@key{a}', the respective matched column (number or name, similar to Table
-above) in the first catalog will be written in the output table. When the
-first character is a `@key{b}', the respective column from the second
-catalog will be written in the output. Also, if the first character is
-followed by @code{_all}, then all the columns from the respective catalog
-will be put in the output.
+Using column names instead of numbers has many advantages:
+1) you don't have to worry about the order of columns in the table.
+2) It acts as documentation in the script.
+Column meta-data (including a name) aren't just limited to FITS tables and can 
also be used in plain text tables, see @ref{Gnuastro text table format}.
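+As a rough sketch (see @ref{Gnuastro text table format} for the exact syntax), such metadata are comment lines at the start of a plain text table, something like this:
+
+@example
+# Column 1: RA        [deg, f64] Right ascension
+# Column 2: MAGNITUDE [mag, f32] Magnitude in this filter
+@end example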
+
+We can finally calculate the colors of the objects from these two datasets.
+If you inspect the contents of the two catalogs, you'll notice that because 
they were both derived from the same segmentation maps, the rows are ordered 
identically (they correspond to the same object/clump in both filters).
+But to be generic (usable even when the rows aren't ordered similarly) and 
display another useful program in Gnuastro, we'll use @ref{Match}.
+
+As the name suggests, Gnuastro's Match program will match rows based on 
distance (or aperture in 2D) in one (or two) columns.
+In the command below, the options relating to each catalog are placed under it 
for easy understanding.
+You give Match two catalogs (from the two different filters we derived above) as arguments, and the HDUs containing them (if they are FITS files) with the @option{--hdu} and @option{--hdu2} options.
+The @option{--ccol1} and @option{--ccol2} options specify the coordinate columns that should be matched between the two catalogs.
+With @option{--aperture} you specify the acceptable error (radius in 2D), in 
the same units as the columns (see below for why we have requested an aperture 
of 0.35 arcseconds, or less than 6 HST pixels).
+
+The @option{--outcols} option is a very convenient feature of Match: you can use it to specify which columns from the two catalogs you want in the output (merging the two input catalogs into one).
+If the first character is an `@key{a}', the respective matched column (number 
or name, similar to Table above) in the first catalog will be written in the 
output table.
+When the first character is a `@key{b}', the respective column from the second 
catalog will be written in the output.
+Also, if the first character is followed by @code{_all}, then all the columns 
from the respective catalog will be put in the output.
 
 @example
 $ astmatch cat/xdf-f160w.fits           cat/xdf-f105w.fits         \
@@ -3931,15 +3041,12 @@ Let's have a look at the columns in the matched catalog:
 $ asttable cat/xdf-f160w-f105w.fits -i
 @end example
 
-Indeed, its exactly the columns we wanted: there are two @code{MAGNITUDE}
-and @code{SN} columns. The first is from the F160W filter, the second is
-from the F105W. Right now, you know this. But in one hour, you'll start
-doubting your self: going through your command history, trying to answer
-this question: ``which magnitude corresponds to which filter?''. You should
-never torture your future-self (or colleagues) like this! So, let's rename
-these confusing columns in the matched catalog. The FITS standard for
-tables stores the column names in the @code{TTYPE} header keywords, so
-let's have a look:
+Indeed, it's exactly the columns we wanted: there are two @code{MAGNITUDE} and two @code{SN} columns.
+The first of each pair is from the F160W filter, the second is from the F105W.
+Right now, you know this.
+But in one hour, you'll start doubting yourself: going through your command history, trying to answer this question: ``which magnitude corresponds to which filter?''.
+You should never torture your future self (or colleagues) like this!
+So let's rename these confusing columns in the matched catalog.
+The FITS standard for tables stores the column names in the @code{TTYPE} header keywords, so let's have a look:
 
 @example
 $ astfits cat/xdf-f160w-f105w.fits -h1 | grep TTYPE
@@ -3956,14 +3063,11 @@ $ astfits cat/xdf-f160w-f105w.fits -h1                  
        \
 $ asttable cat/xdf-f160w-f105w.fits -i
 @end example
 
-If you noticed, when running Match, we also asked for a log file
-(@option{--log}). Many Gnuastro programs have this option to provide some
-detailed information on their operation in case you are curious or want to
-debug something. Here, we are using it to justify the value we gave to
-@option{--aperture}. Even though you asked for the output to be written in
-the @file{cat} directory, a listing of the contents of your current
-directory will show you an extra @file{astmatch.fits} file. Let's have a
-look at what columns it contains.
+As you may have noticed, when running Match we also asked for a log file (@option{--log}).
+Many Gnuastro programs have this option to provide some detailed information 
on their operation in case you are curious or want to debug something.
+Here, we are using it to justify the value we gave to @option{--aperture}.
+Even though you asked for the output to be written in the @file{cat} 
directory, a listing of the contents of your current directory will show you an 
extra @file{astmatch.fits} file.
+Let's have a look at what columns it contains.
 
 @example
 $ ls
@@ -3973,53 +3077,34 @@ $ asttable astmatch.fits -i
 @cindex Flux-weighted
 @cindex SED, Spectral Energy Distribution
 @cindex Spectral Energy Distribution, SED
-The @file{MATCH_DIST} column contains the distance of the matched rows,
-let's have a look at the distribution of values in this column. You might
-be asking yourself ``why should the positions of the two filters differ
-when I gave MakeCatalog the same segmentation map?'' The reason is that the
-central positions are @emph{flux-weighted}. Therefore the
-@option{--valuesfile} dataset you give to MakeCatalog will also affect the
-center measurements@footnote{To only measure the center based on the
-labeled pixels (and ignore the pixel values), you can ask for the columns
-that contain @option{geo} (for geometric) in them. For example
-@option{--geow1} or @option{--geow2} for the RA and Declination (first and
-second world-coordinates).}. Recall that the Spectral Energy Distribution
-(SED) of galaxies is not flat and they have substructure, therefore, they
-can have different shapes/morphologies in different filters.
-
-Gnuastro has a simple program for basic statistical analysis. The command
-below will print some basic information about the distribution (minimum,
-maximum, median, etc), along with a cute little ASCII histogram to
-visually help you understand the distribution on the command-line without
-the need for a graphic user interface. This ASCII histogram can be useful
-when you just want some coarse and general information on the input
-dataset. It is also useful when working on a server (where you may not have
-graphic user interface), and finally, its fast.
+The @file{MATCH_DIST} column contains the distance of the matched rows; let's have a look at the distribution of values in this column.
+You might be asking yourself: ``why should the positions of the two filters differ when I gave MakeCatalog the same segmentation map?''
+The reason is that the central positions are @emph{flux-weighted}.
+Therefore the @option{--valuesfile} dataset you give to MakeCatalog will also 
affect the center measurements@footnote{To only measure the center based on the 
labeled pixels (and ignore the pixel values), you can ask for the columns that 
contain @option{geo} (for geometric) in them.
+For example @option{--geow1} or @option{--geow2} for the RA and Declination 
(first and second world-coordinates).}.
+Recall that the Spectral Energy Distribution (SED) of galaxies is not flat and that they have substructure; therefore, they can have different shapes/morphologies in different filters.
+
+Gnuastro has a simple program for basic statistical analysis.
+The command below will print some basic information about the distribution (minimum, maximum, median, etc.), along with a cute little ASCII histogram to visually help you understand the distribution on the command-line, without the need for a graphic user interface.
+This ASCII histogram can be useful when you just want some coarse and general information on the input dataset.
+It is also useful when working on a server (where you may not have a graphic user interface), and finally, it's fast.
 
 @example
 $ aststatistics astmatch.fits -cMATCH_DIST
 $ rm astmatch.fits
 @end example
 
-The units of this column are the same as the columns you gave to Match: in
-degrees. You see that while almost all the objects matched very nicely, the
-maximum distance is roughly 0.31 arcseconds. This is why we asked for an
-aperture of 0.35 arcseconds when doing the match.
+The units of this column are the same as those of the columns you gave to Match: degrees.
+You see that while almost all the objects matched very nicely, the maximum 
distance is roughly 0.31 arcseconds.
+This is why we asked for an aperture of 0.35 arcseconds when doing the match.
 
-Gnuastro's Table program can also be used to measure the colors using the
-command below. As before, the @option{-c1,2} option will tell Table to
-print the first two columns. With the @option{--range=SN_F160W,7,inf} we
-only keep the rows that have a F160W signal-to-noise ratio larger than
-7@footnote{The value of 7 is taken from the clump S/N threshold in F160W
-(where the clumps were defined).}.
+Gnuastro's Table program can also be used to measure the colors using the 
command below.
+As before, the @option{-c1,2} option will tell Table to print the first two 
columns.
+With @option{--range=SN_F160W,7,inf} we only keep the rows that have an F160W signal-to-noise ratio larger than 7@footnote{The value of 7 is taken from the clump S/N threshold in F160W (where the clumps were defined).}.
 
-Finally, for estimating the colors, we use Table's column arithmetic
-feature. It uses the same notation as the Arithmetic program (see
-@ref{Reverse polish notation}), with almost all the same operators (see
-@ref{Arithmetic operators}). You can use column arithmetic in any output
-column, just put the value in double quotations and start the value with
-@code{arith} (followed by a space) like below. In column-arithmetic, you
-can identify columns by number or name, see @ref{Column arithmetic}.
+Finally, for estimating the colors, we use Table's column arithmetic feature.
+It uses the same notation as the Arithmetic program (see @ref{Reverse polish 
notation}), with almost all the same operators (see @ref{Arithmetic operators}).
+You can use column arithmetic in any output column: just put the value in double quotes and start it with @code{arith} (followed by a space), like below.
+In column-arithmetic, you can identify columns by number or name, see 
@ref{Column arithmetic}.
 
 @example
 $ asttable cat/xdf-f160w-f105w.fits -ocat/f105w-f160w.fits \
@@ -4028,20 +3113,16 @@ $ asttable cat/xdf-f160w-f105w.fits 
-ocat/f105w-f160w.fits \
 @end example
 
 @noindent
-You can inspect the distribution of colors with the Statistics program. But
-first, let's give the color column a proper name.
+You can inspect the distribution of colors with the Statistics program.
+But first, let's give the color column a proper name.
 
 @example
 $ astfits cat/f105w-f160w.fits --update=TTYPE5,COLOR_F105W_F160W
 $ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W
 @end example
 
-You can later use Gnuastro's Statistics program with the
-@option{--histogram} option to build a much more fine-grained histogram as
-a table to feed into your favorite plotting program for a much more
-accurate/appealing plot (for example with PGFPlots in @LaTeX{}). If you
-just want a specific measure, for example the mean, median and standard
-deviation, you can ask for them specifically with this command:
+You can later use Gnuastro's Statistics program with the @option{--histogram} option to build a much more fine-grained histogram as a table, which you can feed into your favorite plotting program for a more accurate/appealing plot (for example, with PGFPlots in @LaTeX{}).
+If you just want specific measures, for example the mean, median and standard deviation, you can ask for them with this command:
 
 @example
 $ aststatistics cat/f105w-f160w.fits -cCOLOR_F105W_F160W \
@@ -4051,20 +3132,15 @@ $ aststatistics cat/f105w-f160w.fits 
-cCOLOR_F105W_F160W \
 
 @node Aperture photometry, Finding reddest clumps and visual inspection, 
Working with catalogs estimating colors, General program usage tutorial
 @subsection Aperture photometry
-Some researchers prefer to have colors in a fixed aperture for all the
-objects. The colors we calculated in @ref{Working with catalogs estimating
-colors} used a different segmentation map for each object. This might not
-satisfy some science cases. To make a catalog from fixed apertures, we
-should make a labeled image which has a fixed label for each aperture. That
-labeled image can be given to MakeCatalog instead of Segment's labeled
-detection image.
+Some researchers prefer to have colors in a fixed aperture for all the objects.
+The colors we calculated in @ref{Working with catalogs estimating colors} used 
a different segmentation map for each object.
+This may not be suitable for some science cases.
+To make a catalog from fixed apertures, we should make a labeled image which 
has a fixed label for each aperture.
+That labeled image can be given to MakeCatalog instead of Segment's labeled 
detection image.
 
 @cindex GNU AWK
-To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see
-@ref{MakeProfiles}). We'll first read the clump positions from the F160W
-catalog, then use AWK to set the other parameters of each profile to be a
-fixed circle of radius 5 pixels (recall that we want all apertures to be
-identical in this scenario).
+To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see 
@ref{MakeProfiles}).
+We'll first read the clump positions from the F160W catalog, then use AWK to 
set the other parameters of each profile to be a fixed circle of radius 5 
pixels (recall that we want all apertures to be identical in this scenario).
 
 @example
 $ rm *.fits *.txt
@@ -4073,15 +3149,10 @@ $ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC         
           \
            > apertures.txt
 @end example
 
-We can now feed this catalog into MakeProfiles using the command below to
-build the apertures over the image. The most important option for this
-particular job is @option{--mforflatpix}, it tells MakeProfiles that the
-values in the magnitude column should be used for each pixel of a flat
-profile. Without it, MakeProfiles would build the profiles such that the
-@emph{sum} of the pixels of each profile would have a @emph{magnitude} (in
-log-scale) of the value given in that column (what you would expect when
-simulating a galaxy for example). See @ref{Invoking astmkprof} for details
-on the options.
+We can now feed this catalog into MakeProfiles using the command below to 
build the apertures over the image.
+The most important option for this particular job is @option{--mforflatpix}, 
it tells MakeProfiles that the values in the magnitude column should be used 
for each pixel of a flat profile.
+Without it, MakeProfiles would build the profiles such that the @emph{sum} of 
the pixels of each profile would have a @emph{magnitude} (in log-scale) of the 
value given in that column (what you would expect when simulating a galaxy for 
example).
+See @ref{Invoking astmkprof} for details on the options.
 
 @example
 $ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits     \
@@ -4089,25 +3160,17 @@ $ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits     \
             --mode=wcs
 @end example
 
-The first thing you might notice in the printed information is that the
-profiles are not built in order. This is because MakeProfiles works in
-parallel, and parallel CPU operations are asynchronous. You can try running
-MakeProfiles with one thread (using @option{--numthreads=1}) to see how
-order is respected in that case.
+The first thing you might notice in the printed information is that the 
profiles are not built in order.
+This is because MakeProfiles works in parallel, and parallel CPU operations 
are asynchronous.
+You can try running MakeProfiles with one thread (using 
@option{--numthreads=1}) to see how order is respected in that case.
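
For example, keeping the options of the call above (some of which are elided in this snippet for space), a single-threaded run would look like this:

@example
$ astmkprof apertures.txt --background=flat-ir/xdf-f160w.fits \
            --mode=wcs --numthreads=1
@end example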
 
-Open the output @file{apertures.fits} file and see the result. Where the
-apertures overlap, you will notice that one label has replaced the other
-(because of the @option{--replace} option). In the future, MakeCatalog will
-be able to work with overlapping labels, but currently it doesn't. If you
-are interested, please join us in completing Gnuastro with added
-improvements like this (see task 14750
-@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
+Open the output @file{apertures.fits} file and see the result.
+Where the apertures overlap, you will notice that one label has replaced the 
other (because of the @option{--replace} option).
+In the future, MakeCatalog will be able to work with overlapping labels, but 
currently it doesn't.
+If you are interested, please join us in completing Gnuastro with added 
improvements like this (see task 14750 
@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
 
-We can now feed the @file{apertures.fits} labeled image into MakeCatalog
-instead of Segment's output as shown below. In comparison with the previous
-MakeCatalog call, you will notice that there is no more
-@option{--clumpscat} option, since each aperture is treated as a separate
-``object'' here.
+We can now feed the @file{apertures.fits} labeled image into MakeCatalog 
instead of Segment's output as shown below.
+In comparison with the previous MakeCatalog call, you will notice that there 
is no more @option{--clumpscat} option, since each aperture is treated as a 
separate ``object'' here.
 
 @example
 $ astmkcatalog apertures.fits -h1 --zeropoint=26.27        \
@@ -4116,10 +3179,8 @@ $ astmkcatalog apertures.fits -h1 --zeropoint=26.27       \
                --output=cat/xdf-f105w-aper.fits
 @end example
 
-This catalog has the same number of rows as the catalog produced from
-clumps in @ref{Working with catalogs estimating colors}. Therefore similar
-to how we found colors, you can compare the aperture and clump magnitudes
-for example.
+This catalog has the same number of rows as the catalog produced from clumps 
in @ref{Working with catalogs estimating colors}.
+Therefore, similar to how we found colors, you can compare the aperture and 
clump magnitudes, for example.
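
As a sketch of one way to do this comparison, assuming the clumps catalog from @ref{Working with catalogs estimating colors} is in @file{cat/xdf-f105w.fits} and that both catalogs have a @code{MAGNITUDE} column (adjust these names to your own files):

@example
## File and column names assumed; adjust to your own catalogs.
$ asttable cat/xdf-f105w-aper.fits -cMAGNITUDE > aper-mag.txt
$ asttable cat/xdf-f105w.fits -hCLUMPS -cMAGNITUDE > clump-mag.txt
$ paste aper-mag.txt clump-mag.txt
@end example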
 
 You can also change the filter name and zeropoint magnitudes and run this
 command again to have the fixed aperture magnitude in the F160W filter and
@@ -4129,23 +3190,17 @@ measure colors on apertures.
 @node Finding reddest clumps and visual inspection, Citing and acknowledging 
Gnuastro, Aperture photometry, General program usage tutorial
 @subsection Finding reddest clumps and visual inspection
 @cindex GNU AWK
-As a final step, let's go back to the original clumps-based color
-measurement we generated in @ref{Working with catalogs estimating
-colors}. We'll find the objects with the strongest color and make a cutout
-to inspect them visually and finally, we'll see how they are located on the
-image. With the command below, we'll select the reddest objects (those with
-a color larger than 1.5):
+As a final step, let's go back to the original clumps-based color measurement 
we generated in @ref{Working with catalogs estimating colors}.
+We'll find the objects with the strongest color and make a cutout to inspect 
them visually and finally, we'll see how they are located on the image.
+With the command below, we'll select the reddest objects (those with a color 
larger than 1.5):
 
 @example
 $ asttable cat/f105w-f160w.fits --range=COLOR_F105W_F160W,1.5,inf
 @end example
 
-We want to crop the F160W image around each of these objects, but we need a
-unique identifier for them first. We'll define this identifier using the
-object and clump labels (with an underscore between them) and feed the
-output of the command above to AWK to generate a catalog. Note that since
-we are making a plain text table, we'll define the column metadata manually
-(see @ref{Gnuastro text table format}).
+We want to crop the F160W image around each of these objects, but we need a 
unique identifier for them first.
+We'll define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
+Note that since we are making a plain text table, we'll define the column 
metadata manually (see @ref{Gnuastro text table format}).
 
 @example
 $ echo "# Column 1: ID [name, str10] Object ID" > reddest.txt
@@ -4154,12 +3209,10 @@ $ asttable cat/f105w-f160w.fits --range=COLOR_F105W_F160W,1.5,inf \
            >> reddest.txt
 @end example
 
-We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what
-these objects look like. To keep things clean, we'll make a directory
-called @file{crop-red} and ask Crop to save the crops in this
-directory. We'll also add a @file{-f160w.fits} suffix to the crops (to
-remind us which image they came from). The width of the crops will be 15
-arcseconds.
+We can now feed @file{reddest.txt} into Gnuastro's Crop program to see what 
these objects look like.
+To keep things clean, we'll make a directory called @file{crop-red} and ask 
Crop to save the crops in this directory.
+We'll also add a @file{-f160w.fits} suffix to the crops (to remind us which 
image they came from).
+The width of the crops will be 15 arcseconds.
 
 @example
 $ mkdir crop-red
@@ -4168,16 +3221,12 @@ $ astcrop flat-ir/xdf-f160w.fits --mode=wcs --namecol=ID \
           --suffix=-f160w.fits --output=crop-red
 @end example
 
-You can see all the cropped FITS files in the @file{crop-red}
-directory. Like the MakeProfiles command in @ref{Aperture photometry}, you
-might notice that the crops aren't made in order. This is because each crop
-is independent of the rest, therefore crops are done in parallel, and
-parallel operations are asynchronous. In the command above, you can change
-@file{f160w} to @file{f105w} to make the crops in both filters.
+You can see all the cropped FITS files in the @file{crop-red} directory.
+Like the MakeProfiles command in @ref{Aperture photometry}, you might notice 
that the crops aren't made in order.
+This is because each crop is independent of the rest; therefore crops are done 
in parallel, and parallel operations are asynchronous.
+In the command above, you can change @file{f160w} to @file{f105w} to make the 
crops in both filters.
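
For example, a sketch of the F105W call could look like the following (the @option{--catalog} and @option{--width} values here are assumptions standing in for the parts of the command that were elided above; 15/3600 degrees corresponds to the 15 arcsec width):

@example
## --catalog and --width are assumed; match your earlier F160W call.
$ astcrop flat-ir/xdf-f105w.fits --mode=wcs --namecol=ID \
          --catalog=reddest.txt --width=15/3600          \
          --suffix=-f105w.fits --output=crop-red
@end example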
 
-To view the crops more easily (not having to open ds9 for each image), you
-can convert the FITS crops into the JPEG format with a shell loop like
-below.
+To view the crops more easily (not having to open ds9 for each image), you can 
convert the FITS crops into the JPEG format with a shell loop like below.
 
 @example
 $ cd crop-red
@@ -4187,18 +3236,14 @@ $ for f in *.fits; do                              \
 $ cd ..
 @end example
 
-You can now use your general graphic user interface image viewer to flip
-through the images more easily, or import them into your papers/reports.
+You can now use your favorite graphical image viewer to flip through the 
images more easily, or import them into your papers/reports.
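
For example, with GNOME's Eye of GNOME (any image viewer you already have will do just as well):

@example
$ eog crop-red/*.jpg
@end example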
 
 @cindex GNU Parallel
-The @code{for} loop above to convert the images will do the job in series:
-each file is converted only after the previous one is complete. If you have
-@url{https://www.gnu.org/s/parallel, GNU Parallel}, you can greatly speed
-up this conversion. GNU Parallel will run the separate commands
-simultaneously on different CPU threads in parallel. For more information
-on efficiently using your threads, see @ref{Multi-threaded
-operations}. Here is a replacement for the shell @code{for} loop above
-using GNU Parallel.
+The @code{for} loop above to convert the images will do the job in series: 
each file is converted only after the previous one is complete.
+If you have @url{https://www.gnu.org/s/parallel, GNU Parallel}, you can 
greatly speed up this conversion.
+GNU Parallel will run the separate commands simultaneously on different CPU 
threads in parallel.
+For more information on efficiently using your threads, see 
@ref{Multi-threaded operations}.
+Here is a replacement for the shell @code{for} loop above using GNU Parallel.
 
 @example
 $ cd crop-red
@@ -4209,10 +3254,10 @@ $ cd ..
 
 @cindex DS9
 @cindex SAO DS9
-As the final action, let's see how these objects are positioned over the
-dataset. DS9 has the ``Region''s concept for this purpose. You just have to
-convert your catalog into a ``region file'' to feed into DS9. To do that,
-you can use AWK again as shown below.
+As the final action, let's see how these objects are positioned over the 
dataset.
+DS9 has the ``Region'' concept for this purpose.
+You just have to convert your catalog into a ``region file'' to feed into DS9.
+To do that, you can use AWK again as shown below.
 
 @example
 $ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";      \
@@ -4222,10 +3267,8 @@ $ awk 'BEGIN@{print "# Region file format: DS9 version 4.1";      \
       reddest.txt > reddest.reg
 @end example
 
-This region file can be loaded into DS9 with its @option{-regions} option
-to display over any image (that has world coordinate system). In the
-example below, we'll open Segment's output and load the regions over all
-the extensions (to see the image and the respective clump):
+This region file can be loaded into DS9 with its @option{-regions} option to 
display over any image (that has a world coordinate system).
+In the example below, we'll open Segment's output and load the regions over 
all the extensions (to see the image and the respective clump):
 
 @example
 $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
@@ -4235,14 +3278,10 @@ $ ds9 -mecube seg/xdf-f160w.fits -zscale -zoom to fit    \
 
 @node Citing and acknowledging Gnuastro,  , Finding reddest clumps and visual 
inspection, General program usage tutorial
 @subsection Citing and acknowledging Gnuastro
-In conclusion, we hope this extended tutorial has been a good starting
-point to help in your exciting research. If this book or any of the
-programs in Gnuastro have been useful for your research, please cite the
-respective papers, and acknowledge the funding agencies that made all of
-this possible. All Gnuastro programs have a @option{--cite} option to
-facilitate the citation and acknowledgment. Just note that it may be
-necessary to cite additional papers for different programs, so please try
-it out on all the programs that you used, for example:
+In conclusion, we hope this extended tutorial has been a good starting point 
to help in your exciting research.
+If this book or any of the programs in Gnuastro have been useful for your 
research, please cite the respective papers, and acknowledge the funding 
agencies that made all of this possible.
+All Gnuastro programs have a @option{--cite} option to facilitate the citation 
and acknowledgment.
+Just note that it may be necessary to cite additional papers for different 
programs, so please try it out on all the programs that you used, for example:
 
 @example
 $ astmkcatalog --cite
@@ -4259,67 +3298,44 @@ $ astnoisechisel --cite
 @node Detecting large extended targets,  , General program usage tutorial, 
Tutorials
 @section Detecting large extended targets
 
-The outer wings of large and extended objects can sink into the noise very
-gradually and can have a large variety of shapes (for example due to tidal
-interactions). Therefore separating the outer boundaries of the galaxies
-from the noise can be particularly tricky. Besides causing an
-under-estimation in the total estimated brightness of the target, failure
-to detect such faint wings will also cause a bias in the noise
-measurements, thereby hampering the accuracy of any measurement on the
-dataset. Therefore even if they don't constitute a significant fraction of
-the target's light, or aren't your primary target, these regions must not
-be ignored. In this tutorial, we'll walk you through the strategy of
-detecting such targets using @ref{NoiseChisel}.
+The outer wings of large and extended objects can sink into the noise very 
gradually and can have a large variety of shapes (for example due to tidal 
interactions).
+Therefore separating the outer boundaries of the galaxies from the noise can 
be particularly tricky.
+Besides causing an under-estimation of the target's total brightness, failure 
to detect such faint wings will also cause a bias in the noise measurements, 
thereby hampering the accuracy of any measurement on the dataset.
+Therefore, even if they don't constitute a significant fraction of the 
target's light, or aren't your primary target, these regions must not be 
ignored.
+In this tutorial, we'll walk you through the strategy of detecting such 
targets using @ref{NoiseChisel}.
 
 @cartouche
 @noindent
-@strong{Don't start with this tutorial:} If you haven't already completed
-@ref{General program usage tutorial}, we strongly recommend going through
-that tutorial before starting this one. Basic features like access to this
-book on the command-line, the configuration files of Gnuastro's programs,
-benefiting from the modular nature of the programs, viewing multi-extension
-FITS files, or using NoiseChisel's outputs are discussed in more detail
-there.
+@strong{Don't start with this tutorial:} If you haven't already completed 
@ref{General program usage tutorial}, we strongly recommend going through that 
tutorial before starting this one.
+Basic features like access to this book on the command-line, the configuration 
files of Gnuastro's programs, benefiting from the modular nature of the 
programs, viewing multi-extension FITS files, or using NoiseChisel's outputs 
are discussed in more detail there.
 @end cartouche
 
 @cindex M51
 @cindex NGC5195
 @cindex SDSS, Sloan Digital Sky Survey
 @cindex Sloan Digital Sky Survey, SDSS
-We'll try to detect the faint tidal wings of the beautiful M51
-group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this
-tutorial. We'll use a dataset/image from the public
-@url{http://www.sdss.org/, Sloan Digital Sky Survey}, or SDSS. Due to its
-more peculiar low surface brightness structure/features, we'll focus on the
-dwarf companion galaxy of the group (or NGC 5195). To get the image, you
-can use SDSS's @url{https://dr12.sdss.org/fields, Simple field search}
-tool. As long as it is covered by the SDSS, you can find an image
-containing your desired target either by providing a standard name (if it
-has one), or its coordinates. To access the dataset we will use here, write
-@code{NGC5195} in the ``Object Name'' field and press ``Submit'' button.
+We'll try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
+We'll use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
+Due to its more peculiar low surface brightness structure/features, we'll 
focus on the dwarf companion galaxy of the group (or NGC 5195).
+To get the image, you can use SDSS's @url{https://dr12.sdss.org/fields, Simple 
field search} tool.
+As long as it is covered by the SDSS, you can find an image containing your 
desired target either by providing a standard name (if it has one), or its 
coordinates.
+To access the dataset we will use here, write @code{NGC5195} in the ``Object 
Name'' field and press the ``Submit'' button.
 
 @cartouche
 @noindent
-@strong{Type the example commands:} Try to type the example commands on
-your terminal and use the history feature of your command-line (by pressing
-the ``up'' button to retrieve previous commands). Don't simply copy and
-paste the commands shown here. This will help simulate future situations
-when you are processing your own datasets.
+@strong{Type the example commands:} Try to type the example commands on your 
terminal and use the history feature of your command-line (by pressing the 
``up'' button to retrieve previous commands).
+Don't simply copy and paste the commands shown here.
+This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
 @cindex GNU Wget
-You can see the list of available filters under the color image. For this
-demonstration, we'll use the r-band filter image.  By clicking on the
-``r-band FITS'' link, you can download the image. Alternatively, you can
-just run the following command to download it with GNU Wget@footnote{To
-make the command easier to view on screen or in a page, we have defined the
-top URL of the image as the @code{topurl} shell variable. You can just
-replace the value of this variable with @code{$topurl} in the
-@command{wget} command.}. To keep things clean, let's also put it in a
-directory called @file{ngc5195}. With the @option{-O} option, we are asking
-Wget to save the downloaded file with a more manageable name:
-@file{r.fits.bz2} (this is an r-band image of NGC 5195, which was the
-directory name).
+You can see the list of available filters under the color image.
+For this demonstration, we'll use the r-band filter image.
+By clicking on the ``r-band FITS'' link, you can download the image.
+Alternatively, you can just run the following command to download it with GNU 
Wget@footnote{To make the command easier to view on screen or in a page, we 
have defined the top URL of the image as the @code{topurl} shell variable.
+You can just replace the value of this variable with @code{$topurl} in the 
@command{wget} command.}.
+To keep things clean, let's also put it in a directory called @file{ngc5195}.
+With the @option{-O} option, we are asking Wget to save the downloaded file 
with a more manageable name: @file{r.fits.bz2} (this is an r-band image of NGC 
5195, which was the directory name).
 
 @example
 $ mkdir ngc5195
@@ -4330,14 +3346,11 @@ $ wget $topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
 
 @cindex Bzip2
 @noindent
-This server keeps the files in a Bzip2 compressed file format. So we'll
-first decompress it with the following command. By convention, compression
-programs delete the original file (compressed when uncompressing, or
-uncompressed when compressing). To keep the original file, you can use the
-@option{--keep} or @option{-k} option which is available in most
-compression programs for this job. Here, we don't need the compressed file
-any more, so we'll just let @command{bunzip} delete it for us and keep the
-directory clean.
+This server keeps the files in a Bzip2 compressed file format.
+So we'll first decompress it with the following command.
+By convention, compression programs delete the original file (compressed when 
uncompressing, or uncompressed when compressing).
+To keep the original file, you can use the @option{--keep} or @option{-k} 
option which is available in most compression programs for this job.
+Here, we don't need the compressed file any more, so we'll just let 
@command{bunzip2} delete it for us and keep the directory clean.
 
 @example
 $ bunzip2 r.fits.bz2
@@ -4351,193 +3364,129 @@ $ bunzip2 r.fits.bz2
 
 @node NoiseChisel optimization, Achieved surface brightness level, Detecting 
large extended targets, Detecting large extended targets
 @subsection NoiseChisel optimization
-In @ref{Detecting large extended targets} we downloaded the single exposure
-SDSS image. Let's see how NoiseChisel operates on it with its default
-parameters:
+In @ref{Detecting large extended targets} we downloaded the single-exposure 
SDSS image.
+Let's see how NoiseChisel operates on it with its default parameters:
 
 @example
 $ astnoisechisel r.fits -h0
 @end example
 
-As described in @ref{Multiextension FITS files NoiseChisel's output},
-NoiseChisel's default output is a multi-extension FITS file. Open the
-output @file{r_detected.fits} file and have a look at the extensions, the
-first extension is only meta-data and contains NoiseChisel's configuration
-parameters. The rest are the Sky-subtracted input, the detection map, Sky
-values and Sky standard deviation.
+As described in @ref{Multiextension FITS files NoiseChisel's output}, 
NoiseChisel's default output is a multi-extension FITS file.
+Open the output @file{r_detected.fits} file and have a look at the extensions: 
the first extension is only meta-data and contains NoiseChisel's configuration 
parameters.
+The rest are the Sky-subtracted input, the detection map, Sky values and Sky 
standard deviation.
 
 @example
 $ ds9 -mecube r_detected.fits -zscale -zoom to fit
 @end example
 
-Flipping through the extensions in a FITS viewer, you will see that the
-first image (Sky-subtracted image) looks reasonable: there are no major
-artifacts due to bad Sky subtraction compared to the input. The second
-extension also seems reasonable with a large detection map that covers the
-whole of NGC5195, but also extends beyond towards the bottom of the
-image.
+Flipping through the extensions in a FITS viewer, you will see that the first 
image (Sky-subtracted image) looks reasonable: there are no major artifacts due 
to bad Sky subtraction compared to the input.
+The second extension also seems reasonable with a large detection map that 
covers the whole of NGC5195, but also extends beyond towards the bottom of the 
image.
 
 Now try flipping between the @code{DETECTIONS} and @code{SKY} extensions.
-In the @code{SKY} extension, you'll notice that there is still significant
-signal beyond the detected pixels. You can tell that this signal belongs to
-the galaxy because the far-right side of the image is dark and the brighter
-tiles are surrounding the detected pixels.
-
-The fact that signal from the galaxy remains in the Sky dataset shows that
-you haven't done a good detection. The @code{SKY} extension must not
-contain any light around the galaxy. Generally, any time your target is
-much larger than the tile size and the signal is almost flat (like this
-case), this @emph{will} happen. Therefore, when there are large objects in
-the dataset, @strong{the best place} to check the accuracy of your
-detection is the estimated Sky image.
-
-When dominated by the background, noise has a symmetric
-distribution. However, signal is not symmetric (we don't have negative
-signal). Therefore when non-constant signal is present in a noisy dataset,
-the distribution will be positively skewed. This skewness is a good measure
-of how much signal we have in the distribution. The skewness can be
-accurately measured by the difference in the mean and median: assuming no
-strong outliers, the more distant they are, the more skewed the dataset
-is. For more see @ref{Quantifying signal in a tile}.
-
-However, skewness is only a proxy for signal when the signal has structure
-(varies per pixel). Therefore, when it is approximately constant over a
-whole tile, or sub-set of the image, the signal's effect is just to shift
-the symmetric center of the noise distribution to the positive and there
-won't be any skewness (major difference between the mean and median). This
-positive@footnote{In processed images, where the Sky value can be
-over-estimated, this constant shift can be negative.} shift that preserves
-the symmetric distribution is the Sky value. When there is a gradient over
-the dataset, different tiles will have different constant
-shifts/Sky-values, for example see Figure 11 of
-@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+In the @code{SKY} extension, you'll notice that there is still significant 
signal beyond the detected pixels.
+You can tell that this signal belongs to the galaxy because the far-right side 
of the image is dark and the brighter tiles are surrounding the detected pixels.
+
+The fact that signal from the galaxy remains in the Sky dataset shows that you 
haven't done a good detection.
+The @code{SKY} extension must not contain any light around the galaxy.
+Generally, any time your target is much larger than the tile size and the 
signal is almost flat (like this case), this @emph{will} happen.
+Therefore, when there are large objects in the dataset, @strong{the best 
place} to check the accuracy of your detection is the estimated Sky image.
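
For example, to jump straight to the Sky image of the output above (DS9 accepts an extension name in brackets):

@example
$ ds9 r_detected.fits[SKY] -zscale -zoom to fit
@end example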
+
+When dominated by the background, noise has a symmetric distribution.
+However, signal is not symmetric (we don't have negative signal).
+Therefore when non-constant signal is present in a noisy dataset, the 
distribution will be positively skewed.
+This skewness is a good measure of how much signal we have in the distribution.
+The skewness can be accurately measured by the difference in the mean and 
median: assuming no strong outliers, the more distant they are, the more skewed 
the dataset is.
+For more see @ref{Quantifying signal in a tile}.
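
For example, you can ask Gnuastro's Statistics program for these two measures on this image and compare them yourself (here on the whole image, but the same logic applies within each tile):

@example
$ aststatistics r.fits -h0 --mean --median
@end example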
+
+However, skewness is only a proxy for signal when the signal has structure 
(varies per pixel).
+Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the signal's effect is just to shift the symmetric center of the 
noise distribution to the positive and there won't be any skewness (major 
difference between the mean and median).
+This positive@footnote{In processed images, where the Sky value can be 
over-estimated, this constant shift can be negative.} shift that preserves the 
symmetric distribution is the Sky value.
+When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example see Figure 11 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
-To get less scatter in measuring the mean and median (and thus better
-estimate the skewness), you will need a larger tile. So let's play with the
-tessellation a little to see how it affects the result. In Gnuastro, you
-can see the option values (@option{--tilesize} in this case) by adding the
-@option{-P} option to your last command. Try running NoiseChisel with
-@option{-P} to see its default tile size.
+To get less scatter in measuring the mean and median (and thus better estimate 
the skewness), you will need a larger tile.
+So let's play with the tessellation a little to see how it affects the result.
+In Gnuastro, you can see the option values (@option{--tilesize} in this case) 
by adding the @option{-P} option to your last command.
+Try running NoiseChisel with @option{-P} to see its default tile size.
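
For example, filtering the long output of @option{-P} with @command{grep}:

@example
$ astnoisechisel r.fits -h0 -P | grep tilesize
@end example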
 
-You can clearly see that the default tile size is indeed much smaller than
-this (huge) galaxy and its tidal features. As a result, NoiseChisel was
-unable to identify the skewness within the tiles under the outer parts of
-M51 and NGC 5159 and the threshold has been over-estimated on those
-tiles. To see which tiles were used for estimating the quantile threshold
-(no skewness was measured), you can use NoiseChisel's
-@option{--checkqthresh} option:
+You can clearly see that the default tile size is indeed much smaller than 
this (huge) galaxy and its tidal features.
+As a result, NoiseChisel was unable to identify the skewness within the tiles 
under the outer parts of M51 and NGC 5195, and the threshold has been 
over-estimated on those tiles.
+To see which tiles were used for estimating the quantile threshold (no 
skewness was measured), you can use NoiseChisel's @option{--checkqthresh} 
option:
 
 @example
 $ astnoisechisel r.fits -h0 --checkqthresh
 @end example
 
-Notice how this option doesn't allow NoiseChisel to finish. NoiseChisel
-aborted after finding and applying the quantile thresholds. When you call
-any of NoiseChisel's @option{--check*} options, by default, it will abort
-as soon as all the check steps have been written in the check file (a
-multi-extension FITS file). This allows you to focus on the problem you
-wanted to check as soon as possible (you can disable this feature with the
-@option{--continueaftercheck} option).
-
-To optimize the threshold-related settings for this image, let's playing
-with this quantile threshold check image a little. Don't forget that
-``@emph{Good statistical analysis is not a purely routine matter, and
-generally calls for more than one pass through the computer}'' (Anscombe
-1973, see @ref{Science and its tools}). A good scientist must have a good
-understanding of her tools to make a meaningful analysis. So don't hesitate
-in playing with the default configuration and reviewing the manual when you
-have a new dataset in front of you. Robust data analysis is an art,
-therefore a good scientist must first be a good artist.
-
-The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the
-convolved input image where the threshold(s) is(are) defined and
-applied. For more on the effect of convolution and thresholding, see
-Sections 3.1.1 and 3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi
-and Ichikawa [2015]}. The second extension (@code{QTHRESH_ERODE}) has a
-blank value for all the pixels of any tile that was identified as having
-significant signal. The next two extensions (@code{QTHRESH_NOERODE} and
-@code{QTHRESH_EXPAND}) are the other two quantile thresholds that are
-necessary in NoiseChisel's later steps. Every step in this file is repeated
-on the three thresholds.
-
-Play a little with the color bar of the @code{QTHRESH_ERODE} extension, you
-clearly see how the non-blank tiles around NGC 5195 have a gradient. As one
-line of attack against discarding too much signal below the threshold,
-NoiseChisel rejects outlier tiles. Go forward by three extensions to
-@code{VALUE1_NO_OUTLIER} and you will see that many of the tiles over the
-galaxy have been removed in this step. For more on the outlier rejection
-algorithm, see the latter half of @ref{Quantifying signal in a tile}.
-
-However, the default outlier rejection parameters weren't enough, and when
-you play with the color-bar, you still see a strong gradient around the
-outer tidal feature of the galaxy. You have two strategies for fixing this
-problem: 1) Increase the tile size to get more accurate measurements of
-skewness. 2) Strengthen the outlier rejection parameters to discard more of
-the tiles with signal. Fortunately in this image we have a sufficiently
-large region on the right of the image that the galaxy doesn't extend
-to. So we can use the more robust first solution. In situations where this
-doesn't happen (for example if the field of view in this image was shifted
-to have more of M51 and less sky) you are limited to a combination of the
-two solutions or just to the second solution.
+Notice how this option doesn't allow NoiseChisel to finish.
+NoiseChisel aborted after finding and applying the quantile thresholds.
+When you call any of NoiseChisel's @option{--check*} options, by default, it 
will abort as soon as all the check steps have been written in the check file 
(a multi-extension FITS file).
+This allows you to focus on the problem you wanted to check as soon as 
possible (you can disable this feature with the @option{--continueaftercheck} 
option).
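
For example, if you want the complete NoiseChisel output along with the check file in a single run, you could use something like this:

@example
$ astnoisechisel r.fits -h0 --checkqthresh --continueaftercheck
@end example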
+
+To optimize the threshold-related settings for this image, let's play with 
this quantile threshold check image a little.
+Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
+So don't hesitate in playing with the default configuration and reviewing the 
manual when you have a new dataset in front of you.
+Robust data analysis is an art, therefore a good scientist must first be a 
good artist.
+
+The first extension of @file{r_qthresh.fits} (@code{CONVOLVED}) is the 
convolved input image where the threshold(s) is(are) defined and applied.
+For more on the effect of convolution and thresholding, see Sections 3.1.1 and 
3.1.2 of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+The second extension (@code{QTHRESH_ERODE}) has a blank value for all the 
pixels of any tile that was identified as having significant signal.
+The next two extensions (@code{QTHRESH_NOERODE} and @code{QTHRESH_EXPAND}) are 
the other two quantile thresholds that are necessary in NoiseChisel's later 
steps.
+Every step in this file is repeated on the three thresholds.
+
+Play a little with the color bar of the @code{QTHRESH_ERODE} extension; you 
will clearly see how the non-blank tiles around NGC 5195 have a gradient.
+As one line of attack against discarding too much signal below the threshold, 
NoiseChisel rejects outlier tiles.
+Go forward by three extensions to @code{VALUE1_NO_OUTLIER} and you will see 
that many of the tiles over the galaxy have been removed in this step.
+For more on the outlier rejection algorithm, see the latter half of 
@ref{Quantifying signal in a tile}.
+
+However, the default outlier rejection parameters weren't enough, and when you 
play with the color-bar, you still see a strong gradient around the outer tidal 
feature of the galaxy.
+You have two strategies for fixing this problem: 1) Increase the tile size to 
get more accurate measurements of skewness.
+2) Strengthen the outlier rejection parameters to discard more of the tiles 
with signal.
+Fortunately in this image we have a sufficiently large region on the right of 
the image that the galaxy doesn't extend to.
+So we can use the more robust first solution.
+In situations where this doesn't happen (for example, if the field of view in 
this image was shifted to have more of M51 and less sky), you are limited to a 
combination of the two solutions, or just to the second.
 
 @cartouche
 @noindent
-@strong{Skipping convolution for faster tests:} The slowest step of
-NoiseChisel is the convolution of the input dataset. Therefore when your
-dataset is large (unlike the one in this test), and you are not changing
-the input dataset or kernel in multiple runs (as in the tests of this
-tutorial), it is faster to do the convolution separately once (using
-@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to
-directly feed the convolved image and avoid convolution. For more on
-@option{--convolved}, see @ref{NoiseChisel input}.
+@strong{Skipping convolution for faster tests:} The slowest step of 
NoiseChisel is the convolution of the input dataset.
+Therefore when your dataset is large (unlike the one in this test), and you 
are not changing the input dataset or kernel in multiple runs (as in the tests 
of this tutorial), it is faster to do the convolution separately once (using 
@ref{Convolve}) and use NoiseChisel's @option{--convolved} option to directly 
feed the convolved image and avoid convolution.
+For more on @option{--convolved}, see @ref{NoiseChisel input}.
 @end cartouche
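
As a rough sketch of that workflow (@file{kernel.fits} is a hypothetical file name here: it must contain the same kernel NoiseChisel would have used, and you may need to set the relevant HDU options for your files):

@example
## kernel.fits is a hypothetical kernel file (see the text above).
$ astconvolve r.fits -h0 --kernel=kernel.fits --domain=spatial \
              --output=conv.fits
$ astnoisechisel r.fits -h0 --convolved=conv.fits --tilesize=75,75
@end example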
 
-To identify the skewness caused by the flat NGC 5195 and M51 tidal features
-on the tiles under it, we thus have to choose a tile size that is larger
-than the gradient of the signal. Let's try a tile size of 75 by 75 pixels:
+To identify the skewness caused by the flat NGC 5195 and M51 tidal features on 
the tiles under it, we thus have to choose a tile size that is larger than the 
gradient of the signal.
+Let's try a tile size of 75 by 75 pixels:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --checkqthresh
 @end example
 
-You can clearly see the effect of this increased tile size: the tiles are
-much larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that
-almost all the previous tiles under the galaxy have been discarded and we
-only have a few tiles on the edge with a gradient. So let's define a more
-strict condition to keep tiles:
+You can clearly see the effect of this increased tile size: the tiles are much 
larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that almost all 
the previous tiles under the galaxy have been discarded and we only have a few 
tiles on the edge with a gradient.
+So let's define a more strict condition to keep tiles:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --checkqthresh
 @end example
 
-After constraining @code{--meanmedqdiff}, NoiseChisel stopped with a
-different error. Please read it: at the start, it says that only 6 tiles
-passed the constraint while you have asked for 9. The @file{r_qthresh.fits}
-image also only has 8 extensions (not the original 15). Take a look at the
-initially selected tiles and those after outlier rejection. You can see the
-place of the tiles that passed. They seem to be in the good place (very far
-away from the M51 group and its tidal feature. Using the 6 nearest
-neighbors is also not too bad. So let's decrease the number of neighboring
-tiles for interpolation so NoiseChisel can continue:
+After constraining @code{--meanmedqdiff}, NoiseChisel stopped with a different 
error.
+Please read it: at the start, it says that only 6 tiles passed the constraint 
while you have asked for 9.
+The @file{r_qthresh.fits} image also only has 8 extensions (not the original 
15).
+Take a look at the initially selected tiles and those after outlier rejection.
+You can see the place of the tiles that passed.
+They seem to be in a good place (very far away from the M51 group and its 
tidal feature).
+Using the 6 nearest neighbors is also not too bad.
+So let's decrease the number of neighboring tiles for interpolation so 
NoiseChisel can continue:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --interpnumngb=6 --checkqthresh
 @end example
 
-The next group of extensions (those ending with @code{_INTERP}), give a
-value to all blank tiles based on the nearest tiles with a measurement. The
-following group of extensions (ending with @code{_SMOOTH}) have smoothed
-the interpolated image to avoid sharp cuts on tile edges. Inspecting
-@code{THRESH1_SMOOTH}, you can see that there is no longer any significant
-gradient and no major signature of NGC 5195 exists.
+The next group of extensions (those ending with @code{_INTERP}) give a value 
to all blank tiles based on the nearest tiles with a measurement.
+The following group of extensions (ending with @code{_SMOOTH}) have smoothed 
the interpolated image to avoid sharp cuts on tile edges.
+Inspecting @code{THRESH1_SMOOTH}, you can see that there is no longer any 
significant gradient and no major signature of NGC 5195 exists.
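
As before, you can flip through these check extensions by opening the file as a multi-extension cube:

@example
$ ds9 -mecube r_qthresh.fits -zscale -zoom to fit
@end example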
 
-We can now remove @option{--checkqthresh} and let NoiseChisel proceed with
-its detection. Also, similar to the argument in @ref{NoiseChisel
-optimization for detection}, in the command above, we set the
-pseudo-detection signal-to-noise ratio quantile (@option{--snquant}) to
-0.95.
+We can now remove @option{--checkqthresh} and let NoiseChisel proceed with its 
detection.
+Also, similar to the argument in @ref{NoiseChisel optimization for detection}, 
in the command below, we set the pseudo-detection signal-to-noise ratio 
quantile (@option{--snquant}) to 0.95.
 
 @example
 $ rm r_qthresh.fits
@@ -4545,40 +3494,27 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --interpnumngb=6 --snquant=0.95
 @end example
 
-Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see
-the right-ward edges in particular have many holes that are fully
-surrounded by signal and the signal stretches out in the noise very thinly
-(the size of the holes increases as we go out). This suggests that there is
-still signal that can be detected. You can confirm this guess by looking at
-the @code{SKY} extension to see that indeed, there is a clear footprint of
-the M51 group in the Sky image (which is not good!). Therefore, we should
-dig deeper into the noise.
+Looking at the @code{DETECTIONS} extension of NoiseChisel's output, we see 
that the right-ward edges in particular have many holes that are fully 
surrounded by signal, and that the signal stretches out very thinly into the 
noise (the size of the holes increases as we go out).
+This suggests that there is still signal that can be detected.
+You can confirm this guess by looking at the @code{SKY} extension to see that 
indeed, there is a clear footprint of the M51 group in the Sky image (which is 
not good!).
+Therefore, we should dig deeper into the noise.
 
-With the @option{--detgrowquant} option, NoiseChisel will use the
-detections as seeds and grow them in to the noise. Its value is the
-ultimate limit of the growth in units of quantile (between 0 and
-1). Therefore @option{--detgrowquant=1} means no growth and
-@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which
-is usually too much!). Try running the previous command with various values
-(from 0.6 to higher values) to see this option's effect. For this
-particularly huge galaxy (with signal that extends very gradually into the
-noise), we'll set it to @option{0.65}:
+With the @option{--detgrowquant} option, NoiseChisel will use the detections 
as seeds and grow them into the noise.
+Its value is the ultimate limit of the growth in units of quantile (between 0 
and 1).
+Therefore @option{--detgrowquant=1} means no growth and 
@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is 
usually too much!).
+Try running the previous command with various values (from 0.6 to higher 
values) to see this option's effect.
+For this particularly huge galaxy (with signal that extends very gradually 
into the noise), we'll set it to @option{0.65}:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001 \
                  --interpnumngb=6 --snquant=0.95 --detgrowquant=0.65
 @end example
 
-Beyond this level (smaller @option{--detgrowquant} values), you see the
-smaller background galaxies starting to create thin spider-leg-like
-features, showing that we are following correlated noise for too much.
+Beyond this level (smaller @option{--detgrowquant} values), you see the 
smaller background galaxies starting to create thin spider-leg-like features, 
showing that we are following correlated noise too far.
 
-Now, when you look at the @code{DETECTIONS} extension, you see the wings of
-the galaxy being detected much farther out, But you also see many holes
-which are clearly just caused by noise. After growing the objects,
-NoiseChisel also allows you to fill such holes when they are smaller than a
-certain size through the @option{--detgrowmaxholesize} option. In this
-case, a maximum area/size of 10,000 pixels seems to be good:
+Now, when you look at the @code{DETECTIONS} extension, you see the wings of 
the galaxy being detected much farther out, but you also see many holes which 
are clearly just caused by noise.
+After growing the objects, NoiseChisel also allows you to fill such holes when 
they are smaller than a certain size through the @option{--detgrowmaxholesize} 
option.
+In this case, a maximum area/size of 10,000 pixels seems to be good:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
@@ -4586,17 +3522,13 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
                  --detgrowmaxholesize=10000
 @end example
 
-The detection looks good now, but when you look in to the @code{SKY}
-extension, you still clearly still see a footprint of the galaxy. We'll
-leave it as an exercise for you to play with NoiseChisel further and
-improve the detected pixels.
+The detection looks good now, but when you look into the @code{SKY} 
extension, you still clearly see a footprint of the galaxy.
+We'll leave it as an exercise for you to play with NoiseChisel further and 
improve the detected pixels.
 
-So, we'll just stop with one last tool NoiseChisel gives you to get a
-slightly better estimation of the Sky: @option{--minskyfrac}. On each tile,
-NoiseChisel will only measure the Sky-level if the fraction of undetected
-pixels is larger than the value given to this option. To avoid the edges of
-the galaxy, we'll set it to @option{0.9}. Therefore, tiles that are covered
-by detected pixels for more than @mymath{10\%} of their area are ignored.
+So, we'll just stop with one last tool NoiseChisel gives you to get a slightly 
better estimation of the Sky: @option{--minskyfrac}.
+On each tile, NoiseChisel will only measure the Sky-level if the fraction of 
undetected pixels is larger than the value given to this option.
+To avoid the edges of the galaxy, we'll set it to @option{0.9}.
+Therefore, tiles that are covered by detected pixels for more than 
@mymath{10\%} of their area are ignored.
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
@@ -4604,20 +3536,16 @@ $ astnoisechisel r.fits -h0 --tilesize=75,75 --meanmedqdiff=0.001    \
                  --detgrowmaxholesize=10000 --minskyfrac=0.9
 @end example
 
-The footprint of the galaxy still exists in the @code{SKY} extension, but
-it has decreased in significance now. Let's calculate the significance of
-the undetected gradient, in units of noise. Since the gradient is roughly
-along the horizontal axis, we'll collapse the image along the second
-(vertical) FITS dimension to have a 1D array (a table column, see its
-values with the second command).
+The footprint of the galaxy still exists in the @code{SKY} extension, but it 
has decreased in significance now.
+Let's calculate the significance of the undetected gradient, in units of noise.
+Since the gradient is roughly along the horizontal axis, we'll collapse the 
image along the second (vertical) FITS dimension to have a 1D array (a table 
column, see its values with the second command).
 
 @example
 $ astarithmetic r_detected.fits 2 collapse-mean -hSKY -ocollapsed.fits
 $ asttable collapsed.fits
 @end example
 
-We can now calculate the minimum and maximum values of this array and
-define their difference (in units of noise) as the gradient:
+We can now calculate the minimum and maximum values of this array and define 
their difference (in units of noise) as the gradient:
 
 @example
 $ grad=$(astarithmetic r_detected.fits 2 collapse-mean set-i   \
@@ -4628,89 +3556,57 @@ $ echo $std
 $ astarithmetic -q $grad $std /
 @end example
 
-The undetected gradient (@code{grad} above) is thus roughly a quarter of
-the noise. But don't forget that this is per-pixel: individually its small,
-but it extends over millions of pixels, so the total flux may still be
-relevant.
-
-When looking at the raw input shallow image, you don't see anything so far
-out of the galaxy. You might just think that ``this is all noise, I have
-just dug too deep and I'm following systematics''! If you feel like this,
-have a look at the deep images of this system in
-@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour
-deep image of this system (with a 12-inch telescope):
-@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from
-this Reddit discussion:
-@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
 In
-these deeper images you see that the outer edges of the M51 group clearly
-follow this exact structure, below in @ref{Achieved surface brightness
-level}, we'll measure the exact level.
-
-As the gradient in the @code{SKY} extension shows, and the deep images
-cited above confirm, the galaxy's signal extends even beyond this. But this
-is already far deeper than what most (if not all) other tools can detect.
-Therefore, we'll stop configuring NoiseChisel at this point in the tutorial
-and let you play with it a little more while reading more about it in
-@ref{NoiseChisel}.
-
-After finishing this tutorial please go through the NoiseChisel paper and
-its options and play with them to further decrease the gradient. This will
-greatly help you get a good feeling of the options. When you do find a
-better configuration, please send it to us and we'll mention your name here
-with your suggested configuration. Don't forget that good data analysis is
-an art, so like a sculptor, master your chisel for a good result.
+The undetected gradient (@code{grad} above) is thus roughly a quarter of the 
noise.
+But don't forget that this is per-pixel: individually it's small, but it 
extends over millions of pixels, so the total flux may still be relevant.
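As a rough illustration (assuming statistically independent pixels, which correlated noise violates), a constant level of @mymath{0.25\sigma} per pixel over @mymath{N=10^6} pixels sums to @mymath{2.5\times10^5\sigma}, while the noise of that sum only grows as @mymath{\sigma\sqrt{N}=10^3\sigma}, giving a total signal-to-noise ratio of roughly 250.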
+
+When looking at the raw input shallow image, you don't see anything so far out 
of the galaxy.
+You might just think that ``this is all noise, I have just dug too deep and 
I'm following systematics''!
+If you feel like this, have a look at the deep images of this system in 
@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12-hour 
deep image of this system (with a 12-inch telescope): 
@url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from this 
Reddit discussion: 
@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
+In these deeper images you see that the outer edges of the M51 group clearly 
follow this exact structure; below, in @ref{Achieved surface brightness level}, 
we'll measure the exact level.
+
+As the gradient in the @code{SKY} extension shows, and the deep images cited 
above confirm, the galaxy's signal extends even beyond this.
+But this is already far deeper than what most (if not all) other tools can 
detect.
+Therefore, we'll stop configuring NoiseChisel at this point in the tutorial 
and let you play with it a little more while reading more about it in 
@ref{NoiseChisel}.
+
+After finishing this tutorial please go through the NoiseChisel paper and its 
options and play with them to further decrease the gradient.
+This will greatly help you get a good feeling of the options.
+When you do find a better configuration, please send it to us and we'll 
mention your name here with your suggested configuration.
+Don't forget that good data analysis is an art, so like a sculptor, master 
your chisel for a good result.
 
 @cartouche
 @noindent
-@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this
-configuration blindly on another image. As you saw above, the reason we
-chose this particular configuration for NoiseChisel to detect the wings of
-the M51 group was strongly influenced by the noise properties of this
-particular image. So as long as your image noise has similar properties
-(from the same data-reduction step of the same database), you can use this
-configuration on any image. For images from other instruments, or
-higher-level/reduced SDSS products, please follow a similar logic to what
-was presented here and find the best configuration yourself.
+@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use this 
configuration blindly on another image.
+As you saw above, the reason we chose this particular configuration for 
NoiseChisel to detect the wings of the M51 group was strongly influenced by the 
noise properties of this particular image.
+So as long as your image noise has similar properties (from the same 
data-reduction step of the same database), you can use this configuration on 
any image.
+For images from other instruments, or higher-level/reduced SDSS products, 
please follow a similar logic to what was presented here and find the best 
configuration yourself.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Smart NoiseChisel:} As you saw during this section, there is a
-clear logic behind the optimal parameter value for each dataset. Therefore,
-we plan to capabilities to (optionally) automate some of the choices made
-here based on the actual dataset, please join us in doing this if you are
-interested. However, given the many problems in existing ``smart''
-solutions, such automatic changing of the configuration may cause more
-problems than they solve. So even when they are implemented, we would
-strongly recommend quality checks for a robust analysis.
+@strong{Smart NoiseChisel:} As you saw during this section, there is a clear 
logic behind the optimal parameter value for each dataset.
+Therefore, we plan to add capabilities to (optionally) automate some of the 
choices made here based on the actual dataset; please join us in doing this if 
you are interested.
+However, given the many problems in existing ``smart'' solutions, such 
automatic changing of the configuration may cause more problems than it solves.
+So even when they are implemented, we would strongly recommend quality checks 
for a robust analysis.
 @end cartouche
 
 @node Achieved surface brightness level,  , NoiseChisel optimization, 
Detecting large extended targets
 @subsection Achieved surface brightness level
-In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel
-for a single-exposure SDSS image of the M51 group. let's measure how deep
-we carved the signal out of noise. For this measurement, we'll need to
-estimate the average flux on the outer edges of the detection. Fortunately
-all this can be done with a few simple commands (and no higher-level
-language mini-environments like Python or IRAF) using @ref{Arithmetic} and
-@ref{MakeCatalog}.
+In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a 
single-exposure SDSS image of the M51 group.
+Let's measure how deep we carved the signal out of noise.
+For this measurement, we'll need to estimate the average flux on the outer 
edges of the detection.
+Fortunately all this can be done with a few simple commands (and no 
higher-level language mini-environments like Python or IRAF) using 
@ref{Arithmetic} and @ref{MakeCatalog}.
 
 @cindex Opening
-First, let's separate each detected region, or give a unique label/counter
-to all the connected pixels of NoiseChisel's detection map:
+First, let's separate each detected region, or give a unique label/counter to 
all the connected pixels of NoiseChisel's detection map:
 
 @example
 $ det="r_detected.fits -hDETECTIONS"
 $ astarithmetic $det 2 connected-components -olabeled.fits
 @end example
 
-You can find the label of the main galaxy visually (by opening the
-image and hovering your mouse over the M51 group's label). But to have a
-little more fun, lets do this automatically. The M51 group detection is by
-far the largest detection in this image, this allows us to find the
-ID/label that corresponds to it. We'll first run MakeCatalog to find the
-area of all the detections, then we'll use AWK to find the ID of the
-largest object and keep it as a shell variable (@code{id}):
+You can find the label of the main galaxy visually (by opening the image and 
hovering your mouse over the M51 group's label).
+But to have a little more fun, let's do this automatically.
+The M51 group detection is by far the largest detection in this image; this 
allows us to find the ID/label that corresponds to it.
+We'll first run MakeCatalog to find the area of all the detections, then we'll 
use AWK to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
 
 @example
 $ astmkcatalog labeled.fits --ids --geoarea -h1 -ocat.txt
@@ -4718,10 +3614,9 @@ $ id=$(awk '!/^#/@{if($2>max) @{id=$1; max=$2@}@} END@{print id@}' cat.txt)
 $ echo $id
 @end example
 
-To separate the outer edges of the detections, we'll need to ``erode'' the
-M51 group detection. We'll erode three times (to have more pixels and thus
-less scatter), using a maximum connectivity of 2 (8-connected
-neighbors). We'll then save the output in @file{eroded.fits}.
+To separate the outer edges of the detections, we'll need to ``erode'' the M51 
group detection.
+We'll erode three times (to have more pixels and thus less scatter), using a 
maximum connectivity of 2 (8-connected neighbors).
+We'll then save the output in @file{eroded.fits}.
 
 @example
 $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
@@ -4729,50 +3624,37 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 
erode \
 @end example
 
 @noindent
-In @file{labeled.fits}, we can now set all the 1-valued pixels of
-@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to
-the previous command. We'll need the pixels of the M51 group in
-@code{labeled.fits} two times: once to do the erosion, another time to find
-the outer pixel layer. To do this (and be efficient and more readable)
-we'll use the @code{set-i} operator. In the command below, it will
-save/set/name the pixels of the M51 group as the `@code{i}'. In this way we
-can use it any number of times afterwards, while only reading it from disk
-and finding M51's pixels once.
+In @file{labeled.fits}, we can now set all the 1-valued pixels of 
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the 
previous command.
+We'll need the pixels of the M51 group in @file{labeled.fits} two times: once 
to do the erosion, another time to find the outer pixel layer.
+To do this (and be efficient and more readable) we'll use the @code{set-i} 
operator.
+In the command below, it will save/set/name the pixels of the M51 group as 
`@code{i}'.
+In this way we can use it any number of times afterwards, while only reading 
it from disk and finding M51's pixels once.
 
 @example
 $ astarithmetic labeled.fits $id eq set-i i \
                 i 2 erode 2 erode 2 erode 0 where -oedge.fits
 @end example
 
-Open the image and have a look. You'll see that the detected edge of the
-M51 group is now clearly visible. You can use @file{edge.fits} to mark
-(set to blank) this boundary on the input image and get a visual feeling of
-how far it extends:
+Open the image and have a look.
+You'll see that the detected edge of the M51 group is now clearly visible.
+You can use @file{edge.fits} to mark (set to blank) this boundary on the input 
image and get a visual feeling of how far it extends:
 
 @example
 $ astarithmetic r.fits edge.fits nan where -ob-masked.fits -h0
 @end example
 
-To quantify how deep we have detected the low-surface brightness regions,
-we'll use the command below. In short it just divides all the non-zero
-pixels of @file{edge.fits} in the Sky subtracted input (first extension
-of NoiseChisel's output) by the pixel standard deviation of the same
-pixel. This will give us a signal-to-noise ratio image. The mean value of
-this image shows the level of surface brightness that we have achieved.
-
-You can also break the command below into multiple calls to Arithmetic and
-create temporary files to understand it better. However, if you have a look
-at @ref{Reverse polish notation} and @ref{Arithmetic operators}, you should
-be able to easily understand what your computer does when you run this
-command@footnote{@file{edge.fits} (extension @code{1}) is a binary (0
-or 1 valued) image. Applying the @code{not} operator on it, just flips all
-its pixels. Through the @code{where} operator, we are setting all the newly
-1-valued pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY})
-to NaN/blank. In the second line, we are dividing all the non-blank values
-by @file{r_detected.fits} (extension @code{SKY_STD}). This gives the
-signal-to-noise ratio for each of the pixels on the boundary. Finally, with
-the @code{meanvalue} operator, we are taking the mean value of all the
-non-blank pixels and reporting that as a single number.}.
+To quantify how deep we have detected the low-surface brightness regions, 
we'll use the command below.
+In short, for every non-zero pixel of @file{edge.fits}, it divides the 
Sky-subtracted input (first extension of NoiseChisel's output) by the standard 
deviation of that same pixel.
+This will give us a signal-to-noise ratio image.
+The mean value of this image shows the level of surface brightness that we 
have achieved.
+
+You can also break the command below into multiple calls to Arithmetic and 
create temporary files to understand it better.
+However, if you have a look at @ref{Reverse polish notation} and 
@ref{Arithmetic operators}, you should be able to easily understand what your 
computer does when you run this command@footnote{@file{edge.fits} (extension 
@code{1}) is a binary (0 or 1 valued) image.
+Applying the @code{not} operator on it just flips all its pixels.
+Through the @code{where} operator, we are setting all the newly 1-valued 
pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
+In the second line, we are dividing all the non-blank values by 
@file{r_detected.fits} (extension @code{SKY_STD}).
+This gives the signal-to-noise ratio for each of the pixels on the boundary.
+Finally, with the @code{meanvalue} operator, we are taking the mean value of 
all the non-blank pixels and reporting that as a single number.}.
 
 @example
 $ edge="edge.fits -h1"
@@ -4783,14 +3665,10 @@ $ astarithmetic $skysub $skystd / $edge not nan where   
    \
 @end example
 
 @cindex Surface brightness
-We have thus detected the wings of the M51 group down to roughly 1/4th of
-the noise level in this image! But the signal-to-noise ratio is a relative
-measurement. Let's also measure the depth of our detection in absolute
-surface brightness units; or magnitudes per square arcseconds. To find out,
-we'll first need to calculate how many pixels of this image are in one
-arcsecond-squared. Fortunately the world coordinate system (or WCS) meta
-data of Gnuastro's output FITS files (in particular the @code{CDELT}
-keywords) give us this information.
+We have thus detected the wings of the M51 group down to roughly 1/4th of the 
noise level in this image!
+But the signal-to-noise ratio is a relative measurement.
+Let's also measure the depth of our detection in absolute surface brightness 
units, or magnitudes per square arcsecond.
+To find out, we'll first need to calculate how many pixels of this image are 
in one arcsecond-squared.
+Fortunately the world coordinate system (or WCS) metadata of Gnuastro's 
output FITS files (in particular the @code{CDELT} keywords) give us this 
information.
 
 @example
 $ pixscale=$(astfits r_detected.fits -h1                           \
@@ -4799,9 +3677,8 @@ $ echo $pixscale
 @end example
 
 @noindent
-Note that we multiplied the value by 3600 so we work in units of
-arc-seconds not degrees. Now, let's calculate the average sky-subtracted
-flux in the border region per pixel.
+Note that we multiplied the value by 3600 so we work in units of arc-seconds, 
not degrees.
+Now, let's calculate the average sky-subtracted flux in the border region per 
pixel.
 
 @example
 $ f=$(astarithmetic r_detected.fits edge.fits not nan where set-i \
@@ -4810,15 +3687,11 @@ $ echo $f
 @end example
 
 @noindent
-We can just multiply the two to get the average flux on this border in one
-arcsecond squared. We also have the r-band SDSS zeropoint
-magnitude@footnote{From
-@url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be
-24.80. Therefore we can get the surface brightness of the outer edge (in
-magnitudes per arcsecond squared) using the following command. Just note
-that @code{log} in AWK is in base-2 (not 10), and that AWK doesn't have a
-@code{log10} operator. So we'll do an extra division by @code{log(10)} to
-correct for this.
+We can just multiply the two to get the average flux on this border in one 
arcsecond squared.
+We also take the r-band SDSS zeropoint magnitude@footnote{From 
@url{http://classic.sdss.org/dr7/algorithms/fluxcal.html}} to be 24.80.
+Therefore we can get the surface brightness of the outer edge (in magnitudes 
per arcsecond squared) using the following command.
+Just note that @code{log} in AWK is the natural logarithm (not base-10), and 
that AWK doesn't have a @code{log10} operator.
+So we'll do an extra division by @code{log(10)} to correct for this.
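+In other words, the command below simply evaluates @code{-2.5*log10(p*f)+z}, 
where @code{p} is the number of pixels in one arcsecond squared (calculated 
above), @code{f} is the average sky-subtracted flux per pixel on the border, 
and @code{z} is the zeropoint.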
 
 @example
 $ z=24.80
@@ -4826,26 +3699,16 @@ $ echo "$pixscale $f $z" | awk '@{print 
-2.5*log($1*$2)/log(10)+$3@}'
 --> 28.2989
 @end example
 
-On a single-exposure SDSS image, we have reached a surface brightness limit
-fainter than 28 magnitudes per arcseconds squared!
+On a single-exposure SDSS image, we have reached a surface brightness limit 
fainter than 28 magnitudes per arcsecond squared!
 
-In interpreting this value, you should just have in mind that NoiseChisel
-works based on the contiguity of signal in the pixels. Therefore the larger
-the object, the deeper NoiseChisel can carve it out of the noise. In other
-words, this reported depth, is only for this particular object and dataset,
-processed with this particular NoiseChisel configuration: if the M51 group
-in this image was larger/smaller than this, or if the image was
-larger/smaller, or if we had used a different configuration, we would go
-deeper/shallower.
+In interpreting this value, you should just keep in mind that NoiseChisel 
works based on the contiguity of signal in the pixels.
+Therefore the larger the object, the deeper NoiseChisel can carve it out of 
the noise.
+In other words, this reported depth is only for this particular object and 
dataset, processed with this particular NoiseChisel configuration: if the M51 
group in this image was larger/smaller than this, or if the image was 
larger/smaller, or if we had used a different configuration, we would go 
deeper/shallower.
 
-To avoid typing all these options every time you run NoiseChisel on this
-image, you can use Gnuastro's configuration files, see @ref{Configuration
-files}. For an applied example of setting/using them, see @ref{Option
-management and configuration files}.
+To avoid typing all these options every time you run NoiseChisel on this 
image, you can use Gnuastro's configuration files, see @ref{Configuration 
files}.
+For an applied example of setting/using them, see @ref{Option management and 
configuration files}.
 
-To continue your analysis of such datasets with extended emission, you can
-use @ref{Segment} to identify all the ``clumps'' over the diffuse regions:
-background galaxies and foreground stars.
+To continue your analysis of such datasets with extended emission, you can use 
@ref{Segment} to identify all the ``clumps'' over the diffuse regions: 
background galaxies and foreground stars.
 
 @example
 $ astsegment r_detected.fits
@@ -4853,25 +3716,17 @@ $ astsegment r_detected.fits
 
 @cindex DS9
 @cindex SAO DS9
-Open the output @file{r_detected_segmented.fits} as a multi-extension data
-cube like before and flip through the first and second extensions to see
-the detected clumps (all pixels with a value larger than 1). To optimize
-the parameters and make sure you have detected what you wanted, its highly
-recommended to visually inspect the detected clumps on the input image.
-
-For visual inspection, you can make a simple shell script like below. It
-will first call MakeCatalog to estimate the positions of the clumps, then
-make an SAO ds9 region file and open ds9 with the image and region
-file. Recall that in a shell script, the numeric variables (like @code{$1},
-@code{$2}, and @code{$3} in the example below) represent the arguments
-given to the script. But when used in the AWK arguments, they refer to
-column numbers.
-
-To create the shell script, using your favorite text editor, put the
-contents below into a file called @file{check-clumps.sh}. Recall that
-everything after a @code{#} is just comments to help you understand the
-command (so read them!). Also note that if you are copying from the PDF
-version of this book, fix the single quotes in the AWK command.
+Open the output @file{r_detected_segmented.fits} as a multi-extension data 
cube like before and flip through the first and second extensions to see the 
detected clumps (all pixels with a value larger than 1).
+To optimize the parameters and make sure you have detected what you wanted, 
it's highly recommended to visually inspect the detected clumps on the input 
image.
+
+For visual inspection, you can make a simple shell script like below.
+It will first call MakeCatalog to estimate the positions of the clumps, then 
make an SAO ds9 region file and open ds9 with the image and region file.
+Recall that in a shell script, the numeric variables (like @code{$1}, 
@code{$2}, and @code{$3} in the example below) represent the arguments given to 
the script.
+But when used in the AWK arguments, they refer to column numbers.
+
+To create the shell script, using your favorite text editor, put the contents 
below into a file called @file{check-clumps.sh}.
+Recall that everything after a @code{#} is just comments to help you 
understand the command (so read them!).
+Also note that if you are copying from the PDF version of this book, fix the 
single quotes in the AWK command.
 
 @example
 #! /bin/bash
@@ -4900,9 +3755,8 @@ rm $1"_cat.fits" $1.reg
 @end example
 
 @noindent
-Finally, you just have to activate the script's executable flag with the
-command below. This will enable you to directly/easily call the script as a
-command.
+Finally, you just have to activate the script's executable flag with the 
command below.
+This will enable you to directly/easily call the script as a command.
 
 @example
 $ chmod +x check-clumps.sh
@@ -4910,72 +3764,47 @@ $ chmod +x check-clumps.sh
 
 @cindex AWK
 @cindex GNU AWK
-This script doesn't expect the @file{.fits} suffix of the input's filename
-as the first argument. Because the script produces intermediate files (a
-catalog and DS9 region file, which are later deleted). However, we don't
-want multiple instances of the script (on different files in the same
-directory) to collide (read/write to the same intermediate
-files). Therefore, we have used suffixes added to the input's name to
-identify the intermediate files. Note how all the @code{$1} instances in
-the commands (not within the AWK command@footnote{In AWK, @code{$1} refers
-to the first column, while in the shell script, it refers to the first
-argument.}) are followed by a suffix. If you want to keep the intermediate
-files, put a @code{#} at the start of the last line.
-
-The few, but high-valued, bright pixels in the central parts of the
-galaxies can hinder easy visual inspection of the fainter parts of the
-image. With the second and third arguments to this script, you can set the
-numerical values of the color map (first is minimum/black, second is
-maximum/white). You can call this script with any@footnote{Some
-modifications are necessary based on the input dataset: depending on the
-dynamic range, you have to adjust the second and third arguments. But more
-importantly, depending on the dataset's world coordinate system, you have
-to change the region @code{width}, in the AWK command. Otherwise the circle
-regions can be too small/large.} output of Segment (when
-@option{--rawoutput} is @emph{not} used) with a command like this:
+This script doesn't expect the @file{.fits} suffix of the input's filename as 
the first argument, because the script produces intermediate files (a catalog 
and a DS9 region file, which are later deleted).
+However, we don't want multiple instances of the script (on different files in 
the same directory) to collide (read/write to the same intermediate files).
+Therefore, we have used suffixes added to the input's name to identify the 
intermediate files.
+Note how all the @code{$1} instances in the commands (not within the AWK 
command@footnote{In AWK, @code{$1} refers to the first column, while in the 
shell script, it refers to the first argument.}) are followed by a suffix.
+If you want to keep the intermediate files, put a @code{#} at the start of the 
last line.
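+That is, the last line of the script would become:
+
+@example
+# rm $1"_cat.fits" $1.reg
+@end example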
+
+The few, but high-valued, bright pixels in the central parts of the galaxies 
can hinder easy visual inspection of the fainter parts of the image.
+With the second and third arguments to this script, you can set the numerical 
values of the color map (first is minimum/black, second is maximum/white).
+You can call this script with any@footnote{Some modifications are necessary 
based on the input dataset: depending on the dynamic range, you have to adjust 
the second and third arguments.
+But more importantly, depending on the dataset's world coordinate system, you 
have to change the region @code{width} in the AWK command.
+Otherwise the circle regions can be too small/large.} output of Segment (when 
@option{--rawoutput} is @emph{not} used) with a command like this:
 
 @example
 $ ./check-clumps.sh r_detected_segmented -0.1 2
 @end example
 
-Go ahead and run this command. You will see the intermediate processing
-being done and finally it opens SAO DS9 for you with the regions
-superimposed on all the extensions of Segment's output. The script will
-only finish (and give you control of the command-line) when you close
-DS9. If you need your access to the command-line before closing DS9, add a
-@code{&} after the end of the command above.
+Go ahead and run this command.
+You will see the intermediate processing being done and finally it opens SAO 
DS9 for you with the regions superimposed on all the extensions of Segment's 
output.
+The script will only finish (and give you control of the command-line) when 
you close DS9.
+If you need access to the command-line before closing DS9, add a @code{&} at 
the end of the command above.
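+In other words:
+
+@example
+$ ./check-clumps.sh r_detected_segmented -0.1 2 &
+@end example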
 
 @cindex Purity
 @cindex Completeness
-While DS9 is open, slide the dynamic range (values for black and white, or
-minimum/maximum values in different color schemes) and zoom into various
-regions of the M51 group to see if you are satisfied with the detected
-clumps. Don't forget that through the ``Cube'' window that is opened along
-with DS9, you can flip through the extensions and see the actual clumps
-also. The questions you should be asking your self are these: 1) Which real
-clumps (as you visually @emph{feel}) have been missed? In other words, is
-the @emph{completeness} good? 2) Are there any clumps which you @emph{feel}
-are false? In other words, is the @emph{purity} good?
-
-Note that completeness and purity are not independent of each other, they
-are anti-correlated: the higher your purity, the lower your completeness
-and vice-versa. You can see this by playing with the purity level using the
-@option{--snquant} option. Run Segment as shown above again with @code{-P}
-and see its default value. Then increase/decrease it for higher/lower
-purity and check the result as before. You will see that if you want the
-best purity, you have to sacrifice completeness and vice versa.
-
-One interesting region to inspect in this image is the many bright peaks
-around the central parts of M51. Zoom into that region and inspect how many
-of them have actually been detected as true clumps. Do you have a good
-balance between completeness and purity? Also look out far into the wings
-of the group and inspect the completeness and purity there.
-
-An easier way to inspect completeness (and only completeness) is to mask all
-the pixels detected as clumps and visually inspecting the rest of the
-pixels. You can do this using Arithmetic in a command like below. For easy
-reading of the command, we'll define the shell variable @code{i} for the
-image name and save the output in @file{masked.fits}.
+While DS9 is open, slide the dynamic range (values for black and white, or 
minimum/maximum values in different color schemes) and zoom into various 
regions of the M51 group to see if you are satisfied with the detected clumps.
+Don't forget that through the ``Cube'' window that is opened along with DS9, 
you can flip through the extensions and see the actual clumps also.
+The questions you should be asking yourself are these: 1) Which real clumps 
(as you visually @emph{feel}) have been missed? In other words, is the 
@emph{completeness} good? 2) Are there any clumps which you @emph{feel} are 
false? In other words, is the @emph{purity} good?
+
+Note that completeness and purity are not independent of each other; they are 
anti-correlated: the higher your purity, the lower your completeness and 
vice-versa.
+You can see this by playing with the purity level using the @option{--snquant} 
option.
+Run Segment as shown above again with @code{-P} and see its default value.
+Then increase/decrease it for higher/lower purity and check the result as 
before.
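+For example, you might try something like the commands below (the 0.99 value 
is only an illustrative choice for a higher purity; experiment with it):
+
+@example
+$ astsegment r_detected.fits -P | grep snquant
+$ astsegment r_detected.fits --snquant=0.99
+@end example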
+You will see that if you want the best purity, you have to sacrifice 
completeness and vice versa.
+
+One interesting region to inspect in this image is the many bright peaks 
around the central parts of M51.
+Zoom into that region and inspect how many of them have actually been detected 
as true clumps.
+Do you have a good balance between completeness and purity?
+Also look out far into the wings of the group and inspect the completeness 
and purity there.
+
+An easier way to inspect completeness (and only completeness) is to mask all 
the pixels detected as clumps and visually inspect the rest of the pixels.
+You can do this using Arithmetic in a command like below.
+For easy reading of the command, we'll define the shell variable @code{i} for 
the image name and save the output in @file{masked.fits}.
 
 @example
 $ in="r_detected_segmented.fits -hINPUT"
@@ -4983,33 +3812,23 @@ $ clumps="r_detected_segmented.fits -hCLUMPS"
 $ astarithmetic $in $clumps 0 gt nan where -oclumps-masked.fits
 @end example
 
-Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks
-that have been missed, especially as you go farther away from the group
-center and into the diffuse wings. This is due to the fact that with this
-configuration, we have focused more on the sharper clumps. To put the focus
-more on diffuse clumps, you can use a wider convolution kernel. Using a
-larger kernel can also help in detecting the existing clumps to fainter
-levels (thus better separating them from the surrounding diffuse signal).
+Inspecting @file{clumps-masked.fits}, you can see some very diffuse peaks that 
have been missed, especially as you go farther away from the group center and 
into the diffuse wings.
+This is due to the fact that with this configuration, we have focused more on 
the sharper clumps.
+To put the focus more on diffuse clumps, you can use a wider convolution 
kernel.
+Using a larger kernel can also help in detecting the existing clumps to 
fainter levels (thus better separating them from the surrounding diffuse 
signal).
 
-You can make any kernel easily using the @option{--kernel} option in
-@ref{MakeProfiles}. But note that a larger kernel is also going to wash-out
-many of the sharp/small clumps close to the center of M51 and also some
-smaller peaks on the wings. Please continue playing with Segment's
-configuration to obtain a more complete result (while keeping reasonable
-purity). We'll finish the discussion on finding true clumps at this point.
+You can make any kernel easily using the @option{--kernel} option in 
@ref{MakeProfiles}.
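+For example, a wider Gaussian kernel might be built and used with something 
like the commands below (the FWHM of 3 pixels and truncation of 5 are only 
illustrative values to experiment with):
+
+@example
+$ astmkprof --kernel=gaussian,3,5 --oversample=1 -okernel.fits
+$ astsegment r_detected.fits --kernel=kernel.fits
+@end example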
+But note that a larger kernel is also going to wash-out many of the 
sharp/small clumps close to the center of M51 and also some smaller peaks on 
the wings.
+Please continue playing with Segment's configuration to obtain a more complete 
result (while keeping reasonable purity).
+We'll finish the discussion on finding true clumps at this point.
 
-The properties of the clumps within M51, or the background objects can then
-easily be measured using @ref{MakeCatalog}. To measure the properties of
-the background objects (detected as clumps over the diffuse region), you
-shouldn't mask the diffuse region. When measuring clump properties with
-@ref{MakeCatalog} and using the @option{--clumpscat}, the ambient flux
-(from the diffuse region) is calculated and subtracted. If the diffuse
-region is masked, its effect on the clump brightness cannot be calculated
-and subtracted.
+The properties of the clumps within M51, or the background objects can then 
easily be measured using @ref{MakeCatalog}.
+To measure the properties of the background objects (detected as clumps over 
the diffuse region), you shouldn't mask the diffuse region.
+When measuring clump properties with @ref{MakeCatalog} and using the 
@option{--clumpscat} option, the ambient flux (from the diffuse region) is 
calculated and subtracted.
+If the diffuse region is masked, its effect on the clump brightness cannot be 
calculated and subtracted.
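+For example, a first catalog of both the diffuse objects and the clumps might 
be produced with something like the command below (the requested columns are 
only illustrative):
+
+@example
+$ astmkcatalog r_detected_segmented.fits --clumpscat \
+               --ids --ra --dec --magnitude
+@end example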
 
-To keep this tutorial short, we'll stop here. See @ref{Segmentation and
-making a catalog} and @ref{Segment} for more on using Segment, producing
-catalogs with MakeCatalog and using those catalogs.
+To keep this tutorial short, we'll stop here.
+See @ref{Segmentation and making a catalog} and @ref{Segment} for more on 
using Segment, producing catalogs with MakeCatalog and using those catalogs.
 
 
 
@@ -5027,44 +3846,25 @@ catalogs with MakeCatalog and using those catalogs.
 @c were seen to follow this ``Installation'' chapter title in search of the
 @c tarball and fast instructions.
 @cindex Installation
-The latest released version of Gnuastro source code is always available at
-the following URL:
+The latest released version of Gnuastro source code is always available at the 
following URL:
 
 @url{http://ftpmirror.gnu.org/gnuastro/gnuastro-latest.tar.gz}
 
 @noindent
-@ref{Quick start} describes the commands necessary to configure, build, and
-install Gnuastro on your system. This chapter will be useful in cases where
-the simple procedure above is not sufficient, for example your system lacks
-a mandatory/optional dependency (in other words, you can't pass the
-@command{$ ./configure} step), or you want greater customization, or you
-want to build and install Gnuastro from other random points in its history,
-or you want a higher level of control on the installation. Thus if you were
-happy with downloading the tarball and following @ref{Quick start}, then
-you can safely ignore this chapter and come back to it in the future if you
-need more customization.
-
-@ref{Dependencies} describes the mandatory, optional and bootstrapping
-dependencies of Gnuastro. Only the first group are required/mandatory when
-you are building Gnuastro using a tarball (see @ref{Release tarball}), they
-are very basic and low-level tools used in most astronomical software, so
-you might already have them installed, if not they are very easy to install
-as described for each. @ref{Downloading the source} discusses the two
-methods you can obtain the source code: as a tarball (a significant
-snapshot in Gnuastro's history), or the full
-history@footnote{@ref{Bootstrapping dependencies} are required if you clone
-the full history.}. The latter allows you to build Gnuastro at any random
-point in its history (for example to get bug fixes or new features that are
-not released as a tarball yet).
-
-The building and installation of Gnuastro is heavily customizable, to learn
-more about them, see @ref{Build and install}. This section is essentially a
-thorough explanation of the steps in @ref{Quick start}. It discusses ways
-you can influence the building and installation. If you encounter any
-problems in the installation process, it is probably already explained in
-@ref{Known issues}. In @ref{Other useful software} the installation and
-usage of some other free software that are not directly required by
-Gnuastro but might be useful in conjunction with it is discussed.
+@ref{Quick start} describes the commands necessary to configure, build, and 
install Gnuastro on your system.
+This chapter will be useful in cases where the simple procedure above is not 
sufficient, for example your system lacks a mandatory/optional dependency (in 
other words, you can't pass the @command{$ ./configure} step), or you want 
greater customization, or you want to build and install Gnuastro from other 
random points in its history, or you want a higher level of control on the 
installation.
+Thus if you were happy with downloading the tarball and following @ref{Quick 
start}, then you can safely ignore this chapter and come back to it in the 
future if you need more customization.
+
+@ref{Dependencies} describes the mandatory, optional and bootstrapping 
dependencies of Gnuastro.
+Only the first group is required/mandatory when you are building Gnuastro 
using a tarball (see @ref{Release tarball}); these are very basic and low-level 
tools used in most astronomical software, so you might already have them 
installed; if not, they are very easy to install as described for each.
+@ref{Downloading the source} discusses the two ways you can obtain the 
source code: as a tarball (a significant snapshot in Gnuastro's history), or 
the full history@footnote{@ref{Bootstrapping dependencies} are required if you 
clone the full history.}.
+The latter allows you to build Gnuastro at any random point in its history 
(for example to get bug fixes or new features that are not released as a 
tarball yet).
+
+The building and installation of Gnuastro is heavily customizable; to learn 
more, see @ref{Build and install}.
+This section is essentially a thorough explanation of the steps in @ref{Quick 
start}.
+It discusses ways you can influence the building and installation.
+If you encounter any problems in the installation process, it is probably 
already explained in @ref{Known issues}.
+@ref{Other useful software} discusses the installation and usage of some 
other free software that is not directly required by Gnuastro, but might be 
useful in conjunction with it.
 
 
 @menu
@@ -5079,25 +3879,16 @@ Gnuastro but might be useful in conjunction with it is 
discussed.
 @node Dependencies, Downloading the source, Installation, Installation
 @section Dependencies
 
-A minimal set of dependencies are mandatory for building Gnuastro from the
-standard tarball release. If they are not present you cannot pass
-Gnuastro's configuration step. The mandatory dependencies are therefore
-very basic (low-level) tools which are easy to obtain, build and install,
-see @ref{Mandatory dependencies} for a full discussion.
-
-If you have the packages of @ref{Optional dependencies}, Gnuastro will have
-additional functionality (for example converting FITS images to JPEG or
-PDF). If you are installing from a tarball as explained in @ref{Quick
-start}, you can stop reading after this section. If you are cloning the
-version controlled source (see @ref{Version controlled source}), an
-additional bootstrapping step is required before configuration and its
-dependencies are explained in @ref{Bootstrapping dependencies}.
-
-Your operating system's package manager is an easy and convenient way to
-download and install the dependencies that are already pre-built for your
-operating system. In @ref{Dependencies from package managers}, we'll list
-some common operating system package manager commands to install the
-optional and mandatory dependencies.
+A minimal set of dependencies is mandatory for building Gnuastro from the 
standard tarball release.
+If they are not present, you cannot pass Gnuastro's configuration step.
+The mandatory dependencies are therefore very basic (low-level) tools which 
are easy to obtain, build and install, see @ref{Mandatory dependencies} for a 
full discussion.
+
+If you have the packages of @ref{Optional dependencies}, Gnuastro will have 
additional functionality (for example converting FITS images to JPEG or PDF).
+If you are installing from a tarball as explained in @ref{Quick start}, you 
can stop reading after this section.
+If you are cloning the version controlled source (see @ref{Version controlled 
source}), an additional bootstrapping step is required before configuration; 
its dependencies are explained in @ref{Bootstrapping dependencies}.
+
+Your operating system's package manager is an easy and convenient way to 
download and install the dependencies that are already pre-built for your 
operating system.
+In @ref{Dependencies from package managers}, we'll list some common operating 
system package manager commands to install the optional and mandatory 
dependencies.
 
 @menu
 * Mandatory dependencies::      Gnuastro will not install without these.
@@ -5111,11 +3902,9 @@ optional and mandatory dependencies.
 
 @cindex Dependencies, Gnuastro
 @cindex GNU build system
-The mandatory Gnuastro dependencies are very basic and low-level
-tools. They all follow the same basic GNU based build system (like that
-shown in @ref{Quick start}), so even if you don't have them, installing
-them should be pretty straightforward. In this section we explain each
-program and any specific note that might be necessary in the installation.
+The mandatory Gnuastro dependencies are very basic and low-level tools.
+They all follow the same basic GNU-based build system (like that shown in 
@ref{Quick start}), so even if you don't have them, installing them should be 
pretty straightforward.
+In this section we explain each program and any specific note that might be 
necessary in the installation.
 
 
 @menu
@@ -5128,13 +3917,8 @@ program and any specific note that might be necessary in 
the installation.
 @subsubsection GNU Scientific library
 
 @cindex GNU Scientific Library
-The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL,
-is a large collection of functions that are very useful in scientific
-applications, for example integration, random number generation, and Fast
-Fourier Transform among many others. To install GSL from source, you can
-run the following commands after you have downloaded
-@url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz,
-@file{gsl-latest.tar.gz}}:
+The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is 
a large collection of functions that are very useful in scientific 
applications, for example integration, random number generation, and Fast 
Fourier Transform among many others.
+To install GSL from source, you can run the following commands after you have 
downloaded @url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz, 
@file{gsl-latest.tar.gz}}:
 
 @example
 $ tar xf gsl-latest.tar.gz
@@ -5150,55 +3934,33 @@ $ sudo make install
 
 @cindex CFITSIO
 @cindex FITS standard
-@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can
-get to the pixels in a FITS image while remaining faithful to the
-@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}. It is
-written by William Pence, the principal author of the FITS
-standard@footnote{Pence, W.D. et al. Definition of the Flexible Image
-Transport System (FITS), version 3.0. (2010) Astronomy and Astrophysics,
-Volume 524, id.A42, 40 pp.}, and is regularly updated. Setting the
-definitions for all other software packages using FITS images.
+@url{http://heasarc.gsfc.nasa.gov/fitsio/, CFITSIO} is the closest you can get 
to the pixels in a FITS image while remaining faithful to the 
@url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS standard}.
+It is written by William Pence, the principal author of the FITS 
standard@footnote{Pence, W.D. et al. Definition of the Flexible Image Transport 
System (FITS), version 3.0. (2010) Astronomy and Astrophysics, Volume 524, 
id.A42, 40 pp.}, and is regularly updated, setting the definitions for all 
other software packages using FITS images.
 
 @vindex --enable-reentrant
 @cindex Reentrancy, multiple file opening
 @cindex Multiple file opening, reentrancy
-Some GNU/Linux distributions have CFITSIO in their package managers, if it
-is available and updated, you can use it. One problem that might occur is
-that CFITSIO might not be configured with the @option{--enable-reentrant}
-option by the distribution. This option allows CFITSIO to open a file in
-multiple threads, it can thus provide great speed improvements. If CFITSIO
-was not configured with this option, any program which needs this
-capability will warn you and abort when you ask for multiple threads (see
-@ref{Multi-threaded operations}).
-
-To install CFITSIO from source, we strongly recommend that you have a look
-through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and
-understand the options you can pass to @command{$ ./configure} (they aren't
-too much). This is a very basic package for most astronomical software and
-it is best that you configure it nicely with your system. Once you download
-the source and unpack it, the following configure script should be enough
-for most purposes. Don't forget to read chapter two of the manual though,
-for example the second option is only for 64bit systems. The manual also
-explains how to check if it has been installed correctly.
-
-CFITSIO comes with two executable files called @command{fpack} and
-@command{funpack}. From their manual: they ``are standalone programs for
-compressing and uncompressing images and tables that are stored in the FITS
-(Flexible Image Transport System) data format. They are analogous to the
-gzip and gunzip compression programs except that they are optimized for the
-types of astronomical images that are often stored in FITS format''. The
-commands below will compile and install them on your system along with
-CFITSIO. They are not essential for Gnuastro, since they are just wrappers
-for functions within CFITSIO, but they can come in handy. The @command{make
-utils} command is only available for versions above 3.39, it will build
-these executable files along with several other executable test files which
-are deleted in the following commands before the installation (otherwise
-the test files will also be installed).
-
-The commands necessary to decompress, build and install CFITSIO from source
-are described below. Let's assume you have downloaded
-@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz,
-@file{cfitsio_latest.tar.gz}} and are in the same directory:
+Some GNU/Linux distributions have CFITSIO in their package managers; if it is 
available and updated, you can use it.
+One problem that might occur is that CFITSIO might not be configured with the 
@option{--enable-reentrant} option by the distribution.
+This option allows CFITSIO to open a file in multiple threads; it can thus 
provide great speed improvements.
+If CFITSIO was not configured with this option, any program which needs this 
capability will warn you and abort when you ask for multiple threads (see 
@ref{Multi-threaded operations}).
+
+To install CFITSIO from source, we strongly recommend that you have a look 
through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and 
understand the options you can pass to @command{$ ./configure} (there aren't 
too many).
+This is a very basic package for most astronomical software and it is best 
that you configure it nicely with your system.
+Once you download the source and unpack it, the following configure script 
should be enough for most purposes.
+Don't forget to read chapter two of the manual though; for example, the 
second option is only for 64-bit systems.
+The manual also explains how to check if it has been installed correctly.
+
+CFITSIO comes with two executable files called @command{fpack} and 
@command{funpack}.
+From their manual: they ``are standalone programs for compressing and 
uncompressing images and tables that are stored in the FITS (Flexible Image 
Transport System) data format.
+They are analogous to the gzip and gunzip compression programs except that 
they are optimized for the types of astronomical images that are often stored 
in FITS format''.
+The commands below will compile and install them on your system along with 
CFITSIO.
+They are not essential for Gnuastro, since they are just wrappers for 
functions within CFITSIO, but they can come in handy.
+The @command{make utils} command is only available for versions above 3.39; 
it will build these executable files along with several other executable test 
files, which are deleted in the following commands before the installation 
(otherwise the test files would also be installed).
+
+The commands necessary to decompress, build and install CFITSIO from source 
are described below.
+Let's assume you have downloaded 
@url{http://heasarc.gsfc.nasa.gov/FTP/software/fitsio/c/cfitsio_latest.tar.gz, 
@file{cfitsio_latest.tar.gz}} and are in the same directory:
 
 @example
 $ tar xf cfitsio_latest.tar.gz
@@ -5223,40 +3985,23 @@ $ sudo make install
 @cindex WCS
 @cindex WCSLIB
 @cindex World Coordinate System
-@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and
-maintained by one of the authors of the World Coordinate System (WCS)
-definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS
-standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of
-world coordinates in FITS. Astronomy and Astrophysics, 395, 1061-1075.},
-Mark Calabretta. It might be already built and ready in your distribution's
-package management system. However, here the installation from source is
-explained, for the advantages of installation from source please see
-@ref{Mandatory dependencies}. To install WCSLIB you will need to have
-CFITSIO already installed, see @ref{CFITSIO}.
+@url{http://www.atnf.csiro.au/people/mcalabre/WCS/, WCSLIB} is written and 
maintained by one of the authors of the World Coordinate System (WCS) 
definition in the @url{http://fits.gsfc.nasa.gov/fits_standard.html, FITS 
standard}@footnote{Greisen E.W., Calabretta M.R. (2002) Representation of world 
coordinates in FITS.
+Astronomy and Astrophysics, 395, 1061-1075.}, Mark Calabretta.
+It might be already built and ready in your distribution's package management 
system.
+However, here the installation from source is explained, for the advantages of 
installation from source please see @ref{Mandatory dependencies}.
+To install WCSLIB you will need to have CFITSIO already installed, see 
@ref{CFITSIO}.
 
 @vindex --without-pgplot
-WCSLIB also has plotting capabilities which use PGPLOT (a plotting library
-for C). If you wan to use those capabilities in WCSLIB, @ref{PGPLOT}
-provides the PGPLOT installation instructions. However PGPLOT is
-old@footnote{As of early June 2016, its most recent version was uploaded in
-February 2001.}, so its installation is not easy, there are also many great
-modern WCS plotting tools (mostly in written in Python). Hence, if you will
-not be using those plotting functions in WCSLIB, you can configure it with
-the @option{--without-pgplot} option as shown below.
-
-If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your
-system and you installed CFITSIO version 3.42 or later, you will need to
-also link with the cURL library at configure time (through the
-@code{-lcurl} option as shown below). CFITSIO uses the cURL library for its
-HTTPS (or HTTP Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}})
-support and if it is present on your system, CFITSIO will depend on
-it. Therefore, if @command{./configure} command below fails (you don't have
-the cURL library), then remove this option and rerun it.
-
-Let's assume you have downloaded
-@url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2,
-@file{wcslib.tar.bz2}} and are in the same directory, to configure, build,
-check and install WCSLIB follow the steps below.
+WCSLIB also has plotting capabilities which use PGPLOT (a plotting library for 
C).
+If you want to use those capabilities in WCSLIB, @ref{PGPLOT} provides the 
PGPLOT installation instructions.
+However PGPLOT is old@footnote{As of early June 2016, its most recent version 
was uploaded in February 2001.}, so its installation is not easy; there are 
also many great modern WCS plotting tools (mostly written in Python).
+Hence, if you will not be using those plotting functions in WCSLIB, you can 
configure it with the @option{--without-pgplot} option as shown below.
+
+If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your 
system and you installed CFITSIO version 3.42 or later, you will need to also 
link with the cURL library at configure time (through the @code{-lcurl} option 
as shown below).
+CFITSIO uses the cURL library for its HTTPS (or HTTP 
Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it 
is present on your system, CFITSIO will depend on it.
+Therefore, if the @command{./configure} command below fails (because you 
don't have the cURL library), then remove this option and rerun it.
+
+Let's assume you have downloaded 
@url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2, 
@file{wcslib.tar.bz2}} and are in the same directory; to configure, build, 
check and install WCSLIB, follow the steps below.
 @example
 $ tar xf wcslib.tar.bz2
 
@@ -5276,102 +4021,68 @@ $ sudo make install
 @node Optional dependencies, Bootstrapping dependencies, Mandatory 
dependencies, Dependencies
 @subsection Optional dependencies
 
-The libraries listed here are only used for very specific applications,
-therefore if you don't want these operations, Gnuastro will be built and
-installed without them and you don't have to have the dependencies.
+The libraries listed here are only used for very specific applications; 
therefore, if you don't need those operations, Gnuastro will be built and 
installed without them, and you won't need to have these dependencies.
 
 @cindex GPL Ghostscript
-If the @command{./configure} script can't find these requirements, it will
-warn you in the end that they are not present and notify you of the
-operation(s) you can't do due to not having them. If the output you request
-from a program requires a missing library, that program is going to warn
-you and abort. In the case of program dependencies (like GPL GhostScript),
-if you install them at a later time, the program will run. This is because
-if required libraries are not present at build time, the executables cannot
-be built, but an executable is called by the built program at run time so
-if it becomes available, it will be used. If you do install an optional
-library later, you will have to rebuild Gnuastro and reinstall it for it to
-take effect.
+If the @command{./configure} script can't find these requirements, it will 
warn you in the end that they are not present and notify you of the 
operation(s) you can't do due to not having them.
+If the output you request from a program requires a missing library, that 
program is going to warn you and abort.
+In the case of program dependencies (like GPL GhostScript), if you install 
them at a later time, the program will run.
+This is because required libraries must be present at build time for the 
executables to be built, but an external executable is only called by the 
built program at run time, so if it becomes available later, it will be used.
+If you do install an optional library later, you will have to rebuild Gnuastro 
and reinstall it for it to take effect.
 
 @table @asis
 
 @item GNU Libtool
 @cindex GNU Libtool
-Libtool is a program to simplify managing of the libraries to build an
-executable (a program). GNU Libtool has some added functionality compared
-to other implementations. If GNU Libtool isn't present on your system at
-configuration time, a warning will be printed and @ref{BuildProgram} won't
-be built or installed. The configure script will look into your search path
-(@code{PATH}) for GNU Libtool through the following executable names:
-@command{libtool} (acceptable only if it is the GNU implementation) or
-@command{glibtool}. See @ref{Installation directory} for more on
-@code{PATH}.
-
-GNU Libtool (the binary/executable file) is a low-level program that is
-probably already present on your system, and if not, is available in your
-operating system package manager@footnote{Note that we want the
-binary/executable Libtool program which can be run on the command-line. In
-Debian-based operating systems which separate various parts of a package,
-you want want @code{libtool-bin}, the @code{libtool} package won't contain
-the executable program.}. If you want to install GNU Libtool's latest
-version from source, please visit its
-@url{https://www.gnu.org/software/libtool/, webpage}.
-
-Gnuastro's tarball is shipped with an internal implementation of GNU
-Libtool. Even if you have GNU Libtool, Gnuastro's internal implementation
-is used for the building and installation of Gnuastro. As a result, you can
-still build, install and use Gnuastro even if you don't have GNU Libtool
-installed on your system. However, this internal Libtool does not get
-installed. Therefore, after Gnuastro's installation, if you want to use
-@ref{BuildProgram} to compile and link your own C source code which uses
-the @ref{Gnuastro library}, you need to have GNU Libtool available on your
-system (independent of Gnuastro). See @ref{Review of library fundamentals}
-to learn more about libraries.
+Libtool is a program that simplifies managing the libraries needed to build 
an executable (a program).
+GNU Libtool has some added functionality compared to other implementations.
+If GNU Libtool isn't present on your system at configuration time, a warning 
will be printed and @ref{BuildProgram} won't be built or installed.
+The configure script will look into your search path (@code{PATH}) for GNU 
Libtool through the following executable names: @command{libtool} (acceptable 
only if it is the GNU implementation) or @command{glibtool}.
+See @ref{Installation directory} for more on @code{PATH}.
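+For example, you can check which implementation the @command{libtool} in your 
search path belongs to by asking for its version; the GNU implementation 
identifies itself as ``GNU libtool'' on the first line:
+
+@example
+$ libtool --version
+@end example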
+
+GNU Libtool (the binary/executable file) is a low-level program that is 
probably already present on your system, and if not, is available in your 
operating system package manager@footnote{Note that we want the 
binary/executable Libtool program which can be run on the command-line.
+In Debian-based operating systems which separate various parts of a package, 
you want @code{libtool-bin}; the @code{libtool} package won't contain the 
executable program.}.
+If you want to install GNU Libtool's latest version from source, please visit 
its @url{https://www.gnu.org/software/libtool/, webpage}.
+
+Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
+Even if you have GNU Libtool, Gnuastro's internal implementation is used for 
the building and installation of Gnuastro.
+As a result, you can still build, install and use Gnuastro even if you don't 
have GNU Libtool installed on your system.
+However, this internal Libtool does not get installed.
+Therefore, after Gnuastro's installation, if you want to use 
@ref{BuildProgram} to compile and link your own C source code which uses the 
@ref{Gnuastro library}, you need to have GNU Libtool available on your system 
(independent of Gnuastro).
+See @ref{Review of library fundamentals} to learn more about libraries.
 
 @item libgit2
 @cindex Git
 @pindex libgit2
 @cindex Version control systems
-Git is one of the most common version control systems (see @ref{Version
-controlled source}). When @file{libgit2} is present, and Gnuastro's
-programs are run within a version controlled directory, outputs will
-contain the version number of the working directory's repository for future
-reproducibility. See the @command{COMMIT} keyword header in @ref{Output
-FITS files} for a discussion.
+Git is one of the most common version control systems (see @ref{Version 
controlled source}).
+When @file{libgit2} is present, and Gnuastro's programs are run within a 
version controlled directory, outputs will contain the version number of the 
working directory's repository for future reproducibility.
+See the @command{COMMIT} keyword header in @ref{Output FITS files} for a 
discussion.
 
 @item libjpeg
 @pindex libjpeg
 @cindex JPEG format
-libjpeg is only used by ConvertType to read from and write to JPEG images,
-see @ref{Recognized file formats}. @url{http://www.ijg.org/, libjpeg} is a
-very basic library that provides tools to read and write JPEG images, most
-Unix-like graphic programs and libraries use it. Therefore you most
-probably already have it installed.
-@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative
-to libjpeg. It uses Single instruction, multiple data (SIMD) instructions
-for ARM based systems that significantly decreases the processing time of
-JPEG compression and decompression algorithms.
+libjpeg is only used by ConvertType to read from and write to JPEG images, see 
@ref{Recognized file formats}.
+@url{http://www.ijg.org/, libjpeg} is a very basic library that provides tools 
to read and write JPEG images; most Unix-like graphic programs and libraries 
use it.
+Therefore you most probably already have it installed.
+@url{http://libjpeg-turbo.virtualgl.org/, libjpeg-turbo} is an alternative to 
libjpeg.
+It uses Single Instruction Multiple Data (SIMD) instructions for ARM-based 
systems that significantly decrease the processing time of JPEG compression 
and decompression algorithms.
 
 @item libtiff
 @pindex libtiff
 @cindex TIFF format
-libtiff is used by ConvertType and the libraries to read TIFF images, see
-@ref{Recognized file formats}. @url{http://www.simplesystems.org/libtiff/,
-libtiff} is a very basic library that provides tools to read and write TIFF
-images, most Unix-like operating system graphic programs and libraries use
-it. Therefore even if you don't have it installed, it must be easily
-available in your package manager.
+libtiff is used by ConvertType and the libraries to read TIFF images, see 
@ref{Recognized file formats}.
+@url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library 
that provides tools to read and write TIFF images; most Unix-like operating 
system graphic programs and libraries use it.
+Therefore even if you don't have it installed, it must be easily available in 
your package manager.
 
 
 @item GPL Ghostscript
 @cindex GPL Ghostscript
-GPL Ghostscript's executable (@command{gs}) is called by ConvertType to
-compile a PDF file from a source PostScript file, see
-@ref{ConvertType}. Therefore its headers (and libraries) are not
-needed. With a very high probability you already have it in your GNU/Linux
-distribution. Unfortunately it does not follow the standard GNU build style
-so installing it is very hard. It is best to rely on your distribution's
-package managers for this.
+GPL Ghostscript's executable (@command{gs}) is called by ConvertType to 
compile a PDF file from a source PostScript file, see @ref{ConvertType}.
+Therefore its headers (and libraries) are not needed.
+With a very high probability you already have it in your GNU/Linux 
distribution.
+Unfortunately it does not follow the standard GNU build style, so installing 
it is very hard.
+It is best to rely on your distribution's package managers for this.
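+For example, on Debian-based operating systems you can use the command below 
(the package name may differ in other package managers):
+
+@example
+$ sudo apt-get install ghostscript
+@end example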
 
 @end table
 
@@ -5381,26 +4092,15 @@ package managers for this.
 @node Bootstrapping dependencies, Dependencies from package managers, Optional 
dependencies, Dependencies
 @subsection Bootstrapping dependencies
 
-Bootstrapping is only necessary if you have decided to obtain the full
-version controlled history of Gnuastro, see @ref{Version controlled source}
-and @ref{Bootstrapping}. Using the version controlled source enables you to
-always be up to date with the most recent development work of Gnuastro (bug
-fixes, new functionalities, improved algorithms, etc). If you have
-downloaded a tarball (see @ref{Downloading the source}), then you can
-ignore this subsection.
-
-To successfully run the bootstrapping process, there are some additional
-dependencies to those discussed in the previous subsections. These are low
-level tools that are used by a large collection of Unix-like operating
-systems programs, therefore they are most probably already available in
-your system. If they are not already installed, you should be able to
-easily find them in any GNU/Linux distribution package management system
-(@command{apt-get}, @command{yum}, @command{pacman}, etc). The short
-names in parenthesis in @command{typewriter} font after the package name
-can be used to search for them in your package manager. For the GNU
-Portability Library, GNU Autoconf Archive and @TeX{} Live, it is
-recommended to use the instructions here, not your operating system's
-package manager.
+Bootstrapping is only necessary if you have decided to obtain the full version 
controlled history of Gnuastro, see @ref{Version controlled source} and 
@ref{Bootstrapping}.
+Using the version controlled source enables you to always be up to date with 
the most recent development work of Gnuastro (bug fixes, new functionalities, 
improved algorithms, etc).
+If you have downloaded a tarball (see @ref{Downloading the source}), then you 
can ignore this subsection.
+
+To successfully run the bootstrapping process, there are some additional 
dependencies to those discussed in the previous subsections.
+These are low-level tools that are used by a large collection of Unix-like 
operating system programs; therefore, they are most probably already available 
on your system.
+If they are not already installed, you should be able to easily find them in 
any GNU/Linux distribution package management system (@command{apt-get}, 
@command{yum}, @command{pacman}, etc).
+The short names in parenthesis in @command{typewriter} font after the package 
name can be used to search for them in your package manager.
+For the GNU Portability Library, GNU Autoconf Archive and @TeX{} Live, it is 
recommended to use the instructions here, not your operating system's package 
manager.
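+
+For example, on a Debian-based operating system, a minimal sketch of installing the general build tools in one command would be the following (the package names are assumptions based on Debian's archive and may differ in other package managers):
+
+@example
+$ sudo apt-get install autoconf automake libtool \
+                       help2man imagemagick
+@end example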
 
 @table @asis
 
@@ -5408,34 +4108,17 @@ package manager.
 @cindex GNU C library
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-To ensure portability for a wider range of operating systems (those that
-don't include GNU C library, namely glibc), Gnuastro depends on the GNU
-portability library, or Gnulib. Gnulib keeps a copy of all the functions in
-glibc, implemented (as much as possible) to be portable to other operating
-systems. The @file{bootstrap} script can automatically clone Gnulib (as a
-@file{gnulib/} directory inside Gnuastro), however, as described in
-@ref{Bootstrapping} this is not recommended.
-
-The recommended way to bootstrap Gnuastro is to first clone Gnulib and the
-Autoconf archives (see below) into a local directory outside of
-Gnuastro. Let's call it @file{DEVDIR}@footnote{If you are not a developer
-in Gnulib or Autoconf archives, @file{DEVDIR} can be a directory that you
-don't backup. In this way the large number of files in these projects won't
-slow down your backup process or take bandwidth (if you backup to a remote
-server).}  (which you can set to any directory). Currently in Gnuastro,
-both Gnulib and Autoconf archives have to be cloned in the same top
-directory@footnote{If you already have the Autoconf archives in a separate
-directory, or can't clone it in the same directory as Gnulib, or you have
-it with another directory name (not @file{autoconf-archive/}), you can
-follow this short step. Set @file{AUTOCONFARCHIVES} to your desired
-address. Then define a symbolic link in @file{DEVDIR} with the following
-command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s
-$AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case
-here@footnote{If your internet connection is active, but Git complains
-about the network, it might be due to your network setup not recognizing
-the git protocol. In that case use the following URL for the HTTP protocol
-instead (for Autoconf archives, replace the name):
-@command{http://git.sv.gnu.org/r/gnulib.git}}:
+To ensure portability for a wider range of operating systems (those that don't include the GNU C library, namely glibc), Gnuastro depends on the GNU Portability Library, or Gnulib.
+Gnulib keeps a copy of all the functions in glibc, implemented (as much as possible) to be portable to other operating systems.
+The @file{bootstrap} script can automatically clone Gnulib (as a @file{gnulib/} directory inside Gnuastro), however, as described in @ref{Bootstrapping} this is not recommended.
+
+The recommended way to bootstrap Gnuastro is to first clone Gnulib and the Autoconf archives (see below) into a local directory outside of Gnuastro.
+Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or the Autoconf archives, @file{DEVDIR} can be a directory that you don't back up.
+In this way the large number of files in these projects won't slow down your backup process or take bandwidth (if you back up to a remote server).} (which you can set to any directory).
+Currently in Gnuastro, both Gnulib and the Autoconf archives have to be cloned in the same top directory@footnote{If you already have the Autoconf archives in a separate directory, or can't clone it in the same directory as Gnulib, or you have it with another directory name (not @file{autoconf-archive/}), you can follow this short step.
+Set @file{AUTOCONFARCHIVES} to your desired address.
+Then define a symbolic link in @file{DEVDIR} with the following command so Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES $DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet connection is active, but Git complains about the network, it might be due to your network setup not recognizing the git protocol.
+In that case use the following URL for the HTTP protocol instead (for the Autoconf archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:
 
 @example
 $ DEVDIR=/home/yourname/Development
@@ -5445,35 +4128,28 @@ $ git clone git://git.sv.gnu.org/autoconf-archive.git
 @end example
 
 @noindent
-You now have the full version controlled source of these two repositories
-in separate directories. Both these packages are regularly updated, so
-every once in a while, you can run @command{$ git pull} within them to get
-any possible updates.
+You now have the full version controlled source of these two repositories in 
separate directories.
+Both these packages are regularly updated, so every once in a while, you can 
run @command{$ git pull} within them to get any possible updates.
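+
+As a small sketch (assuming the @file{DEVDIR} variable defined above), the two update commands would be:
+
+@example
+$ cd $DEVDIR/gnulib           && git pull
+$ cd $DEVDIR/autoconf-archive && git pull
+@end example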
 
 @item GNU Automake (@command{automake})
 @cindex GNU Automake
-GNU Automake will build the @file{Makefile.in} files in each sub-directory
-using the (hand-written) @file{Makefile.am} files. The @file{Makefile.in}s
-are subsequently used to generate the @file{Makefile}s when the user runs
-@command{./configure} before building.
+GNU Automake will build the @file{Makefile.in} files in each sub-directory 
using the (hand-written) @file{Makefile.am} files.
+The @file{Makefile.in}s are subsequently used to generate the @file{Makefile}s 
when the user runs @command{./configure} before building.
 
 @item GNU Autoconf (@command{autoconf})
 @cindex GNU Autoconf
-GNU Autoconf will build the @file{configure} script using the
-configurations we have defined (hand-written) in @file{configure.ac}.
+GNU Autoconf will build the @file{configure} script using the configurations 
we have defined (hand-written) in @file{configure.ac}.
 
 @item GNU Autoconf Archive
 @cindex GNU Autoconf Archive
-These are a large collection of tests that can be called to run at
-@command{./configure} time. See the explanation under GNU Portability
-Library above for instructions on obtaining it and keeping it up to date.
+These are a large collection of tests that can be called to run at 
@command{./configure} time.
+See the explanation under GNU Portability Library above for instructions on 
obtaining it and keeping it up to date.
 
 @item GNU Libtool (@command{libtool})
 @cindex GNU Libtool
-GNU Libtool is in charge of building all the libraries in Gnuastro. The
-libraries contain functions that are used by more than one program and are
-installed for use in other programs. They are thus put in a separate
-directory (@file{lib/}).
+GNU Libtool is in charge of building all the libraries in Gnuastro.
+The libraries contain functions that are used by more than one program and are 
installed for use in other programs.
+They are thus put in a separate directory (@file{lib/}).
 
 @item GNU help2man (@command{help2man})
 @cindex GNU help2man
@@ -5483,36 +4159,20 @@ GNU help2man is used to convert the output of the 
@option{--help} option
 @item @LaTeX{} and some @TeX{} packages
 @cindex @LaTeX{}
 @cindex @TeX{} Live
-Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ
-package). The @LaTeX{} source for those figures is version controlled for
-easy maintenance not the actual figures. So the @file{./boostrap} script
-will run @LaTeX{} to build the figures. The best way to install @LaTeX{}
-and all the necessary packages is through
-@url{https://www.tug.org/texlive/, @TeX{} live} which is a package manager
-for @TeX{} related tools that is independent of any operating system. It is
-thus preferred to the @TeX{} Live versions distributed by your operating
-system.
-
-To install @TeX{} Live, go to the webpage and download the appropriate
-installer by following the ``download'' link. Note that by default the full
-package repository will be downloaded and installed (around 4 Giga Bytes)
-which can take @emph{very} long to download and to update later. However,
-most packages are not needed by everyone, it is easier, faster and better
-to install only the ``Basic scheme'' (consisting of only the most basic
-@TeX{} and @LaTeX{} packages, which is less than 200 Mega
-bytes)@footnote{You can also download the DVD iso file at a later time to
-keep as a backup for when you don't have internet connection if you need a
-package.}.
-
-After the installation, be sure to set the environment variables as
-suggested in the end of the outputs. Any time you confront (need) a package
-you don't have, simply install it with a command like below (similar to how
-you install software from your operating system's package
-manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a
-warning complaining about a @file{missingfile}. Run `@command{tlmgr info
-missingfile}' to see the package(s) containing that file which you can
-install.}. To install all the necessary @TeX{} packages for a successful
-Gnuastro bootstrap, run this command:
+Some of the figures in this book are built by @LaTeX{} (using the PGF/TikZ package).
+For easy maintenance, the @LaTeX{} source for those figures is version controlled, not the actual figures.
+So the @file{./bootstrap} script will run @LaTeX{} to build the figures.
+The best way to install @LaTeX{} and all the necessary packages is through @url{https://www.tug.org/texlive/, @TeX{} Live}, which is a package manager for @TeX{} related tools that is independent of any operating system.
+It is thus preferred to the @TeX{} Live versions distributed by your operating system.
+
+To install @TeX{} Live, go to the webpage and download the appropriate installer by following the ``download'' link.
+Note that by default the full package repository will be downloaded and installed (around 4 gigabytes), which can take @emph{very} long to download and to update later.
+However, most packages are not needed by everyone; it is easier, faster and better to install only the ``Basic scheme'' (consisting of only the most basic @TeX{} and @LaTeX{} packages, which is less than 200 megabytes)@footnote{You can also download the DVD ISO file at a later time to keep as a backup for when you need a package but don't have an internet connection.}.
+
+After the installation, be sure to set the environment variables as suggested at the end of the installer's output.
+Any time you need a package you don't have, simply install it with a command like the one below (similar to how you install software from your operating system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might get a warning complaining about a @file{missingfile}.
+Run `@command{tlmgr info missingfile}' to see the package(s) containing that file which you can install.}.
+To install all the necessary @TeX{} packages for a successful Gnuastro bootstrap, run this command:
 
 @example
 $ su
@@ -5523,9 +4183,8 @@ $ su
 
 @item ImageMagick (@command{imagemagick})
 @cindex ImageMagick
-ImageMagick is a wonderful and robust program for image manipulation on the
-command-line. @file{bootstrap} uses it to convert the book images into the
-formats necessary for the various book formats.
+ImageMagick is a wonderful and robust program for image manipulation on the command-line.
+@file{bootstrap} uses it to convert the book images into the formats necessary for the various outputs of the book.
 
 @end table
 
@@ -5540,13 +4199,11 @@ formats necessary for the various book formats.
 @cindex Compiling from source
 @cindex Source code compilation
 @cindex Distributions, GNU/Linux
-The most basic way to install a package on your system is to build the
-packages from source yourself. Alternatively, you can use your operating
-system's package manager to download pre-compiled files and install
-them. The latter choice is easier and faster. However, we recommend that
-you build the @ref{Mandatory dependencies} yourself from source (all
-necessary commands and links are given in the respective section). Here are
-some basic reasons behind this recommendation.
+The most basic way to install a package on your system is to build the 
packages from source yourself.
+Alternatively, you can use your operating system's package manager to download 
pre-compiled files and install them.
+The latter choice is easier and faster.
+However, we recommend that you build the @ref{Mandatory dependencies} yourself 
from source (all necessary commands and links are given in the respective 
section).
+Here are some basic reasons behind this recommendation.
 
 @enumerate
 
@@ -5555,43 +4212,29 @@ Your distribution's pre-built package might not be the 
most recent
 release.
 
 @item
-For each package, Gnuastro might preform better (or require) certain
-configuration options that your distribution's package managers didn't add
-for you. If present, these configuration options are explained during the
-installation of each in the sections below (for example in
-@ref{CFITSIO}). When the proper configuration has not been set, the
-programs should complain and inform you.
+For each package, Gnuastro might perform better with (or require) certain configuration options that your distribution's package manager didn't add for you.
+If present, these configuration options are explained during the installation of each in the sections below (for example in @ref{CFITSIO}).
+When the proper configuration has not been set, the programs should complain and inform you.
 
 @item
-For the libraries, they might separate the binary file from the header
-files which can cause confusion, see @ref{Known issues}.
+For the libraries, the package manager might separate the binary file from the header files, which can cause confusion, see @ref{Known issues}.
 
 @item
-Like any other tool, the science you derive from Gnuastro's tools highly
-depend on these lower level dependencies, so generally it is much better to
-have a close connection with them. By reading their manuals, installing
-them and staying up to date with changes/bugs in them, your scientific
-results and understanding (of what is going on, and thus how you interpret
-your scientific results) will also correspondingly improve.
+Like any other tool, the science you derive from Gnuastro's tools highly depends on these lower-level dependencies, so generally it is much better to have a close connection with them.
+By reading their manuals, installing them and staying up to date with changes/bugs in them, your scientific results and understanding (of what is going on, and thus how you interpret your scientific results) will also correspondingly improve.
 @end enumerate
 
-Based on your package manager, you can use any of the following commands to
-install the mandatory and optional dependencies. If your package manager
-isn't included in the list below, please send us the respective command, so
-we add it. Gnuastro itself if also already packaged in some package
-managers (for example Debian or Homebrew).
+Based on your package manager, you can use any of the following commands to install the mandatory and optional dependencies.
+If your package manager isn't included in the list below, please send us the respective command, so we can add it.
+Gnuastro itself is also already packaged in some package managers (for example Debian or Homebrew).
 
-As discussed above, we recommend installing the @emph{mandatory}
-dependencies manually from source (see @ref{Mandatory
-dependencies}). Therefore, in each command below, first the optional
-dependencies are given. The mandatory dependencies are included after an
-empty line. If you would also like to install the mandatory dependencies
-with your package manager, just ignore the empty line.
+As discussed above, we recommend installing the @emph{mandatory} dependencies 
manually from source (see @ref{Mandatory dependencies}).
+Therefore, in each command below, first the optional dependencies are given.
+The mandatory dependencies are included after an empty line.
+If you would also like to install the mandatory dependencies with your package 
manager, just ignore the empty line.
 
-For better archivability and compression ratios, Gnuastro's recommended
-tarball compression format is with the
-@url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release
-tarball}. Therefore, the package manager commands below also contain Lzip.
+For better archivability and compression ratios, Gnuastro's recommended tarball compression format is that of the @url{http://lzip.nongnu.org/lzip.html, Lzip} program, see @ref{Release tarball}.
+Therefore, the package manager commands below also contain Lzip.
 
 @table @asis
 @item @command{apt-get} (Debian-based OSs: Debian, Ubuntu, Linux Mint, etc)
@@ -5602,14 +4245,12 @@ tarball}. Therefore, the package manager commands below 
also contain Lzip.
 @cindex Advanced Packaging Tool (APT, Debian)
 @url{https://en.wikipedia.org/wiki/Debian,Debian} is one of the oldest
 GNU/Linux
-distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
 It
-thus has a very extended user community and a robust internal structure and
-standards. All of it is free software and based on the work of volunteers
-around the world. Many distributions are thus derived from it, for example
-Ubuntu and Linux Mint. This arguably makes Debian-based OSs the largest,
-and most used, class of GNU/Linux distributions. All of them use Debian's
-Advanced Packaging Tool (APT, for example @command{apt-get}) for managing
-packages.
+distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
+It thus has a very extended user community and a robust internal structure and 
standards.
+All of it is free software and based on the work of volunteers around the 
world.
+Many distributions are thus derived from it, for example Ubuntu and Linux Mint.
+This arguably makes Debian-based OSs the largest, and most used, class of 
GNU/Linux distributions.
+All of them use Debian's Advanced Packaging Tool (APT, for example 
@command{apt-get}) for managing packages.
 @example
 $ sudo apt-get install ghostscript libtool-bin libjpeg-dev  \
                        libtiff-dev libgit2-dev lzip         \
@@ -5618,10 +4259,8 @@ $ sudo apt-get install ghostscript libtool-bin 
libjpeg-dev  \
 @end example
 
 @noindent
-Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in
-Debian (and thus some of its derivate operating systems). Just make sure it
-is the most recent version.
-
+Gnuastro is @url{https://tracker.debian.org/pkg/gnuastro,packaged} in Debian (and thus some of its derivative operating systems).
+Just make sure it is the most recent version.
 
 @item @command{dnf}
 @itemx @command{yum} (Red Hat-based OSs: Red Hat, Fedora, CentOS, Scientific 
Linux, etc)
@@ -5632,14 +4271,11 @@ is the most recent version.
 @cindex @command{dnf}
 @cindex @command{yum}
 @cindex Scientific Linux
-@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL)
-is released by Red Hat Inc. RHEL requires paid subscriptions for use of its
-binaries and support. But since it is free software, many other teams use
-its code to spin-off their own distributions based on RHEL. Red Hat-based
-GNU/Linux distributions initially used the ``Yellowdog Updated, Modifier''
-(YUM) package manager, which has been replaced by ``Dandified yum''
-(DNF). If the latter isn't available on your system, you can use
-@command{yum} instead of @command{dnf} in the command below.
+@url{https://en.wikipedia.org/wiki/Red_Hat,Red Hat Enterprise Linux} (RHEL) is released by Red Hat Inc.
+RHEL requires paid subscriptions for use of its binaries and support.
+But since it is free software, many other teams use its code to spin off their own distributions based on RHEL.
+Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updater, Modified'' (YUM) package manager, which has been replaced by ``Dandified yum'' (DNF).
+If the latter isn't available on your system, you can use @command{yum} instead of @command{dnf} in the command below.
 @example
 $ sudo dnf install ghostscript libtool libjpeg-devel        \
                    libtiff-devel libgit2-devel lzip         \
@@ -5652,18 +4288,14 @@ $ sudo dnf install ghostscript libtool libjpeg-devel    
    \
 @cindex Homebrew
 @cindex MacPorts
 @cindex @command{brew}
-@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system
-used on Apple devices. macOS does not come with a package manager
-pre-installed, but several widely used, third-party package managers exist,
-such as Homebrew or MacPorts. Both are free software. Currently we have
-only tested Gnuastro's installation with Homebrew as described below.
-
-If not already installed, first obtain Homebrew by following the
-instructions at @url{https://brew.sh}. Homebrew manages packages in
-different `taps'. To install WCSLIB (discussed in @ref{Mandatory
-dependencies}) via Homebrew you will need to @command{tap} into
-@command{brewsci/science} first (the tap may change in the future, but can
-be found by calling @command{brew search wcslib}).
+@url{https://en.wikipedia.org/wiki/MacOS,macOS} is the operating system used 
on Apple devices.
+macOS does not come with a package manager pre-installed, but several widely 
used, third-party package managers exist, such as Homebrew or MacPorts.
+Both are free software.
+Currently we have only tested Gnuastro's installation with Homebrew as 
described below.
+
+If not already installed, first obtain Homebrew by following the instructions 
at @url{https://brew.sh}.
+Homebrew manages packages in different `taps'.
+To install WCSLIB (discussed in @ref{Mandatory dependencies}) via Homebrew you 
will need to @command{tap} into @command{brewsci/science} first (the tap may 
change in the future, but can be found by calling @command{brew search wcslib}).
 @example
 $ brew install ghostscript libtool libjpeg libtiff          \
                libgit2 lzip                                 \
@@ -5676,12 +4308,9 @@ $ brew install wcslib
 @item @command{pacman} (Arch Linux)
 @cindex Arch Linux
 @cindex @command{pacman}
-@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller
-GNU/Linux distribution, which follows the KISS principle (``keep it simple,
-stupid'') as a general guideline. It ``focuses on elegance, code
-correctness, minimalism and simplicity, and expects the user to be willing
-to make some effort to understand the system's operation''. Arch Linux uses
-``Package manager'' (Pacman) to manage its packages/components.
+@url{https://en.wikipedia.org/wiki/Arch_Linux,Arch Linux} is a smaller 
GNU/Linux distribution, which follows the KISS principle (``keep it simple, 
stupid'') as a general guideline.
+It ``focuses on elegance, code correctness, minimalism and simplicity, and 
expects the user to be willing to make some effort to understand the system's 
operation''.
+Arch Linux uses ``Package manager'' (Pacman) to manage its packages/components.
 @example
 $ sudo pacman -S ghostscript libtool libjpeg libtiff        \
                  libgit2 lzip                               \
@@ -5692,15 +4321,10 @@ $ sudo pacman -S ghostscript libtool libjpeg libtiff    
    \
 @item @command{zypper} (openSUSE and SUSE Linux Enterprise Server)
 @cindex openSUSE
 @cindex SUSE Linux Enterprise Server
-@cindex @command{zypper}
-@url{https://www.opensuse.org,openSUSE} is a community project supported by
-@url{https://www.suse.com,SUSE} with both stable and rolling releases.
-SUSE Linux Enterprise
-Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the
-commercial offering which shares code and tools. Many additional packages
-are offered in the Build
-Service@footnote{@url{https://build.opensuse.org}}. openSUSE and SLES use
-@command{zypper} (cli) and YaST (GUI) for managing repositories and
+@cindex @command{zypper}
+@url{https://www.opensuse.org,openSUSE} is a community project supported by @url{https://www.suse.com,SUSE} with both stable and rolling releases.
+SUSE Linux Enterprise Server@footnote{@url{https://www.suse.com/products/server}} (SLES) is the commercial offering which shares code and tools.
+Many additional packages are offered in the Build Service@footnote{@url{https://build.opensuse.org}}.
+openSUSE and SLES use @command{zypper} (CLI) and YaST (GUI) for managing repositories and
 packages.
 
 @example
@@ -5710,8 +4334,7 @@ $ sudo zypper install ghostscript_any libtool pkgconfig   
 \
               wcslib-devel
 @end example
 @noindent
-When building Gnuastro, run the configure script with the following
-@code{CPPFLAGS} environment variable:
+When building Gnuastro, run the configure script with the following 
@code{CPPFLAGS} environment variable:
 
 @example
 $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
@@ -5721,20 +4344,14 @@ $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
 @c in @command{zypper}. Just make sure it is the most recent version.
 @end table
 
-Usually, when libraries are installed by operating system package managers,
-there should be no problems when configuring and building other programs
-from source (that depend on the libraries: Gnuastro in this case). However,
-in some special conditions, problems may pop-up during the configuration,
-building, or checking/running any of Gnuastro's programs. The most common
-of such problems and their solution are discussed below.
+Usually, when libraries are installed by operating system package managers, there should be no problems when configuring and building other programs from source (that depend on the libraries: Gnuastro in this case).
+However, in some special conditions, problems may pop up during the configuration, building, or checking/running any of Gnuastro's programs.
+The most common of such problems and their solutions are discussed below.
 
 @cartouche
 @noindent
-@strong{Not finding library during configuration:} If a library is
-installed, but during Gnuastro's @command{configure} step the library isn't
-found, then configure Gnuastro like the command below (correcting
-@file{/path/to/lib}). For more, see @ref{Known issues} and
-@ref{Installation directory}.
+@strong{Not finding library during configuration:} If a library is installed, 
but during Gnuastro's @command{configure} step the library isn't found, then 
configure Gnuastro like the command below (correcting @file{/path/to/lib}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure LDFLAGS="-L/path/to/lib"
 @end example
@@ -5742,11 +4359,8 @@ $ ./configure LDFLAGS="-L/path/to/lib"
 
 @cartouche
 @noindent
-@strong{Not finding header (.h) files while building:} If a library is
-installed, but during Gnuastro's @command{make} step, the library's header
-(file with a @file{.h} suffix) isn't found, then configure Gnuastro like
-the command below (correcting @file{/path/to/include}). For more, see
-@ref{Known issues} and @ref{Installation directory}.
+@strong{Not finding header (.h) files while building:} If a library is 
installed, but during Gnuastro's @command{make} step, the library's header 
(file with a @file{.h} suffix) isn't found, then configure Gnuastro like the 
command below (correcting @file{/path/to/include}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure CPPFLAGS="-I/path/to/include"
 @end example
@@ -5754,11 +4368,9 @@ $ ./configure CPPFLAGS="-I/path/to/include"
 
 @cartouche
 @noindent
-@strong{Gnuastro's programs don't run during check or after install:} If a
-library is installed, but the programs don't run due to linking problems,
-set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is
-installed in @file{/path/to/installed}). For more, see @ref{Known issues}
-and @ref{Installation directory}.
+@strong{Gnuastro's programs don't run during check or after install:}
+If a library is installed, but the programs don't run due to linking problems, 
set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is 
installed in @file{/path/to/installed}).
+For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
 @end example
@@ -5775,15 +4387,12 @@ $ export 
LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
 @node Downloading the source, Build and install, Dependencies, Installation
 @section Downloading the source
 
-Gnuastro's source code can be downloaded in two ways. As a tarball, ready
-to be configured and installed on your system (as described in @ref{Quick
-start}), see @ref{Release tarball}. If you want official releases of stable
-versions this is the best, easiest and most common option. Alternatively,
-you can clone the version controlled history of Gnuastro, run one extra
-bootstrapping step and then follow the same steps as the tarball. This will
-give you access to all the most recent work that will be included in the
-next release along with the full project history. The process is thoroughly
-introduced in @ref{Version controlled source}.
+Gnuastro's source code can be downloaded in two ways.
+The first is as a tarball, ready to be configured and installed on your system (as described in @ref{Quick start}), see @ref{Release tarball}.
+If you want official releases of stable versions, this is the best, easiest and most common option.
+Alternatively, you can clone the version controlled history of Gnuastro, run one extra bootstrapping step and then follow the same steps as the tarball.
+This will give you access to all the most recent work that will be included in the next release, along with the full project history.
+The process is thoroughly introduced in @ref{Version controlled source}.
 
 
 
@@ -5795,59 +4404,43 @@ introduced in @ref{Version controlled source}.
 @node Release tarball, Version controlled source, Downloading the source, 
Downloading the source
 @subsection Release tarball
 
-A release tarball (commonly compressed) is the most common way of obtaining
-free and open source software. A tarball is a snapshot of one particular
-moment in the Gnuastro development history along with all the necessary
-files to configure, build, and install Gnuastro easily (see @ref{Quick
-start}). It is very straightforward and needs the least set of dependencies
-(see @ref{Mandatory dependencies}). Gnuastro has tarballs for official
-stable releases and pre-releases for testing. See @ref{Version numbering}
-for more on the two types of releases and the formats of the version
-numbers. The URLs for each type of release are given below.
+A release tarball (commonly compressed) is the most common way of obtaining free and open source software.
+A tarball is a snapshot of one particular moment in the Gnuastro development history along with all the necessary files to configure, build, and install Gnuastro easily (see @ref{Quick start}).
+It is very straightforward and needs the smallest set of dependencies (see @ref{Mandatory dependencies}).
+Gnuastro has tarballs for official stable releases and pre-releases for testing.
+See @ref{Version numbering} for more on the two types of releases and the formats of the version numbers.
+The URLs for each type of release are given below.
 
 @table @asis
 
 @item Official stable releases (@url{http://ftp.gnu.org/gnu/gnuastro}):
-This URL hosts the official stable releases of Gnuastro. Always use the
-most recent version (see @ref{Version numbering}). By clicking on the
-``Last modified'' title of the second column, the files will be sorted by
-their date which you can also use to find the latest version. It is
-recommended to use a mirror to download these tarballs, please visit
-@url{http://ftpmirror.gnu.org/gnuastro/} and see below.
+This URL hosts the official stable releases of Gnuastro.
+Always use the most recent version (see @ref{Version numbering}).
+By clicking on the ``Last modified'' title of the second column, the files will be sorted by their date, which you can also use to find the latest version.
+It is recommended to use a mirror to download these tarballs; please visit @url{http://ftpmirror.gnu.org/gnuastro/} and see below.
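+
+For example, a tarball can be downloaded directly on the command-line through the mirror redirector like below (a sketch; replace @file{X.X} with the latest version number):
+
+@example
+$ wget http://ftpmirror.gnu.org/gnuastro/gnuastro-X.X.tar.lz
+@end example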
 
 @item Pre-release tar-balls (@url{http://alpha.gnu.org/gnu/gnuastro}):
-This URL contains unofficial pre-release versions of Gnuastro. The
-pre-release versions of Gnuastro here are for enthusiasts to try out before
-an official release. If there are problems, or bugs then the testers will
-inform the developers to fix before the next official release. See
-@ref{Version numbering} to understand how the version numbers here are
-formatted. If you want to remain even more up-to-date with the developing
-activities, please clone the version controlled source as described in
-@ref{Version controlled source}.
+This URL contains unofficial pre-release versions of Gnuastro.
+The pre-release versions of Gnuastro here are for enthusiasts to try out before an official release.
+If there are problems or bugs, the testers will inform the developers to fix them before the next official release.
+See @ref{Version numbering} to understand how the version numbers here are formatted.
+If you want to remain even more up-to-date with the developing activities, please clone the version controlled source as described in @ref{Version controlled source}.
 
 @end table
 
 @cindex Gzip
 @cindex Lzip
-Gnuastro's official/stable tarball is released with two formats: Gzip (with
-suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}). The
-pre-release tarballs (after version 0.3) are released only as an Lzip
-tarball. Gzip is a very well-known and widely used compression program
-created by GNU and available in most systems. However, Lzip provides a
-better compression ratio and more robust archival capacity. For example
-Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively,
-see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage} for
-more. Lzip might not be pre-installed in your operating system, if so,
-installing it from your operating system's package manager or from source
-is very easy and fast (it is a very small program).
-
-The GNU FTP server is mirrored (has backups) in various locations on the
-globe (@url{http://www.gnu.org/order/ftp.html}). You can use the closest
-mirror to your location for a more faster download. Note that only some
-mirrors keep track of the pre-release (alpha) tarballs. Also note that if
-you want to download immediately after and announcement (see
-@ref{Announcements}), the mirrors might need some time to synchronize with
-the main GNU FTP server.
+Gnuastro's official/stable tarball is released with two formats: Gzip (with suffix @file{.tar.gz}) and Lzip (with suffix @file{.tar.lz}).
+The pre-release tarballs (after version 0.3) are released only as an Lzip tarball.
+Gzip is a very well-known and widely used compression program created by GNU and available in most systems.
+However, Lzip provides a better compression ratio and more robust archival capacity.
+For example, Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip respectively, see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip webpage} for more.
+Lzip might not be pre-installed on your operating system; if so, installing it from your operating system's package manager or from source is very easy and fast (it is a very small program).
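+
+Once Lzip is installed, recent versions of GNU Tar can detect and decompress the Lzip tarball directly, so unpacking is a single command (a sketch; replace @file{X.X} with your downloaded version):
+
+@example
+$ tar -xf gnuastro-X.X.tar.lz
+@end example
+
+@noindent
+If your Tar is too old to recognize Lzip, you can pipe the two programs manually: @command{lzip -cd gnuastro-X.X.tar.lz | tar -xf -}.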
+
+The GNU FTP server is mirrored (has backups) in various locations on the globe (@url{http://www.gnu.org/order/ftp.html}).
+You can use the closest mirror to your location for a faster download.
+Note that only some mirrors keep track of the pre-release (alpha) tarballs.
+Also note that if you want to download immediately after an announcement (see @ref{Announcements}), the mirrors might need some time to synchronize with the main GNU FTP server.
 
 
 @node Version controlled source,  , Release tarball, Downloading the source
@@ -5855,28 +4448,16 @@ the main GNU FTP server.
 
 @cindex Git
 @cindex Version control
-The publicly distributed Gnuastro tar-ball (for example
-@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is
-only a snapshot of the source code at one significant instant of Gnuastro's
-history (specified by the version number, see @ref{Version numbering}),
-ready to be configured and built. To be able to develop successfully, the
-revision history of the code can be very useful to track when something was
-added or changed, also some updates that are not yet officially released
-might be in it.
-
-We use Git for the version control of Gnuastro. For those who are not
-familiar with it, we recommend the @url{https://git-scm.com/book/en, ProGit
-book}. The whole book is publicly available for online reading and
-downloading and does a wonderful job at explaining the concepts and best
-practices.
-
-Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory
-(can be any directory, change the value below). The full version controlled
-history of Gnuastro can be cloned in @file{TOPGNUASTRO/gnuastro} by running
-the following commands@footnote{If your internet connection is active, but
-Git complains about the network, it might be due to your network setup not
-recognizing the Git protocol. In that case use the following URL which uses
-the HTTP protocol instead: @command{http://git.sv.gnu.org/r/gnuastro.git}}:
+The publicly distributed Gnuastro tarball (for example @file{gnuastro-X.X.tar.gz}) does not contain the revision history; it is only a snapshot of the source code at one significant instant of Gnuastro's history (specified by the version number, see @ref{Version numbering}), ready to be configured and built.
+To be able to develop successfully, the revision history of the code can be very useful to track when something was added or changed; some updates that are not yet officially released might also be in it.
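+
+As a sketch of what the revision history enables (using Git commands introduced below; the file name here is just a hypothetical illustration), you can trace every change to a given source file:
+
+@example
+$ git log --oneline -- lib/arithmetic.c
+@end example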
+
+We use Git for the version control of Gnuastro.
+For those who are not familiar with it, we recommend the 
@url{https://git-scm.com/book/en, ProGit book}.
+The whole book is publicly available for online reading and downloading and 
does a wonderful job at explaining the concepts and best practices.
+
+Let's assume you want to keep Gnuastro in the @file{TOPGNUASTRO} directory 
(can be any directory, change the value below).
+The full version controlled history of Gnuastro can be cloned in 
@file{TOPGNUASTRO/gnuastro} by running the following commands@footnote{If your 
internet connection is active, but Git complains about the network, it might be 
due to your network setup not recognizing the Git protocol.
+In that case use the following URL which uses the HTTP protocol instead: 
@command{http://git.sv.gnu.org/r/gnuastro.git}}:
 
 @example
 $ TOPGNUASTRO=/home/yourname/Research/projects/
@@ -5885,28 +4466,18 @@ $ git clone git://git.sv.gnu.org/gnuastro.git
 @end example
 
 @noindent
-The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written
-(version controlled) source code for Gnuastro's programs, libraries, this
-book and the tests. All are divided into sub-directories with standard and
-very descriptive names. The version controlled files in the top cloned
-directory are either mainly in capital letters (for example @file{THANKS}
-and @file{README}) or mainly written in small-caps (for example
-@file{configure.ac} and @file{Makefile.am}). The former are
-non-programming, standard writing for human readers containing high-level
-information about the whole package. The latter are instructions to
-customize the GNU build system for Gnuastro. For more on Gnuastro's source
-code structure, please see @ref{Developing}. We won't go any deeper here.
+The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version controlled) source code for Gnuastro's programs, libraries, this book and the tests.
+All are divided into sub-directories with standard and very descriptive names.
+The version controlled files in the top cloned directory are either mainly in capital letters (for example @file{THANKS} and @file{README}) or mainly written in lower-case letters (for example @file{configure.ac} and @file{Makefile.am}).
+The former are non-programming, standard writing for human readers containing high-level information about the whole package.
+The latter are instructions to customize the GNU build system for Gnuastro.
+For more on Gnuastro's source code structure, please see @ref{Developing}.
+We won't go any deeper here.
 
-The cloned Gnuastro source cannot immediately be configured, compiled, or
-installed since it only contains hand-written files, not automatically
-generated or imported files which do all the hard work of the build
-process. See @ref{Bootstrapping} for the process of generating and
-importing those files (its not too hard!). Once you have bootstrapped
-Gnuastro, you can run the standard procedures (in @ref{Quick start}). Very
-soon after you have cloned it, Gnuastro's main @file{master} branch will be
-updated on the main repository (since the developers are actively working
-on Gnuastro), for the best practices in keeping your local history in sync
-with the main repository see @ref{Synchronizing}.
+The cloned Gnuastro source cannot immediately be configured, compiled, or installed since it only contains hand-written files, not the automatically generated or imported files which do all the hard work of the build process.
+See @ref{Bootstrapping} for the process of generating and importing those files (it's not too hard!).
+Once you have bootstrapped Gnuastro, you can run the standard procedures (in @ref{Quick start}).
+Very soon after you have cloned it, Gnuastro's main @file{master} branch will be updated on the main repository (since the developers are actively working on Gnuastro); for the best practices in keeping your local history in sync with the main repository, see @ref{Synchronizing}.
 
 
 
@@ -5926,63 +4497,37 @@ with the main repository see @ref{Synchronizing}.
 @cindex GNU Portability Library (Gnulib)
 @cindex Automatically created build files
 @noindent
-The version controlled source code lacks the source files that we have not
-written or are automatically built. These automatically generated files are
-included in the distributed tar ball for each distribution (for example
-@file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy
-to immediately configure, build, and install Gnuastro. However from the
-perspective of version control, they are just bloatware and sources of
-confusion (since they are not changed by Gnuastro developers).
-
-The process of automatically building and importing necessary files into
-the cloned directory is known as @emph{bootstrapping}. All the instructions
-for an automatic bootstrapping are available in @file{bootstrap} and
-configured using @file{bootstrap.conf}. @file{bootstrap} and @file{COPYING}
-(which contains the software copyright notice) are the only files not
-written by Gnuastro developers but under version control to enable simple
-bootstrapping and legal information on usage immediately after
-cloning. @file{bootstrap.conf} is maintained by the GNU Portability Library
-(Gnulib) and this file is an identical copy, so do not make any changes in
-this file since it will be replaced when Gnulib releases an update. Make
-all your changes in @file{bootstrap.conf}.
-
-The bootstrapping process has its own separate set of dependencies, the
-full list is given in @ref{Bootstrapping dependencies}. They are generally
-very low-level and used by a very large set of commonly used programs, so
-they are probably already installed on your system. The simplest way to
-bootstrap Gnuastro is to simply run the bootstrap script within your cloned
-Gnuastro directory as shown below. However, please read the next paragraph
-before doing so (see @ref{Version controlled source} for
-@file{TOPGNUASTRO}).
+The version controlled source code lacks the source files that we have not written ourselves, namely those that are automatically built or imported.
+These automatically generated files are included in the distributed tarball for each release (for example @file{gnuastro-X.X.tar.gz}, see @ref{Version numbering}) and make it easy to immediately configure, build, and install Gnuastro.
+However, from the perspective of version control, they are just bloatware and sources of confusion (since they are not changed by Gnuastro developers).
+
+The process of automatically building and importing necessary files into the cloned directory is known as @emph{bootstrapping}.
+All the instructions for an automatic bootstrapping are available in @file{bootstrap} and configured using @file{bootstrap.conf}.
+@file{bootstrap} and @file{COPYING} (which contains the software copyright notice) are the only files not written by Gnuastro developers, but are under version control to enable simple bootstrapping and access to legal information immediately after cloning.
+@file{bootstrap} is maintained by the GNU Portability Library (Gnulib) and this file is an identical copy, so do not make any changes in it since it will be replaced when Gnulib releases an update.
+Make all your changes in @file{bootstrap.conf}.
+
+The bootstrapping process has its own separate set of dependencies; the full list is given in @ref{Bootstrapping dependencies}.
+They are generally very low-level and used by a very large set of commonly used programs, so they are probably already installed on your system.
+The simplest way to bootstrap Gnuastro is to run the bootstrap script within your cloned Gnuastro directory as shown below.
+However, please read the next paragraph before doing so (see @ref{Version controlled source} for @file{TOPGNUASTRO}).
 
 @example
 $ cd TOPGNUASTRO/gnuastro
 $ ./bootstrap                      # Requires internet connection
 @end example
 
-Without any options, @file{bootstrap} will clone Gnulib within your cloned
-Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the
-necessary Autoconf archives macros. So if you run bootstrap like this, you
-will need an internet connection every time you decide to bootstrap. Also,
-Gnulib is a large package and cloning it can be slow. It will also keep the
-full Gnulib repository within your Gnuastro repository, so if another one
-of your projects also needs Gnulib, and you insist on running bootstrap
-like this, you will have two copies. In case you regularly backup your
-important files, Gnulib will also slow down the backup process. Therefore
-while the simple invocation above can be used with no problem, it is not
-recommended. To do better, see the next paragraph.
-
-The recommended way to get these two packages is thoroughly discussed in
-@ref{Bootstrapping dependencies} (in short: clone them in the separate
-@file{DEVDIR/} directory). The following commands will take you into the
-cloned Gnuastro directory and run the @file{bootstrap} script, while
-telling it to copy some files (instead of making symbolic links, with the
-@option{--copy} option, this is not mandatory@footnote{The @option{--copy}
-option is recommended because some backup systems might do strange things
-with symbolic links.}) and where to look for Gnulib (with the
-@option{--gnulib-srcdir} option). Please note that the address given to
-@option{--gnulib-srcdir} has to be an absolute address (so don't use
-@file{~} or @file{../} for example).
+Without any options, @file{bootstrap} will clone Gnulib within your cloned Gnuastro directory (@file{TOPGNUASTRO/gnuastro/gnulib}) and download the necessary Autoconf archive macros.
+So if you run bootstrap like this, you will need an internet connection every time you decide to bootstrap.
+Also, Gnulib is a large package and cloning it can be slow.
+It will also keep the full Gnulib repository within your Gnuastro repository, so if another one of your projects also needs Gnulib, and you insist on running bootstrap like this, you will have two copies.
+In case you regularly back up your important files, Gnulib will also slow down the backup process.
+Therefore, while the simple invocation above can be used with no problem, it is not recommended.
+To do better, see the next paragraph.
+
+The recommended way to get these two packages is thoroughly discussed in 
@ref{Bootstrapping dependencies} (in short: clone them in the separate 
@file{DEVDIR/} directory).
+The following commands will take you into the cloned Gnuastro directory and 
run the @file{bootstrap} script, while telling it to copy some files (instead 
of making symbolic links, with the @option{--copy} option, this is not 
mandatory@footnote{The @option{--copy} option is recommended because some 
backup systems might do strange things with symbolic links.}) and where to look 
for Gnulib (with the @option{--gnulib-srcdir} option).
+Please note that the address given to @option{--gnulib-srcdir} has to be an 
absolute address (so don't use @file{~} or @file{../} for example).
 
 @example
 $ cd $TOPGNUASTRO/gnuastro
@@ -5996,68 +4541,49 @@ $ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
 @cindex GNU Automake
 @cindex GNU C library
 @cindex GNU build system
-Since Gnulib and Autoconf archives are now available in your local
-directories, you don't need an internet connection every time you decide to
-remove all untracked files and redo the bootstrap (see box below). You can
-also use the same command on any other project that uses Gnulib. All the
-necessary GNU C library functions, Autoconf macros and Automake inputs are
-now available along with the book figures. The standard GNU build system
-(@ref{Quick start}) will do the rest of the job.
+Since Gnulib and Autoconf archives are now available in your local 
directories, you don't need an internet connection every time you decide to 
remove all untracked files and redo the bootstrap (see box below).
+You can also use the same command on any other project that uses Gnulib.
+All the necessary GNU C library functions, Autoconf macros and Automake inputs 
are now available along with the book figures.
+The standard GNU build system (@ref{Quick start}) will do the rest of the job.
 
 @cartouche
 @noindent
-@strong{Undoing the bootstrap:} During the development, it might happen
-that you want to remove all the automatically generated and imported
-files. In other words, you might want to reverse the bootstrap
-process. Fortunately Git has a good program for this job: @command{git
-clean}. Run the following command and every file that is not version
-controlled will be removed.
+@strong{Undoing the bootstrap:}
+During the development, it might happen that you want to remove all the 
automatically generated and imported files.
+In other words, you might want to reverse the bootstrap process.
+Fortunately Git has a good program for this job: @command{git clean}.
+Run the following command and every file that is not version controlled will 
be removed.
 
 @example
 git clean -fxd
 @end example
 
 @noindent
-It is best to commit any recent change before running this
-command. You might have created new files since the last commit and if
-they haven't been committed, they will all be gone forever (using
-@command{rm}). To get a list of the non-version controlled files
-instead of deleting them, add the @option{n} option to @command{git
-clean}, so it becomes @option{-fxdn}.
+It is best to commit any recent change before running this command.
+You might have created new files since the last commit and if they haven't 
been committed, they will all be gone forever (using @command{rm}).
+To get a list of the non-version controlled files instead of deleting them, 
add the @option{n} option to @command{git clean}, so it becomes @option{-fxdn}.
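+
+As a sketch, a safe way to use it is a dry run first:
+
+@example
+$ git clean -fxdn     # Only list what would be removed.
+$ git clean -fxd      # Actually remove the files.
+@end example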
 @end cartouche
 
-Besides the @file{bootstrap} and @file{bootstrap.conf}, the
-@file{bootstrapped/} directory and @file{README-hacking} file are also
-related to the bootstrapping process. The former hosts all the imported
-(bootstrapped) directories. Thus, in the version controlled source, it only
-contains a @file{REAME} file, but in the distributed tar-ball it also
-contains sub-directories filled with all bootstrapped
-files. @file{README-hacking} contains a summary of the bootstrapping
-process discussed in this section. It is a necessary reference when you
-haven't built this book yet. It is thus not distributed in the Gnuastro
-tarball.
+Besides @file{bootstrap} and @file{bootstrap.conf}, the @file{bootstrapped/} directory and @file{README-hacking} file are also related to the bootstrapping process.
+The former hosts all the imported (bootstrapped) directories.
+Thus, in the version controlled source, it only contains a @file{README} file, but in the distributed tarball it also contains sub-directories filled with all the bootstrapped files.
+@file{README-hacking} contains a summary of the bootstrapping process discussed in this section.
+It is a necessary reference when you haven't built this book yet.
+It is thus not distributed in the Gnuastro tarball.
 
 
 @node Synchronizing,  , Bootstrapping, Version controlled source
 @subsubsection Synchronizing
 
-The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed:
-you mainly need it after you have cloned Gnuastro (once) and whenever you
-want to re-import the files from Gnulib, or Autoconf
-archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is
-defined for you to check if significant (for Gnuastro) updates are made in
-these repositories, since the last time you pulled from them.} (not too
-common). However, Gnuastro developers are constantly working on Gnuastro
-and are pushing their changes to the official repository. Therefore, your
-local Gnuastro clone will soon be out-dated. Gnuastro has two mailing lists
-dedicated to its developing activities (see @ref{Developing mailing
-lists}).  Subscribing to them can help you decide when to synchronize with
-the official repository.
-
-To pull all the most recent work in Gnuastro, run the following command
-from the top Gnuastro directory. If you don't already have a built system,
-ignore @command{make distclean}. The separate steps are described in detail
-afterwards.
+The bootstrapping script (see @ref{Bootstrapping}) is not regularly needed: you mainly need it after you have cloned Gnuastro (once) and whenever you want to re-import the files from Gnulib or the Autoconf archives@footnote{@url{https://savannah.gnu.org/task/index.php?13993} is defined for you to check if significant (for Gnuastro) updates are made in these repositories, since the last time you pulled from them.} (not too common).
+However, Gnuastro developers are constantly working on Gnuastro and are pushing their changes to the official repository.
+Therefore, your local Gnuastro clone will soon be outdated.
+Gnuastro has two mailing lists dedicated to its developing activities (see @ref{Developing mailing lists}).
+Subscribing to them can help you decide when to synchronize with the official repository.
+
+To pull all the most recent work in Gnuastro, run the following command from 
the top Gnuastro directory.
+If you don't already have a built system, ignore @command{make distclean}.
+The separate steps are described in detail afterwards.
 
 @example
 $ make distclean && git pull && autoreconf -f
@@ -6075,40 +4601,25 @@ $ autoreconf -f
 @cindex GNU Autoconf
 @cindex Mailing list: info-gnuastro
 @cindex @code{info-gnuastro@@gnu.org}
-If Gnuastro was already built in this directory, you don't want some
-outputs from the previous version being mixed with outputs from the newly
-pulled work. Therefore, the first step is to clean/delete all the built
-files with @command{make distclean}. Fortunately the GNU build system
-allows the separation of source and built files (in separate
-directories). This is a great feature to keep your source directory clean
-and you can use it to avoid the cleaning step. Gnuastro comes with a script
-with some useful options for this job. It is useful if you regularly pull
-recent changes, see @ref{Separate build and source directories}.
-
-After the pull, we must re-configure Gnuastro with @command{autoreconf -f}
-(part of GNU Autoconf). It will update the @file{./configure} script and
-all the @file{Makefile.in}@footnote{In the GNU build system,
-@command{./configure} will use the @file{Makefile.in} files to create the
-necessary @file{Makefile} files that are later read by @command{make} to
-build the package.} files based on the hand-written configurations (in
-@file{configure.ac} and the @file{Makefile.am} files). After running
-@command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up, you
-can ignore that.
-
-The most important reason for re-building Gnuastro's build system is to
-generate/update the version number for your updated Gnuastro snapshot. This
-generated version number will include the commit information (see
-@ref{Version numbering}). The version number is included in nearly all
-outputs of Gnuastro's programs, therefore it is vital for reproducing an
-old result.
-
-As a summary, be sure to run `@command{autoreconf -f}' after every change
-in the Git history. This includes synchronization with the main server or
-even a commit you have made yourself.
-
-If you would like to see what has changed since you last synchronized your
-local clone, you can take the following steps instead of the simple command
-above (don't type anything after @code{#}):
+If Gnuastro was already built in this directory, you don't want some outputs 
from the previous version being mixed with outputs from the newly pulled work.
+Therefore, the first step is to clean/delete all the built files with 
@command{make distclean}.
+Fortunately, the GNU build system allows the separation of source and built files (in separate directories).
+This is a great feature for keeping your source directory clean, and you can use it to avoid the cleaning step.
+Gnuastro comes with a script with some useful options for this job.
+It is particularly handy if you regularly pull recent changes, see @ref{Separate build and source directories}.
+
+After the pull, we must re-configure Gnuastro with @command{autoreconf -f} 
(part of GNU Autoconf).
+It will update the @file{./configure} script and all the 
@file{Makefile.in}@footnote{In the GNU build system, @command{./configure} will 
use the @file{Makefile.in} files to create the necessary @file{Makefile} files 
that are later read by @command{make} to build the package.} files based on the 
hand-written configurations (in @file{configure.ac} and the @file{Makefile.am} 
files).
+After running @command{autoreconf -f}, a warning about @code{TEXI2DVI} might show up; you can ignore it.
+
+The most important reason for re-building Gnuastro's build system is to 
generate/update the version number for your updated Gnuastro snapshot.
+This generated version number will include the commit information (see 
@ref{Version numbering}).
+The version number is included in nearly all outputs of Gnuastro's programs; it is therefore vital for reproducing an old result.
+
+In summary, be sure to run `@command{autoreconf -f}' after every change in the Git history.
+This includes synchronization with the main server, or even a commit you have made yourself.
+
+If you would like to see what has changed since you last synchronized your 
local clone, you can take the following steps instead of the simple command 
above (don't type anything after @code{#}):
 
 @example
 $ git checkout master             # Confirm if you are on master.
@@ -6119,22 +4630,14 @@ $ autoreconf -f                   # Update the build system.
 @end example
 
 @noindent
-By default @command{git log} prints the most recent commit first, add the
-@option{--reverse} option to see the changes chronologically. To see
-exactly what has been changed in the source code along with the commit
-message, add a @option{-p} option to the @command{git log}.
+By default @command{git log} prints the most recent commit first; add the @option{--reverse} option to see the changes chronologically.
+To see exactly what has been changed in the source code along with the commit 
message, add a @option{-p} option to the @command{git log}.
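+
+For example, combining the two options above (oldest commits first, each with its full patch):
+
+@example
+$ git log --reverse -p            # Chronological order, with diffs.
+@end example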
 
-If you want to make changes in the code, have a look at @ref{Developing} to
-get started easily. Be sure to commit your changes in a separate branch
-(keep your @code{master} branch to follow the official repository) and
-re-run @command{autoreconf -f} after the commit. If you intend to send your
-work to us, you can safely use your commit since it will be ultimately
-recorded in Gnuastro's official history. If not, please upload your
-separate branch to a public hosting service, for example
-@url{https://gitlab.com, GitLab}, and link to it in your
-report/paper. Alternatively, run @command{make distcheck} and upload the
-output @file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage
-so your results can be considered scientific (reproducible) later.
+If you want to make changes in the code, have a look at @ref{Developing} to 
get started easily.
+Be sure to commit your changes in a separate branch (keep your @code{master} 
branch to follow the official repository) and re-run @command{autoreconf -f} 
after the commit.
+If you intend to send your work to us, you can safely use your commit since it 
will be ultimately recorded in Gnuastro's official history.
+If not, please upload your separate branch to a public hosting service, for 
example @url{https://gitlab.com, GitLab}, and link to it in your report/paper.
+Alternatively, run @command{make distcheck} and upload the output 
@file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible webpage so your 
results can be considered scientific (reproducible) later.
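+
+As a minimal sketch of this workflow (the branch and remote names here are only hypothetical examples):
+
+@example
+$ git checkout -b my-fix               # Work on a separate branch.
+$ git commit -a                        # Commit your changes there.
+$ git remote add public \
+      https://gitlab.com/YOUR-USERNAME/gnuastro.git
+$ git push public my-fix               # Upload it for public access.
+@end example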
 
 
 
@@ -6151,25 +4654,12 @@ so your results can be considered scientific (reproducible) later.
 @node Build and install,  , Downloading the source, Installation
 @section Build and install
 
-This section is basically a longer explanation to the sequence of commands
-given in @ref{Quick start}. If you didn't have any problems during the
-@ref{Quick start} steps, you want to have all the programs of Gnuastro
-installed in your system, you don't want to change the executable names
-during or after installation, you have root access to install the programs
-in the default system wide directory, the Letter paper size of the print
-book is fine for you or as a summary you don't feel like going into the
-details when everything is working, you can safely skip this section.
-
-If you have any of the above problems or you want to understand the details
-for a better control over your build and install, read along. The
-dependencies which you will need prior to configuring, building and
-installing Gnuastro are explained in @ref{Dependencies}. The first three
-steps in @ref{Quick start} need no extra explanation, so we will skip them
-and start with an explanation of Gnuastro specific configuration options
-and a discussion on the installation directory in @ref{Configuring},
-followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and
-@ref{Known issues} which explains the solutions to known problems you might
-encounter in the installation steps and ways you can solve them.
+This section is basically a longer explanation of the sequence of commands given in @ref{Quick start}.
+If you didn't have any problems during the @ref{Quick start} steps, want all of Gnuastro's programs installed, don't want to change the executable names during or after installation, have root access to install the programs in the default system-wide directory, and are happy with the Letter paper size of the printed book (in short, if you don't feel like going into the details when everything is working), you can safely skip this section.
+
+If you have any of the above problems or you want to understand the details 
for a better control over your build and install, read along.
+The dependencies which you will need prior to configuring, building and 
installing Gnuastro are explained in @ref{Dependencies}.
+The first three steps in @ref{Quick start} need no extra explanation, so we will skip them and start with an explanation of Gnuastro-specific configuration options and a discussion on the installation directory in @ref{Configuring}, followed by some smaller subsections: @ref{Tests}, @ref{A4 print book}, and @ref{Known issues}, which describes problems you might encounter in the installation steps and ways to solve them.
 
 
 @menu
@@ -6189,21 +4679,16 @@ encounter in the installation steps and ways you can solve them.
 
 @pindex ./configure
 @cindex Configuring
-The @command{$ ./configure} step is the most important step in the
-build and install process. All the required packages, libraries,
-headers and environment variables are checked in this step. The
-behaviors of make and make install can also be set through command
-line options to this command.
+The @command{$ ./configure} step is the most important step in the build and install process.
+All the required packages, libraries, headers and environment variables are checked in this step.
+The behavior of @command{make} and @command{make install} can also be set through command-line options to this command.
 
 @cindex Configure options
 @cindex Customizing installation
 @cindex Installation, customizing
-The configure script accepts various arguments and options which
-enable the final user to highly customize whatever she is
-building. The options to configure are generally very similar to
-normal program options explained in @ref{Arguments and
-options}. Similar to all GNU programs, you can get a full list of the
-options along with a short explanation by running
+The configure script accepts various arguments and options which enable the 
final user to highly customize whatever she is building.
+The options to configure are generally very similar to normal program options 
explained in @ref{Arguments and options}.
+Similar to all GNU programs, you can get a full list of the options along with 
a short explanation by running
 
 @example
 $ ./configure --help
@@ -6211,14 +4696,10 @@ $ ./configure --help
 
 @noindent
 @cindex GNU Autoconf
-A complete explanation is also included in the @file{INSTALL} file. Note
-that this file was written by the authors of GNU Autoconf (which builds the
-@file{configure} script), therefore it is common for all programs which use
-the @command{$ ./configure} script for building and installing, not just
-Gnuastro. Here we only discuss cases where you don't have super-user access
-to the system and if you want to change the executable names. But before
-that, a review of the options to configure that are particular to Gnuastro
-are discussed.
+A complete explanation is also included in the @file{INSTALL} file.
+Note that this file was written by the authors of GNU Autoconf (which builds the @file{configure} script), so it is common to all programs which use the @command{$ ./configure} script for building and installing, not just Gnuastro.
+Here we only discuss cases where you don't have super-user access to the system, or where you want to change the executable names.
+But before that, the options to configure that are particular to Gnuastro are reviewed.
 
 @menu
 * Gnuastro configure options::  Configure options particular to Gnuastro.
@@ -6232,11 +4713,9 @@ are discussed.
 
 @cindex @command{./configure} options
 @cindex Configure options particular to Gnuastro
-Most of the options to configure (which are to do with building) are
-similar for every program which uses this script. Here the options
-that are particular to Gnuastro are discussed. The next topics explain
-the usage of other configure options which can be applied to any
-program using the GNU build system (through the configure script).
+Most of the options to configure (which have to do with building) are similar for every program which uses this script.
+Here the options that are particular to Gnuastro are discussed.
+The next topics explain the usage of other configure options which can be applied to any program using the GNU build system (through the configure script).
 
 @vtable @option
 
@@ -6244,135 +4723,102 @@ program using the GNU build system (through the configure script).
 @cindex Valgrind
 @cindex Debugging
 @cindex GNU Debugger
-Compile/build Gnuastro with debugging information, no optimization and
-without shared libraries.
-
-In order to allow more efficient programs when using Gnuastro (after the
-installation), by default Gnuastro is built with a 3rd level (a very high
-level) optimization and no debugging information. By default, libraries are
-also built for static @emph{and} shared linking (see
-@ref{Linking}). However, when there are crashes or unexpected behavior,
-these three features can hinder the process of localizing the problem. This
-configuration option is identical to manually calling the configuration
-script with @code{CFLAGS="-g -O0" --disable-shared}.
-
-In the (rare) situations where you need to do your debugging on the shared
-libraries, don't use this option. Instead run the configure script by
-explicitly setting @code{CFLAGS} like this:
+Compile/build Gnuastro with debugging information, no optimization and without 
shared libraries.
+
+In order to allow more efficient programs when using Gnuastro (after the installation), by default Gnuastro is built with third-level (very high) optimization and no debugging information.
+By default, libraries are also built for static @emph{and} shared linking (see 
@ref{Linking}).
+However, when there are crashes or unexpected behavior, these three features 
can hinder the process of localizing the problem.
+This configuration option is identical to manually calling the configuration 
script with @code{CFLAGS="-g -O0" --disable-shared}.
+
+In the (rare) situations where you need to do your debugging on the shared libraries, don't use this option.
+Instead, run the configure script by explicitly setting @code{CFLAGS} like this:
 @example
 $ ./configure CFLAGS="-g -O0"
 @end example
 
 @item --enable-check-with-valgrind
 @cindex Valgrind
-Do the @command{make check} tests through Valgrind. Therefore, if any
-crashes or memory-related issues (segmentation faults in particular) occur
-in the tests, the output of Valgrind will also be put in the
-@file{tests/test-suite.log} file without having to manually modify the
-check scripts. This option will also activate Gnuastro's debug mode (see
-the @option{--enable-debug} configure-time option described above).
-
-Valgrind is free software. It is a program for easy checking of
-memory-related issues in programs. It runs a program within its own
-controlled environment and can thus identify the exact line-number in the
-program's source where a memory-related issue occurs. However, it can
-significantly slow-down the tests. So this option is only useful when a
-segmentation fault is found during @command{make check}.
+Do the @command{make check} tests through Valgrind.
+Therefore, if any crashes or memory-related issues (segmentation faults in 
particular) occur in the tests, the output of Valgrind will also be put in the 
@file{tests/test-suite.log} file without having to manually modify the check 
scripts.
+This option will also activate Gnuastro's debug mode (see the 
@option{--enable-debug} configure-time option described above).
+
+Valgrind is free software.
+It is a program for easy checking of memory-related issues in programs.
+It runs a program within its own controlled environment and can thus identify 
the exact line-number in the program's source where a memory-related issue 
occurs.
+However, it can significantly slow down the tests.
+So this option is only useful when a segmentation fault is found during @command{make check}.
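+
+For example, a typical session with this option might look like the following (inspecting the log with @command{less} is just one possibility):
+
+@example
+$ ./configure --enable-check-with-valgrind
+$ make check
+$ less tests/test-suite.log       # Inspect Valgrind's report.
+@end example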
 
 @item --enable-progname
-Only build and install @file{progname} along with any other program that is
-enabled in this fashion. @file{progname} is the name of the executable
-without the @file{ast}, for example @file{crop} for Crop (with the
-executable name of @file{astcrop}).
+Only build and install @file{progname} along with any other program that is enabled in this fashion.
+@file{progname} is the name of the executable without the @file{ast} prefix, for example @file{crop} for Crop (with the executable name @file{astcrop}).
 
-Note that by default all the programs will be installed. This option (and
-the @option{--disable-progname} options) are only relevant when you don't
-want to install all the programs. Therefore, if this option is called for
-any of the programs in Gnuastro, any program which is not explicitly
-enabled will not be built or installed.
+Note that by default all the programs will be installed.
+This option (and the @option{--disable-progname} options) is only relevant when you don't want to install all the programs.
+Therefore, if this option is called for any of the programs in Gnuastro, any 
program which is not explicitly enabled will not be built or installed.
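+
+For example, following the naming rule above, the run below would build and install only Crop and NoiseChisel (these two are just an illustration):
+
+@example
+$ ./configure --enable-crop --enable-noisechisel
+@end example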
 
 @item --disable-progname
 @itemx --enable-progname=no
-Do not build or install the program named @file{progname}. This is
-very similar to the @option{--enable-progname}, but will build and
-install all the other programs except this one.
+Do not build or install the program named @file{progname}.
+This is very similar to @option{--enable-progname}, but will build and install all the other programs except this one.
 
 @cartouche
 @noindent
-@strong{Note:} If some programs are enabled and some are disabled, it
-is equivalent to simply enabling those that were enabled. Listing the
-disabled programs is redundant.
+@strong{Note:} If some programs are enabled and some are disabled, it is 
equivalent to simply enabling those that were enabled.
+Listing the disabled programs is redundant.
 @end cartouche
 
 @item --enable-gnulibcheck
 @cindex GNU C library
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-Enable checks on the GNU Portability Library (Gnulib). Gnulib is used
-by Gnuastro to enable users of non-GNU based operating systems (that
-don't use GNU C library or glibc) to compile and use the advanced
-features that this library provides. We make extensive use of such
-functions. If you give this option to @command{$ ./configure}, when
-you run @command{$ make check}, first the functions in Gnulib will be
-tested, then the Gnuastro executables. If your operating system does
-not support glibc or has an older version of it and you have problems
-in the build process (@command{$ make}), you can give this flag to
-configure to see if the problem is caused by Gnulib not supporting
-your operating system or Gnuastro, see @ref{Known issues}.
+Enable checks on the GNU Portability Library (Gnulib).
+Gnulib is used by Gnuastro to enable users of non-GNU based operating systems (that don't use the GNU C library, or glibc) to compile and use the advanced features that this library provides.
+We make extensive use of such functions.
+If you give this option to @command{$ ./configure}, when you run @command{$ make check}, first the functions in Gnulib will be tested, then the Gnuastro executables.
+If your operating system does not support glibc or has an older version of it and you have problems in the build process (@command{$ make}), you can give this flag to configure to see whether the problem is caused by Gnulib not supporting your operating system, or by Gnuastro itself; see @ref{Known issues}.
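+
+As a sketch of this diagnostic use (assuming @command{$ make} previously failed):
+
+@example
+$ ./configure --enable-gnulibcheck
+$ make check                      # Gnulib's functions are tested first.
+@end example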
 
 @item --disable-guide-message
 @itemx --enable-guide-message=no
-Do not print a guiding message during the GNU Build process of @ref{Quick
-start}. By default, after each step, a message is printed guiding the user
-what the next command should be. Therefore, after @command{./configure}, it
-will suggest running @command{make}. After @command{make}, it will suggest
-running @command{make check} and so on. If Gnuastro is configured with this
-option, for example
+Do not print a guiding message during the GNU Build process of @ref{Quick 
start}.
+By default, after each step, a message is printed guiding the user on what the next command should be.
+Therefore, after @command{./configure}, it will suggest running @command{make}.
+After @command{make}, it will suggest running @command{make check} and so on.
+If Gnuastro is configured with this option, for example
 @example
 $ ./configure --disable-guide-message
 @end example
-Then these messages will not be printed after any step (like most
-programs). For people who are not yet fully accustomed to this build
-system, these guidelines can be very useful and encouraging. However, if
-you find those messages annoying, use this option.
+Then these messages will not be printed after any step (like most programs).
+For people who are not yet fully accustomed to this build system, these 
guidelines can be very useful and encouraging.
+However, if you find those messages annoying, use this option.
 
 @item --without-libgit2
 @cindex Git
 @pindex libgit2
 @cindex Version control systems
-Build Gnuastro without libgit2 (for including Git commit hashes in output
-files), see @ref{Optional dependencies}. libgit2 is an optional dependency,
-with this option, Gnuastro will ignore any possibly existing libgit2 that
-may already be on the system.
+Build Gnuastro without libgit2 (for including Git commit hashes in output 
files), see @ref{Optional dependencies}.
+libgit2 is an optional dependency; with this option, Gnuastro will ignore any libgit2 that may already be on the system.
 
 @item --without-libjpeg
 @pindex libjpeg
 @cindex JPEG format
-Build Gnuastro without libjpeg (for reading/writing to JPEG files), see
-@ref{Optional dependencies}. libjpeg is an optional dependency, with this
-option, Gnuastro will ignore any possibly existing libjpeg that may already
-be on the system.
+Build Gnuastro without libjpeg (for reading/writing to JPEG files), see 
@ref{Optional dependencies}.
+libjpeg is an optional dependency; with this option, Gnuastro will ignore any libjpeg that may already be on the system.
 
 @item --without-libtiff
 @pindex libtiff
 @cindex TIFF format
-Build Gnuastro without libtiff (for reading/writing to TIFF files), see
-@ref{Optional dependencies}. libtiff is an optional dependency, with this
-option, Gnuastro will ignore any possibly existing libtiff that may already
-be on the system.
+Build Gnuastro without libtiff (for reading/writing to TIFF files), see 
@ref{Optional dependencies}.
+libtiff is an optional dependency; with this option, Gnuastro will ignore any libtiff that may already be on the system.
 
 @end vtable
 
-The tests of some programs might depend on the outputs of the tests of
-other programs. For example MakeProfiles is one the first programs to be
-tested when you run @command{$ make check}. MakeProfiles' test outputs
-(FITS images) are inputs to many other programs (which in turn provide
-inputs for other programs). Therefore, if you don't install MakeProfiles
-for example, the tests for many the other programs will be skipped. To
-avoid this, in one run, you can install all the programs and run the tests
-but not install. If everything is working correctly, you can run configure
-again with only the programs you want. However, don't run the tests and
-directly install after building.
+The tests of some programs might depend on the outputs of the tests of other programs.
+For example MakeProfiles is one of the first programs to be tested when you run @command{$ make check}.
+MakeProfiles' test outputs (FITS images) are inputs to many other programs (which in turn provide inputs for other programs).
+Therefore, if you don't build MakeProfiles for example, the tests for many of the other programs will be skipped.
+To avoid this, in one run you can build all the programs and run the tests, but not install.
+If everything is working correctly, you can run configure again with only the programs you want, then build and install directly without running the tests.
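+
+A sketch of this two-pass approach (installing only Crop in the end is just an illustration):
+
+@example
+$ ./configure && make && make check   # First pass: all programs, tested.
+$ ./configure --enable-crop           # Second pass: only what you want.
+$ make && make install                # Install directly, no tests.
+@end example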
 
 
 
@@ -6384,35 +4830,18 @@ directly install after building.
 @cindex Root access, not possible
 @cindex No access to super-user install
 @cindex Install with no super-user access
-One of the most commonly used options to @file{./configure} is
-@option{--prefix}, it is used to define the directory that will host all
-the installed files (or the ``prefix'' in their final absolute file
-name). For example, when you are using a server and you don't have
-administrator or root access. In this example scenario, if you don't use
-the @option{--prefix} option, you won't be able to install the built files
-and thus access them from anywhere without having to worry about where they
-are installed. However, once you prepare your startup file to look into the
-proper place (as discussed thoroughly below), you will be able to easily
-use this option and benefit from any software you want to install without
-having to ask the system administrators or install and use a different
-version of a software that is already installed on the server.
-
-The most basic way to run an executable is to explicitly write its full
-file name (including all the directory information) and run it. One example
-is running the configuration script with the @command{$ ./configure}
-command (see @ref{Quick start}). By giving a specific directory (the
-current directory or @file{./}), we are explicitly telling the shell to
-look in the current directory for an executable file named
-`@file{configure}'. Directly specifying the directory is thus useful for
-executables in the current (or nearby) directories. However, when the
-program (an executable file) is to be used a lot, specifying all those
-directories will become a significant burden. For example, the @file{ls}
-executable lists the contents in a given directory and it is (usually)
-installed in the @file{/usr/bin/} directory by the operating system
-maintainers. Therefore, if using the full address was the only way to
-access an executable, each time you wanted a listing of a directory, you
-would have to run the following command (which is very inconvenient, both
-in writing and in remembering the various directories).
+One of the most commonly used options to @file{./configure} is @option{--prefix}; it is used to define the directory that will host all the installed files (or the ``prefix'' in their final absolute file name).
+One example scenario is when you are using a server and you don't have administrator or root access.
+In this scenario, if you don't use the @option{--prefix} option, you won't be able to install the built files and thus access them from anywhere without having to worry about where they are installed.
+However, once you prepare your startup file to look into the proper place (as discussed thoroughly below), you will be able to easily use this option and benefit from any software you want to install, without having to ask the system administrators or use a different version of a software package that is already installed on the server.
+
+The most basic way to run an executable is to explicitly write its full file 
name (including all the directory information) and run it.
+One example is running the configuration script with the @command{$ 
./configure} command (see @ref{Quick start}).
+By giving a specific directory (the current directory or @file{./}), we are 
explicitly telling the shell to look in the current directory for an executable 
file named `@file{configure}'.
+Directly specifying the directory is thus useful for executables in the 
current (or nearby) directories.
+However, when the program (an executable file) is to be used a lot, specifying 
all those directories will become a significant burden.
+For example, the @file{ls} executable lists the contents of a given directory, and it is (usually) installed in the @file{/usr/bin/} directory by the operating system maintainers.
+Therefore, if using the full address was the only way to access an executable, 
each time you wanted a listing of a directory, you would have to run the 
following command (which is very inconvenient, both in writing and in 
remembering the various directories).
 
 @example
 $ /usr/bin/ls
@@ -6420,21 +4849,18 @@ $ /usr/bin/ls
 
 @cindex Shell variables
 @cindex Environment variables
-To address this problem, we have the @file{PATH} environment variable. To
-understand it better, we will start with a short introduction to the shell
-variables. Shell variable values are basically treated as strings of
-characters. For example, it doesn't matter if the value is a name (string
-of @emph{alphabetic} characters), or a number (string of @emph{numeric}
-characters), or both. You can define a variable and a value for it by
-running
+To address this problem, we have the @file{PATH} environment variable.
+To understand it better, we will start with a short introduction to shell variables.
+Shell variable values are basically treated as strings of characters.
+For example, it doesn't matter if the value is a name (string of 
@emph{alphabetic} characters), or a number (string of @emph{numeric} 
characters), or both.
+You can define a variable and a value for it by running
 @example
 $ myvariable1=a_test_value
 $ myvariable2="a test value"
 @end example
 @noindent
-As you see above, if the value contains white space characters, you have to
-put the whole value (including white space characters) in double quotes
-(@key{"}). You can see the value it represents by running
+As you see above, if the value contains white space characters, you have to 
put the whole value (including white space characters) in double quotes 
(@key{"}).
+You can see the value it represents by running
 @example
 $ echo $myvariable1
 $ echo $myvariable2
@@ -6442,19 +4868,13 @@ $ echo $myvariable2
 @noindent
 @cindex Environment
 @cindex Environment variables
-If a variable has no value or it wasn't defined, the last command will only
-print an empty line. A variable defined like this will be known as long as
-this shell or terminal is running. Other terminals will have no idea it
-existed.  The main advantage of shell variables is that if they are
-exported@footnote{By running @command{$ export myvariable=a_test_value}
-instead of the simpler case in the text}, subsequent programs that are run
-within that shell can access their value. So by changing their value, you
-can change the ``environment'' of a program which uses them. The shell
-variables which are accessed by programs are therefore known as
-``environment variables''@footnote{You can use shell variables for other
-actions too, for example to temporarily keep some names or run loops on
-some files.}. You can see the full list of exported variables that your
-shell recognizes by running:
+If a variable has no value or it wasn't defined, the last command will only 
print an empty line.
+A variable defined like this will be known as long as this shell or terminal 
is running.
+Other terminals will have no idea it existed.
+The main advantage of shell variables is that if they are exported@footnote{By 
running @command{$ export myvariable=a_test_value} instead of the simpler case 
in the text}, subsequent programs that are run within that shell can access 
their value.
+So by changing their value, you can change the ``environment'' of a program 
which uses them.
+The shell variables which are accessed by programs are therefore known as 
``environment variables''@footnote{You can use shell variables for other 
actions too, for example to temporarily keep some names or run loops on some 
files.}.
+You can see the full list of exported variables that your shell recognizes by 
running:
 
 @example
 $ printenv
@@ -6463,58 +4883,36 @@ $ printenv
 @cindex @file{HOME}
 @cindex @file{HOME/.local/}
 @cindex Environment variable, @code{HOME}
-@file{HOME} is one commonly used environment variable, it is any user's
-(the one that is logged in) top directory. Try finding it in the command
-above. It is used so often that the shell has a special expansion
-(alternative) for it: `@file{~}'. Whenever you see file names starting with
-the tilde sign, it actually represents the value to the @file{HOME}
-environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
+@file{HOME} is one commonly used environment variable; it is the top directory of the user that is currently logged in.
+Try finding it in the command above.
+It is used so often that the shell has a special expansion (alternative) for it: `@file{~}'.
+Whenever you see file names starting with the tilde sign, it actually represents the value of the @file{HOME} environment variable, so @file{~/doc} is the same as @file{$HOME/doc}.
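+
+You can check this expansion yourself, for example for a (hypothetical) user named @file{name}:
+
+@example
+$ echo ~/doc
+/home/name/doc
+@end example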
 
 @vindex PATH
 @pindex ./configure
 @cindex Setting @code{PATH}
 @cindex Default executable search directory
 @cindex Search directory for executables
-Another one of the most commonly used environment variables is @file{PATH},
-it is a list of directories to search for executable names.  Its value is a
-list of directories (separated by a colon, or `@key{:}'). When the address
-of the executable is not explicitly given (like @file{./configure} above),
-the system will look for the executable in the directories specified by
-@file{PATH}. If you have a computer nearby, try running the following
-command to see which directories your system will look into when it is
-searching for executable (binary) files, one example is printed here
-(notice how @file{/usr/bin}, in the @file{ls} example above, is one of the
-directories in @command{PATH}):
+Another one of the most commonly used environment variables is @file{PATH}; it is a list of directories to search for executable names.
+Its value is a list of directories (separated by a colon, or `@key{:}').
+When the address of the executable is not explicitly given (like @file{./configure} above), the system will look for the executable in the directories specified by @file{PATH}.
+If you have a computer nearby, try running the following command to see which directories your system will look into when it is searching for executable (binary) files; one example is printed here (notice how @file{/usr/bin}, in the @file{ls} example above, is one of the directories in @command{PATH}):
 
 @example
 $ echo $PATH
 /usr/local/sbin:/usr/local/bin:/usr/bin
 @end example
 
-By default @file{PATH} usually contains system-wide directories, which are
-readable (but not writable) by all users, like the above example. Therefore
-if you don't have root (or administrator) access, you need to add another
-directory to @file{PATH} which you actually have write access to. The
-standard directory where you can keep installed files (not just
-executables) for your own user is the @file{~/.local/} directory. The names
-of hidden files start with a `@key{.}' (dot), so it will not show up in
-your common command-line listings, or on the graphical user interface. You
-can use any other directory, but this is the most recognized.
-
-The top installation directory will be used to keep all the package's
-components: programs (executables), libraries, include (header) files,
-shared data (like manuals), or configuration files (see @ref{Review of
-library fundamentals} for a thorough introduction to headers and
-linking). So it commonly has some of the following sub-directories for each
-class of installed components respectively: @file{bin/}, @file{lib/},
-@file{include/} @file{man/}, @file{share/}, @file{etc/}. Since the
-@file{PATH} variable is only used for executables, you can add the
-@file{~/.local/bin} directory (which keeps the executables/programs or more
-generally, ``binary'' files) to @file{PATH} with the following command. As
-defined below, first the existing value of @file{PATH} is used, then your
-given directory is added to its end and the combined value is put back in
-@file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was
-added).
+By default @file{PATH} usually contains system-wide directories, which are readable (but not writable) by all users, like the above example.
+Therefore, if you don't have root (or administrator) access, you need to add another directory to @file{PATH} which you actually have write access to.
+The standard directory where you can keep installed files (not just executables) for your own user is the @file{~/.local/} directory.
+The names of hidden files start with a `@key{.}' (dot), so they will not show up in your common command-line listings, or on the graphical user interface.
+You can use any other directory, but this is the most recognized.
+
+The top installation directory will be used to keep all the package's 
components: programs (executables), libraries, include (header) files, shared 
data (like manuals), or configuration files (see @ref{Review of library 
fundamentals} for a thorough introduction to headers and linking).
+So it commonly has some of the following sub-directories for each class of installed components respectively: @file{bin/}, @file{lib/}, @file{include/}, @file{man/}, @file{share/}, @file{etc/}.
+Since the @file{PATH} variable is only used for executables, you can add the 
@file{~/.local/bin} directory (which keeps the executables/programs or more 
generally, ``binary'' files) to @file{PATH} with the following command.
+As defined below, first the existing value of @file{PATH} is used, then your 
given directory is added to its end and the combined value is put back in 
@file{PATH} (run `@command{$ echo $PATH}' afterwards to check if it was added).
 
 @example
 $ PATH=$PATH:~/.local/bin
@@ -6523,44 +4921,32 @@ $ PATH=$PATH:~/.local/bin
 @cindex GNU Bash
 @cindex Startup scripts
 @cindex Scripts, startup
-Any executable that you installed in @file{~/.local/bin} will now be usable
-without having to remember and write its full address. However, as soon as
-you leave/close your current terminal session, this modified @file{PATH}
-variable will be forgotten. Adding the directories which contain
-executables to the @file{PATH} environment variable each time you start a
-terminal is also very inconvenient and prone to errors. Fortunately, there
-are standard `startup files' defined by your shell precisely for this (and
-other) purposes. There is a special startup file for every significant
-starting step:
+Any executable that you installed in @file{~/.local/bin} will now be usable 
without having to remember and write its full address.
+However, as soon as you leave/close your current terminal session, this 
modified @file{PATH} variable will be forgotten.
+Adding the directories which contain executables to the @file{PATH} 
environment variable each time you start a terminal is also very inconvenient 
and prone to errors.
+Fortunately, there are standard `startup files' defined by your shell 
precisely for this (and other) purposes.
+There is a special startup file for every significant starting step:
 
 @table @asis
 
 @cindex GNU Bash
 @item @file{/etc/profile} and everything in @file{/etc/profile.d/}
-These startup scripts are called when your whole system starts (for example
-after you turn on your computer). Therefore you need administrator or root
-privileges to access or modify them.
+These startup scripts are called when your whole system starts (for example 
after you turn on your computer).
+Therefore you need administrator or root privileges to access or modify them.
 
 @item @file{~/.bash_profile}
-If you are using (GNU) Bash as your shell, the commands in this file are
-run, when you log in to your account @emph{through Bash}. Most commonly
-when you login through the virtual console (where there is no graphic user
-interface).
+If you are using (GNU) Bash as your shell, the commands in this file are run when you log in to your account @emph{through Bash}.
+This most commonly happens when you log in through the virtual console (where there is no graphic user interface).
 
 @item @file{~/.bashrc}
-If you are using (GNU) Bash as your shell, the commands here will be run
-each time you start a terminal and are already logged in. For example, when
-you open your terminal emulator in the graphic user interface.
+If you are using (GNU) Bash as your shell, the commands here will be run each 
time you start a terminal and are already logged in.
+For example, when you open your terminal emulator in the graphic user 
interface.
 
 @end table
 
-For security reasons, it is highly recommended to directly type in your
-@file{HOME} directory value by hand in startup files instead of using
-variables. So in the following, let's assume your user name is
-`@file{name}' (so @file{~} may be replaced with @file{/home/name}). To add
-@file{~/.local/bin} to your @file{PATH} automatically on any startup file,
-you have to ``export'' the new value of @command{PATH} in the startup file
-that is most relevant to you by adding this line:
+For security reasons, it is highly recommended to directly type in your 
@file{HOME} directory value by hand in startup files instead of using variables.
+So in the following, let's assume your user name is `@file{name}' (so @file{~} 
may be replaced with @file{/home/name}).
+To add @file{~/.local/bin} to your @file{PATH} automatically on any startup 
file, you have to ``export'' the new value of @command{PATH} in the startup 
file that is most relevant to you by adding this line:
 
 @example
 export PATH=$PATH:/home/name/.local/bin
@@ -6569,20 +4955,9 @@ export PATH=$PATH:/home/name/.local/bin
 @cindex GNU build system
 @cindex Install directory
 @cindex Directory, install
-Now that you know your system will look into @file{~/.local/bin} for
-executables, you can tell Gnuastro's configure script to install everything
-in the top @file{~/.local} directory using the @option{--prefix}
-option. When you subsequently run @command{$ make install}, all the
-install-able files will be put in their respective directory under
-@file{~/.local/} (the executables in @file{~/.local/bin}, the compiled
-library files in @file{~/.local/lib}, the library header files in
-@file{~/.local/include} and so on, to learn more about these different
-files, please see @ref{Review of library fundamentals}). Note that tilde
-(`@key{~}') expansion will not happen if you put a `@key{=}' between
-@option{--prefix} and @file{~/.local}@footnote{If you insist on using
-`@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided
-the @key{=} character here which is optional in GNU-style options, see
-@ref{Options}.
+Now that you know your system will look into @file{~/.local/bin} for 
executables, you can tell Gnuastro's configure script to install everything in 
the top @file{~/.local} directory using the @option{--prefix} option.
+When you subsequently run @command{$ make install}, all the installable files will be put in their respective directory under @file{~/.local/} (the executables in @file{~/.local/bin}, the compiled library files in @file{~/.local/lib}, the library header files in @file{~/.local/include} and so on; to learn more about these different files, please see @ref{Review of library fundamentals}).
+Note that tilde (`@key{~}') expansion will not happen if you put a `@key{=}' between @option{--prefix} and @file{~/.local}@footnote{If you insist on using `@key{=}', you can use @option{--prefix=$HOME/.local}.}, so we have avoided the @key{=} character here (it is optional in GNU-style options, see @ref{Options}).
 
 @example
 $ ./configure --prefix ~/.local
@@ -6593,17 +4968,12 @@ $ ./configure --prefix ~/.local
 @cindex @file{LD_LIBRARY_PATH}
 @cindex Library search directory
 @cindex Default library search directory
-You can install everything (including libraries like GSL, CFITSIO, or
-WCSLIB which are Gnuastro's mandatory dependencies, see @ref{Mandatory
-dependencies}) locally by configuring them as above. However, recall that
-@command{PATH} is only for executable files, not libraries and that
-libraries can also depend on other libraries. For example WCSLIB depends on
-CFITSIO and Gnuastro needs both. Therefore, when you installed a library in
-a non-recognized directory, you have to guide the program that depends on
-them to look into the necessary library and header file directories. To do
-that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS}
-environment variables respectively. This can be done while calling
-@file{./configure} as shown below:
+You can install everything (including libraries like GSL, CFITSIO, or WCSLIB 
which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies}) 
locally by configuring them as above.
+However, recall that @command{PATH} is only for executable files, not libraries, and that libraries can also depend on other libraries.
+For example WCSLIB depends on CFITSIO and Gnuastro needs both.
+Therefore, when you have installed a library in a non-recognized directory, you have to guide the programs that depend on it to look into the necessary library and header file directories.
+To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS} 
environment variables respectively.
+This can be done while calling @file{./configure} as shown below:
 
 @example
 $ ./configure LDFLAGS=-L/home/name/.local/lib            \
@@ -6611,19 +4981,10 @@ $ ./configure LDFLAGS=-L/home/name/.local/lib            \
               --prefix ~/.local
 @end example
 
-It can be annoying/buggy to do this when configuring every software that
-depends on such libraries. Hence, you can define these two variables in the
-most relevant startup file (discussed above). The convention on using these
-variables doesn't include a colon to separate values (as
-@command{PATH}-like variables do), they use white space characters and each
-value is prefixed with a compiler option@footnote{These variables are
-ultimately used as options while building the programs, so every value has
-be an option name followed be a value as discussed in @ref{Options}.}: note
-the @option{-L} and @option{-I} above (see @ref{Options}), for @option{-I}
-see @ref{Headers}, and for @option{-L}, see @ref{Linking}. Therefore we
-have to keep the value in double quotation signs to keep the white space
-characters and adding the following two lines to the startup file of
-choice:
+It can be annoying/buggy to do this when configuring every software package that depends on such libraries.
+Hence, you can define these two variables in the most relevant startup file (discussed above).
+The convention on using these variables doesn't include a colon to separate values (as @command{PATH}-like variables do); they use white space characters, and each value is prefixed with a compiler option@footnote{These variables are ultimately used as options while building the programs, so every value has to be an option name followed by a value, as discussed in @ref{Options}.}: note the @option{-L} and @option{-I} above (see @ref{Options}); for @option{-I} see @ref{Headers}, and for @option{-L}, see @ref{Linking}.
+Therefore we have to keep the value in double quotation signs to preserve the white space characters, and add the following two lines to the startup file of choice:
 
 @example
 export LDFLAGS="$LDFLAGS -L/home/name/.local/lib"
@@ -6631,75 +4992,40 @@ export CPPFLAGS="$CPPFLAGS -I/home/name/.local/include"
 @end example
 
 @cindex Dynamic libraries
-Dynamic libraries are linked to the executable every time you run a program
-that depends on them (see @ref{Linking} to fully understand this important
-concept). Hence dynamic libraries also require a special path variable
-called @command{LD_LIBRARY_PATH} (same formatting as @command{PATH}). To
-use programs that depend on these libraries, you need to add
-@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable
-by adding the following line to the relevant start-up file:
+Dynamic libraries are linked to the executable every time you run a program 
that depends on them (see @ref{Linking} to fully understand this important 
concept).
+Hence dynamic libraries also require a special path variable called 
@command{LD_LIBRARY_PATH} (same formatting as @command{PATH}).
+To use programs that depend on these libraries, you need to add 
@file{~/.local/lib} to your @command{LD_LIBRARY_PATH} environment variable by 
adding the following line to the relevant start-up file:
 
 @example
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
 @end example
 
-If you also want to access the Info (see @ref{Info}) and man pages (see
-@ref{Man pages}) documentations add @file{~/.local/share/info} and
-@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the
-following convention: ``If the value of @command{INFOPATH} ends with a
-colon [or it isn't defined] ..., the initial list of directories is
-constructed by appending the build-time default to the value of
-@command{INFOPATH}.'' So when installing in a non-standard directory and if
-@command{INFOPATH} was not initially defined, add a colon to the end of
-@command{INFOPATH} as shown below, otherwise Info will not be able to find
-system-wide installed documentation:@*@command{echo 'export
-INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that
-this is only an internal convention of Info, do not use it for other
-@command{*PATH} variables.} and @command{MANPATH} environment variables
-respectively.
+If you also want to access the Info (see @ref{Info}) and man pages (see @ref{Man pages}) documentation, add @file{~/.local/share/info} and @file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the following convention: ``If the value of @command{INFOPATH} ends with a colon [or it isn't defined] ..., the initial list of directories is constructed by appending the build-time default to the value of @command{INFOPATH}.'' So when installing in a non-standard directory and if @command{INFOPATH} was not initially defined, add a colon to the end of @command{INFOPATH} as shown below, otherwise Info will not be able to find system-wide installed documentation:@*@command{echo 'export INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> ~/.bashrc}@* Note that this is only an internal convention of Info, do not use it for other @command{*PATH} variables.} and @command{MANPATH} environment variables respectively.
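+
+For example, with the same @file{/home/name} home directory as above, lines like these in your startup file would do the job (note the trailing colon for @command{INFOPATH}, following the footnote's convention; the @command{MANPATH} line is an analogous sketch):
+
+@example
+export INFOPATH=$INFOPATH:/home/name/.local/share/info:
+export MANPATH=$MANPATH:/home/name/.local/share/man
+@end example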
 
 @cindex Search directory order
 @cindex Order in search directory
-A final note is that order matters in the directories that are searched for
-all the variables discussed above. In the examples above, the new directory
-was added after the system specified directories. So if the program,
-library or manuals are found in the system wide directories, the user
-directory is no longer searched. If you want to search your local
-installation first, put the new directory before the already existing list,
-like the example below.
+A final note is that order matters in the directories that are searched for 
all the variables discussed above.
+In the examples above, the new directory was added after the system-specified directories.
+So if the program, library or manuals are found in the system-wide directories, the user directory is no longer searched.
+If you want to search your local installation first, put the new directory before the already existing list, like the example below.
 
 @example
 export LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
 @end example
 
 @noindent
-This is good when a library, for example CFITSIO, is already present on the
-system, but the system-wide install wasn't configured with the correct
-configuration flags (see @ref{CFITSIO}), or you want to use a newer version
-and you don't have administrator or root access to update it on the whole
-system/server. If you update @file{LD_LIBRARY_PATH} by placing
-@file{~/.local/lib} first (like above), the linker will first find the
-CFITSIO you installed for yourself and link with it. It thus will never
-reach the system-wide installation.
+This is good when a library, for example CFITSIO, is already present on the 
system, but the system-wide install wasn't configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you don't have administrator or root access to update it on the whole 
system/server.
+If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first 
(like above), the linker will first find the CFITSIO you installed for yourself 
and link with it.
+It thus will never reach the system-wide installation.
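+
+If you want to confirm which CFITSIO an executable is actually linked with, a tool like @command{ldd} can help; for example (a hypothetical check on a GNU/Linux system):
+
+@example
+$ ldd ~/.local/bin/astcrop | grep cfitsio
+@end example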
 
-There are important security problems with using local installations first:
-all important system-wide executables and libraries (important executables
-like @command{ls} and @command{cp}, or libraries like the C library) can be
-replaced by non-secure versions with the same file names and put in the
-customized directory (@file{~/.local} in this example). So if you choose to
-search in your customized directory first, please @emph{be sure} to keep it
-clean from executables or libraries with the same names as important system
-programs or libraries.
+There are important security problems with using local installations first: 
all important system-wide executables and libraries (important executables like 
@command{ls} and @command{cp}, or libraries like the C library) can be replaced 
by non-secure versions with the same file names and put in the customized 
directory (@file{~/.local} in this example).
+So if you choose to search in your customized directory first, please @emph{be 
sure} to keep it clean from executables or libraries with the same names as 
important system programs or libraries.
 
 @cartouche
 @noindent
-@strong{Summary:} When you are using a server which doesn't give you
-administrator/root access AND you would like to give priority to your own
-built programs and libraries, not the version that is (possibly already)
-present on the server, add these lines to your startup file. See above for
-which startup file is best for your case and for a detailed explanation on
-each. Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home
-directory (for example `@file{/home/your-id}'):
+@strong{Summary:} When you are using a server which doesn't give you 
administrator/root access AND you would like to give priority to your own built 
programs and libraries, not the version that is (possibly already) present on 
the server, add these lines to your startup file.
+See above for which startup file is best for your case and for a detailed 
explanation on each.
+Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for 
example `@file{/home/your-id}'):
 
 @example
 export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
@@ -6711,10 +5037,8 @@ export LD_LIBRARY_PATH="/YOUR-HOME-DIR/.local/lib:$LD_LIBRARY_PATH"
 @end example
 
 @noindent
-Afterwards, you just need to add an extra
-@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command
-of the software that you intend to install. Everything else will be the
-same as a standard build and install, see @ref{Quick start}.
+Afterwards, you just need to add an extra 
@option{--prefix=/YOUR-HOME-DIR/.local} to the @file{./configure} command of 
the software that you intend to install.
+Everything else will be the same as a standard build and install, see 
@ref{Quick start}.
 @end cartouche
 
@node Executable names, Configure and build in RAM, Installation directory, Configuring
@@ -6722,52 +5046,38 @@ same as a standard build and install, see @ref{Quick start}.
 
 @cindex Executable names
 @cindex Names of executables
-At first sight, the names of the executables for each program might seem to
-be uncommonly long, for example @command{astnoisechisel} or
-@command{astcrop}. We could have chosen terse (and cryptic) names like most
-programs do. We chose this complete naming convention (something like the
-commands in @TeX{}) so you don't have to spend too much time remembering
-what the name of a specific program was. Such complete names also enable
-you to easily search for the programs.
+At first sight, the names of the executables for each program might seem to be 
uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
+We could have chosen terse (and cryptic) names like most programs do.
+We chose this complete naming convention (something like the commands in 
@TeX{}) so you don't have to spend too much time remembering what the name of a 
specific program was.
+Such complete names also enable you to easily search for the programs.
 
 @cindex Shell auto-complete
 @cindex Auto-complete in the shell
-To facilitate typing the names in, we suggest using the shell
-auto-complete. With this facility you can find the executable you want
-very easily. It is very similar to file name completion in the
-shell. For example, simply by typing the letters below (where
-@key{[TAB]} stands for the Tab key on your keyboard)
+To facilitate typing the names in, we suggest using the shell auto-complete.
+With this facility you can find the executable you want very easily.
+It is very similar to file name completion in the shell.
+For example, simply by typing the letters below (where @key{[TAB]} stands for 
the Tab key on your keyboard)
 
 @example
 $ ast[TAB][TAB]
 @end example
 
 @noindent
-you will get the list of all the available executables that start with
-@command{ast} in your @command{PATH} environment variable
-directories. So, all the Gnuastro executables installed on your system
-will be listed. Typing the next letter for the specific program you
-want along with a Tab, will limit this list until you get to your
-desired program.
+you will get the list of all the available executables that start with 
@command{ast} in your @command{PATH} environment variable directories.
+So, all the Gnuastro executables installed on your system will be listed.
+Typing the next letter of the program you want, followed by a Tab, will narrow 
this list down until you reach your desired program.
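+For example, with the letters below, most shells will complete the name to 
@command{astcrop} (assuming no other executable in your @command{PATH} starts 
with @command{astcr}):
+
+@example
+$ astcr[TAB]
+@end example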
 
 @cindex Names, customize
 @cindex Customize executable names
-In case all of this does not convince you and you still want to type
-short names, some suggestions are given below. You should have in mind
-though, that if you are writing a shell script that you might want to
-pass on to others, it is best to use the standard name because other
-users might not have adopted the same customization. The long names
-also serve as a form of documentation in such scripts. A similar
-reasoning can be given for option names in scripts: it is good
-practice to always use the long formats of the options in shell
-scripts, see @ref{Options}.
+In case all of this does not convince you and you still want to type short 
names, some suggestions are given below.
+Keep in mind, though, that if you are writing a shell script that you might 
want to pass on to others, it is best to use the standard name, because other 
users might not have adopted the same customization.
+The long names also serve as a form of documentation in such scripts.
+A similar reasoning can be given for option names in scripts: it is good 
practice to always use the long formats of the options in shell scripts, see 
@ref{Options}.
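+For example, both of the hypothetical script lines below request the same crop 
width, but the second (using @option{--width}, the long format of @option{-w} 
in Crop) also documents itself:
+
+@example
+astcrop -w10/3600 image.fits
+astcrop --width=10/3600 image.fits
+@end example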
 
 @cindex Symbolic link
-The simplest solution is making a symbolic link to the actual
-executable. For example let's assume you want to type @file{ic} to run Crop
-instead of @file{astcrop}. Assuming you installed Gnuastro executables in
-@file{/usr/local/bin} (default) you can do this simply by running the
-following command as root:
+The simplest solution is making a symbolic link to the actual executable.
+For example, let's assume you want to type @file{ic} to run Crop instead of 
@file{astcrop}.
+Assuming you installed Gnuastro executables in @file{/usr/local/bin} (default) 
you can do this simply by running the following command as root:
 
 @example
 # ln -s /usr/local/bin/astcrop /usr/local/bin/ic
@@ -6781,32 +5091,21 @@ works.
 @vindex --program-prefix
 @vindex --program-suffix
 @vindex --program-transform-name
-The installed executable names can also be set using options to
-@command{$ ./configure}, see @ref{Configuring}. GNU Autoconf (which
-configures Gnuastro for your particular system), allows the builder
-to change the name of programs with the three options
-@option{--program-prefix}, @option{--program-suffix} and
-@option{--program-transform-name}. The first two are for adding a
-fixed prefix or suffix to all the programs that will be installed.
-This will actually make all the names longer!  You can use it to add
-versions of program names to the programs in order to simultaneously
-have two executable versions of a program.
+The installed executable names can also be set using options to @command{$ 
./configure}, see @ref{Configuring}.
+GNU Autoconf (which configures Gnuastro for your particular system) allows 
the builder to change the names of the programs with three options: 
@option{--program-prefix}, @option{--program-suffix} and 
@option{--program-transform-name}.
+The first two are for adding a fixed prefix or suffix to all the programs that 
will be installed.
+This will actually make all the names longer!  You can use them, for example, 
to append version numbers to the program names, so that two executable versions 
of a program can be installed simultaneously.
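+As a minimal sketch, the hypothetical configuration below would install (for 
example) @command{astcrop-new} instead of @command{astcrop}:
+
+@example
+$ ./configure --program-suffix=-new
+@end example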
 
 @cindex SED, stream editor
 @cindex Stream editor, SED
-The third configure option allows you to set the executable name at install
-time using the SED program. SED is a very useful `stream editor'. There are
-various resources on the internet to use it effectively. However, we should
-caution that using configure options will change the actual executable name
-of the installed program and on every re-install (an update for example),
-you have to also add this option to keep the old executable name
-updated. Also note that the documentation or configuration files do not
-change from their standard names either.
+The third configure option allows you to set the executable name at install 
time using the SED program.
+SED is a very useful `stream editor'.
+There are various resources on the internet on how to use it effectively.
+However, we should caution that using these configure options will change the 
actual executable name of the installed program; on every re-install (an 
update, for example), you have to add this option again to keep the old 
executable name.
+Also note that the documentation or configuration files do not change from 
their standard names either.
 
 @cindex Removing @file{ast} from executables
-For example, let's assume that typing @file{ast} on every invocation
-of every program is really annoying you! You can remove this prefix
-from all the executables at configure time by adding this option:
+For example, let's assume that typing @file{ast} on every invocation of every 
program is really annoying you! You can remove this prefix from all the 
executables at configure time by adding this option:
 
 @example
 $ ./configure --program-transform-name='s/^ast//'
@@ -6819,11 +5118,9 @@ $ ./configure --program-transform-name='s/ast/ /'
 
 @cindex File I/O
 @cindex Input/Output, file
-Gnuastro's configure and build process (the GNU build system) involves the
-creation, reading, and modification of a large number of files
-(input/output, or I/O). Therefore file I/O issues can directly affect the
-work of developers who need to configure and build Gnuastro numerous
-times. Some of these issues are listed below:
+Gnuastro's configure and build process (the GNU build system) involves the 
creation, reading, and modification of a large number of files (input/output, 
or I/O).
+Therefore file I/O issues can directly affect the work of developers who need 
to configure and build Gnuastro numerous times.
+Some of these issues are listed below:
 
 @itemize
 @cindex HDD
@@ -6834,37 +5131,26 @@ SSDs (decreasing the lifetime).
 
 @cindex Backup
 @item
-Having the built files mixed with the source files can greatly affect
-backing up (synchronization) of source files (since it involves the
-management of a large number of small files that are regularly
-changed. Backup software can of course be configured to ignore the built
-files and directories. However, since the built files are mixed with the
-source files and can have a large variety, this will require a high level
-of customization.
+Having the built files mixed with the source files can greatly affect backing 
up (synchronization) of the source files (since it involves the management of a 
large number of small files that are regularly changed).
+Backup software can of course be configured to ignore the built files and 
directories.
+However, since the built files are mixed with the source files and can have a 
large variety, this will require a high level of customization.
 @end itemize
 
 @cindex tmpfs file system
 @cindex file systems, tmpfs
-One solution to address both these problems is to use the
-@url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}. Any file in
-tmpfs is actually stored in the RAM (and possibly SAWP), not on HDDs or
-SSDs. The RAM is built for extensive and fast I/O. Therefore the large
-number of file I/Os associated with configuring and building will not harm
-the HDDs or SSDs. Due to the volatile nature of RAM, files in the tmpfs
-file-system will be permanently lost after a power-off. Since all configured
-and built files are derivative files (not files that have been directly
-written by hand) there is no problem in this and this feature can be
-considered as an automatic cleanup.
+One solution to address both these problems is to use the 
@url{https://en.wikipedia.org/wiki/Tmpfs, tmpfs file system}.
+Any file in tmpfs is actually stored in the RAM (and possibly swap), not on 
HDDs or SSDs.
+The RAM is built for extensive and fast I/O.
+Therefore the large number of file I/Os associated with configuring and 
building will not harm the HDDs or SSDs.
+Due to the volatile nature of RAM, files in the tmpfs file-system will be 
permanently lost after a power-off.
+Since all configured and built files are derivative files (not files that have 
been directly written by hand), this is not a problem, and the loss can even be 
considered an automatic cleanup.
 
 @cindex Linux kernel
 @cindex GNU C library
 @cindex GNU build system
-The modern GNU C library (and thus the Linux kernel) defines the
-@file{/dev/shm} directory for this purpose in the RAM (POSIX shared
-memory). To build in it, you can use the GNU build system's ability to
-build in a separate directory (not necessarily in the source directory) as
-shown below. Just set @file{SRCDIR} as the address of Gnuastro's top source
-directory (for example, the unpacked tarball).
+The modern GNU C library (and thus the Linux kernel) defines the 
@file{/dev/shm} directory for this purpose in the RAM (POSIX shared memory).
+To build in it, you can use the GNU build system's ability to build in a 
separate directory (not necessarily in the source directory) as shown below.
+Just set @file{SRCDIR} as the address of Gnuastro's top source directory (for 
example, the unpacked tarball).
 
 @example
 $ mkdir /dev/shm/tmp-gnuastro-build
@@ -6873,144 +5159,111 @@ $ SRCDIR/configure --srcdir=SRCDIR
 $ make
 @end example
 
-Gnuastro comes with a script to simplify this process of configuring and
-building in a different directory (a ``clean'' build), for more see
-@ref{Separate build and source directories}.
+Gnuastro comes with a script to simplify this process of configuring and 
building in a different directory (a ``clean'' build), for more see 
@ref{Separate build and source directories}.
 
 @node Separate build and source directories, Tests, Configuring, Build and 
install
 @subsection Separate build and source directories
 
 The simple steps of @ref{Quick start} will mix the source and built files.
-This can cause inconvenience for developers or enthusiasts following the
-the most recent work (see @ref{Version controlled source}). The current
-section is mainly focused on this later group of Gnuastro users. If you
-just install Gnuastro on major releases (following @ref{Announcements}),
-you can safely ignore this section.
+This can cause inconvenience for developers or enthusiasts following the 
most recent work (see @ref{Version controlled source}).
+The current section is mainly focused on this latter group of Gnuastro users.
+If you just install Gnuastro on major releases (following 
@ref{Announcements}), you can safely ignore this section.
 
 @cindex GNU build system
-When it is necessary to keep the source (which is under version control),
-but not the derivative (built) files (after checking or installing), the
-best solution is to keep the source and the built files in separate
-directories. One application of this is already discussed in @ref{Configure
-and build in RAM}.
-
-To facilitate this process of configuring and building in a separate
-directory, Gnuastro comes with the @file{developer-build} script. It is
-available in the top source directory and is @emph{not} installed. It will
-make a directory under a given top-level directory (given to
-@option{--top-build-dir}) and build Gnuastro in there directory. It thus
-keeps the source completely separated from the built files. For easy access
-to the built files, it also makes a symbolic link to the built directory in
-the top source files called @file{build}.
-
-When run without any options, default values will be used for its
-configuration. As with Gnuastro's programs, you can inspect the default
-values with @option{-P} (or @option{--printparams}, the output just looks a
-little different here). The default top-level build directory is
-@file{/dev/shm}: the shared memory directory in RAM on GNU/Linux systems as
-described in @ref{Configure and build in RAM}.
+When it is necessary to keep the source (which is under version control), but 
not the derivative (built) files (after checking or installing), the best 
solution is to keep the source and the built files in separate directories.
+One application of this is already discussed in @ref{Configure and build in 
RAM}.
+
+To facilitate this process of configuring and building in a separate 
directory, Gnuastro comes with the @file{developer-build} script.
+It is available in the top source directory and is @emph{not} installed.
+It will make a directory under a given top-level directory (given to 
@option{--top-build-dir}) and build Gnuastro in that directory.
+It thus keeps the source completely separated from the built files.
+For easy access to the built files, it also makes a symbolic link, called 
@file{build}, to the build directory in the top source directory.
+
+When run without any options, default values will be used for its 
configuration.
+As with Gnuastro's programs, you can inspect the default values with 
@option{-P} (or @option{--printparams}, the output just looks a little 
different here).
+The default top-level build directory is @file{/dev/shm}: the shared memory 
directory in RAM on GNU/Linux systems as described in @ref{Configure and build 
in RAM}.
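+For example, running the command below from the top source directory is a 
quick way to see those defaults before doing a real build:
+
+@example
+$ ./developer-build -P
+@end example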
 
 @cindex Debug
-Besides these, it also has some features to facilitate the job of
-developers or bleeding edge users like the @option{--debug} option to do a
-fast build, with debug information, no optimization, and no shared
-libraries. Here is the full list of options you can feed to this script to
-configure its operations.
+Besides these, it also has some features to facilitate the job of developers 
or bleeding-edge users, like the @option{--debug} option to do a fast build, 
with debug information, no optimization, and no shared libraries.
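+As a sketch of a typical development cycle (the number of jobs is only 
illustrative), the call below re-runs Autoconf, cleans the build directory and 
does a fast debugging build on 8 threads:
+
+@example
+$ ./developer-build -a -c -d -j 8
+@end example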
+Here is the full list of options you can feed to this script to configure its 
operations.
 
 @cartouche
 @noindent
 @strong{Not all Gnuastro's common program behavior usable here:}
-@file{developer-build} is just a non-installed script with a very limited
-scope as described above. It thus doesn't have all the common option
-behaviors or configuration files for example.
+@file{developer-build} is just a non-installed script with a very limited 
scope as described above.
+It thus doesn't have all the common option behaviors or configuration files, 
for example.
 @end cartouche
 
 @cartouche
 @noindent
 @strong{White space between option and value:} @file{developer-build}
-doesn't accept an @key{=} sign between the options and their values. It
-also needs at least one character between the option and its
-value. Therefore @option{-n 4} or @option{--numthreads 4} are acceptable,
-while @option{-n4}, @option{-n=4}, or @option{--numthreads=4}
-aren't. Finally multiple short option names cannot be merged: for example
-you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4}
-is not acceptable.
+doesn't accept an @key{=} sign between the options and their values.
+It also needs at least one character between the option and its value.
+Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while 
@option{-n4}, @option{-n=4}, or @option{--numthreads=4} aren't.
+Finally, multiple short option names cannot be merged: for example, you can 
say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not 
acceptable.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Reusable for other packages:} This script can be used in any
-software which is configured and built using the GNU Build System. Just
-copy it in the top source directory of that software and run it from there.
+@strong{Reusable for other packages:} This script can be used in any software 
which is configured and built using the GNU Build System.
+Just copy it to the top source directory of that software and run it from 
there.
 @end cartouche
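+As a minimal sketch of such a reuse (the other package's name is 
hypothetical):
+
+@example
+$ cp gnuastro/developer-build other-package/
+$ cd other-package
+$ ./developer-build
+@end example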
 
 @table @option
 
 @item -b STR
 @itemx --top-build-dir STR
-The top build directory to make a directory for the build. If this option
-isn't called, the top build directory is @file{/dev/shm} (only available in
-GNU/Linux operating systems, see @ref{Configure and build in RAM}).
+The top build directory, in which a directory for this build will be made.
+If this option isn't called, the top build directory is @file{/dev/shm} (only 
available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
 
 @item -V
 @itemx --version
-Print the version string of Gnuastro that will be used in the build. This
-string will be appended to the directory name containing the built files.
+Print the version string of Gnuastro that will be used in the build.
+This string will be appended to the directory name containing the built files.
 
 @item -a
 @itemx --autoreconf
-Run @command{autoreconf -f} before building the package. In Gnuastro, this
-is necessary when a new commit has been made to the project history. In
-Gnuastro's build system, the Git description will be used as the version,
-see @ref{Version numbering} and @ref{Synchronizing}.
+Run @command{autoreconf -f} before building the package.
+In Gnuastro, this is necessary when a new commit has been made to the project 
history.
+In Gnuastro's build system, the Git description will be used as the version, 
see @ref{Version numbering} and @ref{Synchronizing}.
 
 @item -c
 @itemx --clean
 @cindex GNU Autoreconf
-Delete the contents of the build directory (clean it) before starting the
-configuration and building of this run.
+Delete the contents of the build directory (clean it) before starting the 
configuration and building of this run.
 
-This is useful when you have recently pulled changes from the main Git
-repository, or committed a change your self and ran @command{autoreconf -f},
-see @ref{Synchronizing}. After running GNU Autoconf, the version will be
-updated and you need to do a clean build.
+This is useful when you have recently pulled changes from the main Git 
repository, or committed a change yourself and ran @command{autoreconf -f}, 
see @ref{Synchronizing}.
+After running GNU Autoconf, the version will be updated and you need to do a 
clean build.
 
 @item -d
 @itemx --debug
 @cindex Valgrind
 @cindex GNU Debugger (GDB)
-Build with debugging flags (for example to use in GNU Debugger, also known
-as GDB, or Valgrind), disable optimization and also the building of shared
-libraries. Similar to running the configure script of below
+Build with debugging flags (for example to use in GNU Debugger, also known as 
GDB, or Valgrind), disable optimization and also the building of shared 
libraries.
+This is similar to running the configure script below:
 
 @example
 $ ./configure --enable-debug
 @end example
 
-Besides all the debugging advantages of building with this option, it will
-also be significantly speed up the build (at the cost of slower built
-programs). So when you are testing something small or working on the build
-system itself, it will be much faster to test your work with this option.
+Besides all the debugging advantages of building with this option, it will 
also significantly speed up the build (at the cost of slower built programs).
+So when you are testing something small or working on the build system itself, 
it will be much faster to test your work with this option.
 
 @item -v
 @itemx --valgrind
 @cindex Valgrind
-Build all @command{make check} tests within Valgrind. For more, see the
-description of @option{--enable-check-with-valgrind} in @ref{Gnuastro
-configure options}.
+Build all @command{make check} tests within Valgrind.
+For more, see the description of @option{--enable-check-with-valgrind} in 
@ref{Gnuastro configure options}.
 
 @item -j INT
 @itemx --jobs INT
-The maximum number of threads/jobs for Make to build at any moment. As the
-name suggests (Make has an identical option), the number given to this
-option is directly passed on to any call of Make with its @option{-j}
-option.
+The maximum number of threads/jobs for Make to build at any moment.
+As the name suggests (Make has an identical option), the number given to this 
option is directly passed on to any call of Make with its @option{-j} option.
 
 @item -C
 @itemx --check
-After finishing the build, also run @command{make check}. By default,
-@command{make check} isn't run because the developer usually has their own
-checks to work on (for example defined in @file{tests/during-dev.sh}).
+After finishing the build, also run @command{make check}.
+By default, @command{make check} isn't run because the developer usually has 
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
 
 @item -i
 @itemx --install
@@ -7018,44 +5271,32 @@ After finishing the build, also run @command{make 
install}.
 
 @item -D
 @itemx --dist
-Run @code{make dist-lzip pdf} to build a distribution tarball (in
-@file{.tar.lz} format) and a PDF manual. This can be useful for archiving,
-or sending to colleagues who don't use Git for an easy build and manual.
+Run @code{make dist-lzip pdf} to build a distribution tarball (in 
@file{.tar.lz} format) and a PDF manual.
+This can be useful for archiving, or sending to colleagues who don't use Git 
for an easy build and manual.
 
 @item -u STR
 @itemx --upload STR
-Activate the @option{--dist} (@option{-D}) option, then use secure copy
-(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the
-@file{src} and @file{pdf} sub-directories of the specified server and its
-directory (value to this option). For example @command{--upload
-my-server:dir}, will copy the tarball in the @file{dir/src}, and the PDF
-manual in @file{dir/pdf} of @code{my-server} server. It will then make a
-symbolic link in the top server directory to the tarball that is called
-@file{gnuastro-latest.tar.lz}.
+Activate the @option{--dist} (@option{-D}) option, then use secure copy 
(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the 
@file{src} and @file{pdf} sub-directories of the specified server and its 
directory (value to this option).
+For example, @command{--upload my-server:dir} will copy the tarball into 
@file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} 
server.
+It will then make a symbolic link, called @file{gnuastro-latest.tar.lz}, in 
the top server directory, pointing to that tarball.
 
 @item -p
 @itemx --publish
-Short for @option{--autoreconf --clean --debug --check --upload
-STR}. @option{--debug} is added because it will greatly speed up the
-build. It will have no effect on the produced tarball. This is good when
-you have made a commit and are ready to publish it on your server (if
-nothing crashes). Recall that if any of the previous steps fail the script
-aborts.
+Short for @option{--autoreconf --clean --debug --check --upload STR}.
+@option{--debug} is added because it will greatly speed up the build.
+It will have no effect on the produced tarball.
+This is good when you have made a commit and are ready to publish it on your 
server (if nothing crashes).
+Recall that if any of the previous steps fail the script aborts.
 
 @item -I
 @itemx --install-archive
-Short for @option{--autoreconf --clean --check --install --dist}. This is
-useful when you actually want to install the commit you just made (if the
-build and checks succeed). It will also produce a distribution tarball and
-PDF manual for easy access to the installed tarball on your system at a
-later time.
-
-Ideally, Gnuastro's Git version history makes it easy for a prepared system
-to revert back to a different point in history. But Gnuastro also needs to
-bootstrap files and also your collaborators might (usually do!) find it too
-much of a burden to do the bootstrapping themselves. So it is convenient to
-have a tarball and PDF manual of the version you have installed (and are
-using in your research) handily available.
+Short for @option{--autoreconf --clean --check --install --dist}.
+This is useful when you actually want to install the commit you just made (if 
the build and checks succeed).
+It will also produce a distribution tarball and PDF manual for easy access to 
the installed tarball on your system at a later time.
+
+Ideally, Gnuastro's Git version history makes it easy for a prepared system to 
revert to a different point in history.
+But Gnuastro also needs bootstrapped files, and your collaborators might (and 
usually do!) find it too much of a burden to do the bootstrapping themselves.
+So it is convenient to have a tarball and PDF manual of the version you have 
installed (and are using in your research) handily available.
 
 @item -h
 @itemx --help
@@ -7075,35 +5316,26 @@ current values.
 @cindex @file{mock.fits}
 @cindex Tests, running
 @cindex Checking tests
-After successfully building (compiling) the programs with the @command{$
-make} command you can check the installation before installing. To run the
-tests, run
+After successfully building (compiling) the programs with the @command{$ make} 
command, you can check them before installing.
+To run the tests, run
 
 @example
 $ make check
 @end example
 
-For every program some tests are designed to check some possible
-operations. Running the command above will run those tests and give
-you a final report. If everything is OK and you have built all the
-programs, all the tests should pass. In case any of the tests fail,
-please have a look at @ref{Known issues} and if that still doesn't fix
-your problem, look that the @file{./tests/test-suite.log} file to see
-if the source of the error is something particular to your system or
-more general. If you feel it is general, please contact us because it
-might be a bug. Note that the tests of some programs depend on the
-outputs of other program's tests, so if you have not installed them
-they might be skipped or fail. Prior to releasing every distribution
-all these tests are checked. If you have a reasonably modern terminal,
-the outputs of the successful tests will be colored green and the
-failed ones will be colored red.
+For every program, some tests are designed to check some of its possible 
operations.
+Running the command above will run those tests and give you a final report.
+If everything is OK and you have built all the programs, all the tests should 
pass.
+In case any of the tests fail, please have a look at @ref{Known issues}; if 
that still doesn't fix your problem, look at the @file{./tests/test-suite.log} 
file to see if the source of the error is something particular to your system 
or more general.
+If you feel it is general, please contact us because it might be a bug.
+Note that the tests of some programs depend on the outputs of other programs' 
tests, so if you have not installed them they might be skipped or fail.
+Prior to releasing every distribution all these tests are checked.
+If you have a reasonably modern terminal, the outputs of the successful tests 
will be colored green and the failed ones will be colored red.
 
-These scripts can also act as a good set of examples for you to see how the
-programs are run. All the tests are in the @file{tests/} directory. The
-tests for each program are shell scripts (ending with @file{.sh}) in a
-sub-directory of this directory with the same name as the program. See
-@ref{Test scripts} for more detailed information about these scripts in case
-you want to inspect them.
+These scripts can also act as a good set of examples for you to see how the 
programs are run.
+All the tests are in the @file{tests/} directory.
+The tests for each program are shell scripts (ending with @file{.sh}) in a 
sub-directory of this directory with the same name as the program.
+See @ref{Test scripts} for more detailed information about these scripts in 
case you want to inspect them.
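+For example, assuming Crop's tests are under @file{tests/crop} (following the 
convention above), you could list them with:
+
+@example
+$ ls tests/crop/*.sh
+@end example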
 
 
 
@@ -7117,46 +5349,40 @@ you want to inspect them.
 @cindex US letter paper size
 @cindex Paper size, A4
 @cindex Paper size, US letter
-The default print version of this book is provided in the letter paper
-size. If you would like to have the print version of this book on
-paper and you are living in a country which uses A4, then you can
-rebuild the book. The great thing about the GNU build system is that
-the book source code which is in Texinfo is also distributed with
-the program source code, enabling you to do such customization
-(hacking).
+The default print version of this book is provided in the letter paper size.
+If you would like to have the print version of this book on paper and you are 
living in a country which uses A4, then you can rebuild the book.
+The great thing about the GNU build system is that the book source code which 
is in Texinfo is also distributed with the program source code, enabling you to 
do such customization (hacking).
 
 @cindex GNU Texinfo
-In order to change the paper size, you will need to have GNU Texinfo
-installed. Open @file{doc/gnuastro.texi} with any text editor. This is the
-source file that created this book. In the first few lines you will see
-this line:
+In order to change the paper size, you will need to have GNU Texinfo installed.
+Open @file{doc/gnuastro.texi} with any text editor.
+This is the source file that created this book.
+In the first few lines you will see this line:
 
 @example
 @@c@@afourpaper
 @end example
 
 @noindent
-In Texinfo, a line is commented with @code{@@c}. Therefore, un-comment this
-line by deleting the first two characters such that it changes to:
+In Texinfo, a line is commented with @code{@@c}.
+Therefore, un-comment this line by deleting the first two characters such that 
it changes to:
 
 @example
 @@afourpaper
 @end example
 
 @noindent
-Save the file and close it. You can now run
+Save the file and close it.
+You can now run the following command
 
 @example
 $ make pdf
 @end example
 
 @noindent
-and the new PDF book will be available in
-@file{SRCdir/doc/gnuastro.pdf}. By changing the @command{pdf} in
-@command{$ make pdf} to @command{ps} or @command{dvi} you can have the
-book in those formats. Note that you can do this for any book that
-is in Texinfo format, they might not have @code{@@afourpaper} line, so
-you can add it close to the top of the Texinfo source file.
+and the new PDF book will be available in @file{SRCdir/doc/gnuastro.pdf}.
+By changing the @command{pdf} in @command{$ make pdf} to @command{ps} or 
@command{dvi} you can have the book in those formats.
+Note that you can do this for any book that is in Texinfo format; it might 
not have an @code{@@afourpaper} line, in which case you can add it close to the 
top of the Texinfo source file.
 
 
 
@@ -7164,37 +5390,32 @@ you can add it close to the top of the Texinfo source 
file.
 @node Known issues,  , A4 print book, Build and install
 @subsection Known issues
 
-Depending on your operating system and the version of the compiler you
-are using, you might confront some known problems during the
-configuration (@command{$ ./configure}), compilation (@command{$
-make}) and tests (@command{$ make check}). Here, their solutions are
-discussed.
+Depending on your operating system and the version of the compiler you are 
using, you might encounter some known problems during the configuration 
(@command{$ ./configure}), compilation (@command{$ make}) and tests (@command{$ 
make check}).
+Here, their solutions are discussed.
 
 @itemize
 @cindex Configuration, not finding library
 @cindex Development packages
 @item
-@command{$ ./configure}: @emph{Configure complains about not finding a
-library even though you have installed it.} The possible solution is
-based on how you installed the package:
+@command{$ ./configure}: @emph{Configure complains about not finding a library 
even though you have installed it.}
+The possible solution is based on how you installed the package:
 
 @itemize
 @item
-From your distribution's package manager. Most probably this is
-because your distribution has separated the header files of a library
-from the library parts. Please also install the `development' packages
-for those libraries too. Just add a @file{-dev} or @file{-devel} to
-the end of the package name and re-run the package manager. This will
-not happen if you install the libraries from source. When installed
-from source, the headers are also installed.
+From your distribution's package manager.
+Most probably this is because your distribution has separated the header files 
of a library from the library parts.
+Please also install the `development' packages for those libraries.
+Just add a @file{-dev} or @file{-devel} to the end of the package name and 
re-run the package manager.
+This will not happen if you install the libraries from source.
+When installed from source, the headers are also installed.
 
 @item
 @cindex @command{LDFLAGS}
-From source. Then your linker is not looking where you installed the
-library. If you followed the instructions in this chapter, all the
-libraries will be installed in @file{/usr/local/lib}. So you have to tell
-your linker to look in this directory. To do so, configure Gnuastro like
-this:
+From source.
+Then your linker is not looking where you installed the library.
+If you followed the instructions in this chapter, all the libraries will be 
installed in @file{/usr/local/lib}.
+So you have to tell your linker to look in this directory.
+To do so, configure Gnuastro like this:
 
 @example
 $ ./configure LDFLAGS="-L/usr/local/lib"
@@ -7210,96 +5431,64 @@ directory}.
 @vindex --enable-gnulibcheck
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-@command{$ make}: @emph{Complains about an unknown function on a
-non-GNU based operating system.} In this case, please run @command{$
-./configure} with the @option{--enable-gnulibcheck} option to see if
-the problem is from the GNU Portability Library (Gnulib) not
-supporting your system or if there is a problem in Gnuastro, see
-@ref{Gnuastro configure options}. If the problem is not
-in Gnulib
-and after all its tests you get the same complaint from
-@command{make}, then please contact us at
-@file{bug-gnuastro@@gnu.org}. The cause is probably that a function
-that we have used is not supported by your operating system and we
-didn't included it along with the source tar ball. If the function is
-available in Gnulib, it can be fixed immediately.
+@command{$ make}: @emph{Complains about an unknown function on a non-GNU based 
operating system.}
+In this case, please run @command{$ ./configure} with the 
@option{--enable-gnulibcheck} option to see if the problem is from the GNU 
Portability Library (Gnulib) not supporting your system or if there is a 
problem in Gnuastro, see @ref{Gnuastro configure options}.
+If the problem is not in Gnulib and after all its tests you get the same 
complaint from @command{make}, then please contact us at 
@file{bug-gnuastro@@gnu.org}.
+The cause is probably that a function that we have used is not supported by 
your operating system and we didn't include it along with the source tarball.
+If the function is available in Gnulib, it can be fixed immediately.
 
 @item
 @cindex @command{CPPFLAGS}
-@command{$ make}: @emph{Can't find the headers (.h files) of installed
-libraries.} Your C pre-processor (CPP) isn't looking in the right place. To
-fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below
-(assuming the library is installed in @file{/usr/local/include}:
+@command{$ make}: @emph{Can't find the headers (.h files) of installed 
libraries.}
+Your C pre-processor (CPP) isn't looking in the right place.
+To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below 
(assuming the library is installed in @file{/usr/local/include}):
 
 @example
 $ ./configure CPPFLAGS="-I/usr/local/include"
 @end example
 
-If you want to use the libraries for your other programming projects, then
-export this environment variable in a start-up script similar to the case
-for @file{LD_LIBRARY_PATH} explained below, also see @ref{Installation
-directory}.
+If you want to use the libraries for your other programming projects, then 
export this environment variable in a start-up script similar to the case for 
@file{LD_LIBRARY_PATH} explained below, also see @ref{Installation directory}.
 
 @cindex Tests, only one passes
 @cindex @file{LD_LIBRARY_PATH}
 @item
-@command{$ make check}: @emph{Only the first couple of tests pass, all the
-rest fail or get skipped.}  It is highly likely that when searching for
-shared libraries, your system doesn't look into the @file{/usr/local/lib}
-directory (or wherever you installed Gnuastro or its dependencies). To make
-sure it is added to the list of directories, add the following line to your
-@file{~/.bashrc} file and restart your terminal. Don't forget to change
-@file{/usr/local/lib} if the libraries are installed in other
-(non-standard) directories.
+@command{$ make check}: @emph{Only the first couple of tests pass; all the 
rest fail or get skipped.}  It is highly likely that when searching for shared 
libraries, your system doesn't look into the @file{/usr/local/lib} directory 
(or wherever you installed Gnuastro or its dependencies).
+To make sure it is added to the list of directories, add the following line to 
your @file{~/.bashrc} file and restart your terminal.
+Don't forget to change @file{/usr/local/lib} if the libraries are installed in 
other (non-standard) directories.
 
 @example
 export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
 @end example
 
-You can also add more directories by using a colon `@code{:}' to separate
-them. See @ref{Installation directory} and @ref{Linking} to learn more on
-the @code{PATH} variables and dynamic linking respectively.
+You can also add more directories by using a colon `@code{:}' to separate them.
+See @ref{Installation directory} and @ref{Linking} to learn more on the 
@code{PATH} variables and dynamic linking respectively.
 
 @cindex GPL Ghostscript
 @item
-@command{$ make check}: @emph{The tests relying on external programs
-(for example @file{fitstopdf.sh} fail.}) This is probably due to the
-fact that the version number of the external programs is too old for
-the tests we have preformed. Please update the program to a more
-recent version. For example to create a PDF image, you will need GPL
-Ghostscript, but older versions do not work, we have successfully
-tested it on version 9.15. Older versions might cause a failure in the
-test result.
+@command{$ make check}: @emph{The tests relying on external programs (for 
example @file{fitstopdf.sh}) fail.} This is probably because the version of 
the external program is too old for the tests we have performed.
+Please update the program to a more recent version.
+For example, to create a PDF image you will need GPL Ghostscript; older 
versions do not work, but we have successfully tested version 9.15.
+Older versions might cause a failure in the test result.
 
 @item
 @cindex @TeX{}
 @cindex GNU Texinfo
-@command{$ make pdf}: @emph{The PDF book cannot be made.} To make a
-PDF book, you need to have the GNU Texinfo program (like any
-program, the more recent the better). A working @TeX{} program is also
-necessary, which you can get from Tex
-Live@footnote{@url{https://www.tug.org/texlive/}}.
+@command{$ make pdf}: @emph{The PDF book cannot be made.}
+To make a PDF book, you need to have the GNU Texinfo program (like any 
program, the more recent the better).
+A working @TeX{} program is also necessary, which you can get from TeX 
Live@footnote{@url{https://www.tug.org/texlive/}}.
 
 @item
 @cindex GNU Libtool
-After @code{make check}: do not copy the programs' executables to another
-(for example, the installation) directory manually (using @command{cp}, or
-@command{mv} for example). In the default configuration@footnote{If you
-configure Gnuastro with the @option{--disable-shared} option, then the
-libraries will be statically linked to the programs and this problem won't
-exist, see @ref{Linking}.}, the program binaries need to link with
-Gnuastro's shared library which is also built and installed with the
-programs. Therefore, to run successfully before and after installation,
-linking modifications need to be made by GNU Libtool at installation
-time. @command{make install} does this internally, but a simple copy might
-give linking errors when you run it. If you need to copy the executables,
-you can do so after installation.
+After @code{make check}: do not copy the programs' executables to another (for 
example, the installation) directory manually (using @command{cp} or 
@command{mv}, for example).
+In the default configuration@footnote{If you configure Gnuastro with the 
@option{--disable-shared} option, then the libraries will be statically linked 
to the programs and this problem won't exist, see @ref{Linking}.}, the program 
binaries need to link with Gnuastro's shared library which is also built and 
installed with the programs.
+Therefore, to run successfully before and after installation, linking 
modifications need to be made by GNU Libtool at installation time.
+@command{make install} does this internally, but a simple copy might give 
linking errors when you run it.
+If you need to copy the executables, you can do so after installation.
 
 @end itemize
 
 @noindent
-If your problem was not listed above, please file a bug report
-(@ref{Report a bug}).
+If your problem was not listed above, please file a bug report (@ref{Report a 
bug}).
 
 
 
@@ -7322,52 +5511,27 @@ If your problem was not listed above, please file a bug 
report
 @node Common program behavior, Data containers, Installation, Top
 @chapter Common program behavior
 
-All the programs in Gnuastro share a set of common behavior mainly to do
-with user interaction to facilitate their usage and development. This
-includes how to feed input datasets into the programs, how to configure
-them, specifying the outputs, numerical data types, treating columns of
-information in tables, etc. This chapter is devoted to describing this
-common behavior in all programs. Because the behaviors discussed here are
-common to several programs, they are not repeated in each program's
-description.
-
-In @ref{Command-line}, a very general description of running the programs
-on the command-line is discussed, like difference between arguments and
-options, as well as options that are common/shared between all
-programs. None of Gnuastro's programs keep any internal configuration value
-(values for their different operational steps), they read their
-configuration primarily from the command-line, then from specific files in
-directory, user, or system-wide settings. Using these configuration files
-can greatly help reproducible and robust usage of Gnuastro, see
-@ref{Configuration files} for more.
-
-It is not possible to always have the different options and configurations
-of each program on the top of your head. It is very natural to forget the
-options of a program, their current default values, or how it should be run
-and what it did. Gnuastro's programs have multiple ways to help you refresh
-your memory in multiple levels (just an option name, a short description,
-or fast access to the relevant section of the manual. See @ref{Getting
-help} for more for more on benefiting from this very convenient feature.
-
-Many of the programs use the multi-threaded character of modern CPUs, in
-@ref{Multi-threaded operations} we'll discuss how you can configure this
-behavior, along with some tips on making best use of them. In @ref{Numeric
-data types}, we'll review the various types to store numbers in your
-datasets: setting the proper type for the usage context@footnote{For
-example if the values in your dataset can only be integers between 0 or
-65000, store them in a unsigned 16-bit type, not 64-bit floating point type
-(which is the default in most systems). It takes four times less space and
-is much faster to process.} can greatly improve the file size and also speed
-of reading, writing or processing them.
-
-We'll then look into the recognized table formats in @ref{Tables} and how
-large datasets are broken into tiles, or mesh grid in
-@ref{Tessellation}. Finally, we'll take a look at the behavior regarding
-output files: @ref{Automatic output} describes how the programs set a
-default name for their output when you don't give one explicitly (using
-@option{--output}). When the output is a FITS file, all the programs also
-store some very useful information in the header that is discussed in
-@ref{Output FITS files}.
+All the programs in Gnuastro share a set of common behavior mainly to do with 
user interaction to facilitate their usage and development.
+This includes how to feed input datasets into the programs, how to configure 
them, specifying the outputs, numerical data types, treating columns of 
information in tables, etc.
+This chapter is devoted to describing this common behavior in all programs.
+Because the behaviors discussed here are common to several programs, they are 
not repeated in each program's description.
+
+In @ref{Command-line}, a very general description of running the programs on 
the command-line is discussed, like difference between arguments and options, 
as well as options that are common/shared between all programs.
+None of Gnuastro's programs keep any internal configuration values (values for 
their different operational steps); they read their configuration primarily 
from the command-line, then from specific files with directory-, user-, or 
system-wide settings.
+Using these configuration files can greatly help reproducible and robust usage 
of Gnuastro, see @ref{Configuration files} for more.
+
+It is not possible to always have the different options and configurations of 
each program on the top of your head.
+It is very natural to forget the options of a program, their current default 
values, or how it should be run and what it did.
+Gnuastro's programs have multiple ways to help you refresh your memory at 
multiple levels (just an option name, a short description, or fast access to 
the relevant section of the manual).
+See @ref{Getting help} for more on benefiting from this very convenient 
feature.
+
+Many of the programs use the multi-threaded character of modern CPUs; in 
@ref{Multi-threaded operations} we'll discuss how you can configure this 
behavior, along with some tips on making best use of them.
+In @ref{Numeric data types}, we'll review the various types to store numbers 
in your datasets: setting the proper type for the usage context@footnote{For 
example, if the values in your dataset can only be integers between 0 and 
65000, store them in an unsigned 16-bit type, not a 64-bit floating point type 
(which is the default in most systems).
+It takes four times less space and is much faster to process.} can greatly 
improve the file size and also the speed of reading, writing or processing them.
+
+We'll then look into the recognized table formats in @ref{Tables} and how 
large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
+Finally, we'll take a look at the behavior regarding output files: 
@ref{Automatic output} describes how the programs set a default name for their 
output when you don't give one explicitly (using @option{--output}).
+When the output is a FITS file, all the programs also store some very useful 
information in the header that is discussed in @ref{Output FITS files}.
 
 @menu
 * Command-line::                How to use the command-line.
@@ -7385,13 +5549,10 @@ store some very useful information in the header that 
is discussed in
 @node Command-line, Configuration files, Common program behavior, Common 
program behavior
 @section Command-line
 
-Gnuastro's programs are customized through the standard Unix-like
-command-line environment and GNU style command-line options. Both are very
-common in many Unix-like operating system programs. In @ref{Arguments and
-options} we'll start with the difference between arguments and options and
-elaborate on the GNU style of options. Afterwards, in @ref{Common options},
-we'll go into the detailed list of all the options that are common to all
-the programs in Gnuastro.
+Gnuastro's programs are customized through the standard Unix-like command-line 
environment and GNU style command-line options.
+Both are very common in many Unix-like operating system programs.
+In @ref{Arguments and options} we'll start with the difference between 
arguments and options and elaborate on the GNU style of options.
+Afterwards, in @ref{Common options}, we'll go into the detailed list of all 
the options that are common to all the programs in Gnuastro.
 
 @menu
 * Arguments and options::       Different ways to specify inputs and 
configuration.
@@ -7407,73 +5568,47 @@ the programs in Gnuastro.
 @cindex Command-line options
 @cindex Arguments to programs
 @cindex Command-line arguments
-When you type a command on the command-line, it is passed onto the shell (a
-generic name for the program that manages the command-line) as a string of
-characters. As an example, see the ``Invoking ProgramName'' sections in
-this manual for some examples of commands with each program, like
-@ref{Invoking asttable}, @ref{Invoking astfits}, or @ref{Invoking
-aststatistics}.
+When you type a command on the command-line, it is passed onto the shell (a 
generic name for the program that manages the command-line) as a string of 
characters.
+As an example, see the ``Invoking ProgramName'' sections in this manual for 
some examples of commands with each program, like @ref{Invoking asttable}, 
@ref{Invoking astfits}, or @ref{Invoking aststatistics}.
 
-The shell then brakes up your string into separate @emph{tokens} or
-@emph{words} using any @emph{metacharacters} (like white-space, tab,
-@command{|}, @command{>} or @command{;}) that are in the string. On the
-command-line, the first thing you usually enter is the name of the program
-you want to run. After that, you can specify two types of tokens:
-@emph{arguments} and @emph{options}. In the GNU-style, arguments are those
-tokens that are not preceded by any hyphens (@command{-}, see
-@ref{Arguments}). Here is one example:
+The shell then breaks up your string into separate @emph{tokens} or 
@emph{words} using any @emph{metacharacters} (like white-space, tab, 
@command{|}, @command{>} or @command{;}) that are in the string.
+On the command-line, the first thing you usually enter is the name of the 
program you want to run.
+After that, you can specify two types of tokens: @emph{arguments} and 
@emph{options}.
+In the GNU-style, arguments are those tokens that are not preceded by any 
hyphens (@command{-}, see @ref{Arguments}).
+Here is one example:
 
 @example
 $ astcrop --center=53.162551,-27.789676 -w10/3600 --mode=wcs udf.fits
 @end example
 
-In the example above, we are running @ref{Crop} to crop a region of width
-10 arc-seconds centered at the given RA and Dec from the input Hubble
-Ultra-Deep Field (UDF) FITS image. Here, the argument is
-@file{udf.fits}. Arguments are most commonly the input file names
-containing your data. Options start with one or two hyphens, followed by an
-identifier for the option (the option's name, for example,
-@option{--center}, @option{-w}, @option{--mode} in the example above) and
-its value (anything after the option name, or the optional @key{=}
-character). Through options you can configure how the program runs
-(interprets the data you provided).
+In the example above, we are running @ref{Crop} to crop a region of width 10 
arc-seconds centered at the given RA and Dec from the input Hubble Ultra-Deep 
Field (UDF) FITS image.
+Here, the argument is @file{udf.fits}.
+Arguments are most commonly the input file names containing your data.
+Options start with one or two hyphens, followed by an identifier for the 
option (the option's name, for example, @option{--center}, @option{-w}, 
@option{--mode} in the example above) and its value (anything after the option 
name, or the optional @key{=} character).
+Through options you can configure how the program runs (interprets the data 
you provided).
 
 @vindex --help
 @vindex --usage
 @cindex Mandatory arguments
-Arguments can be mandatory and optional and unlike options, they don't have
-any identifiers. Hence, when there multiple arguments, their order might
-also matter (for example in @command{cp} which is used for copying one file
-to another location). The outputs of @option{--usage} and @option{--help}
-shows which arguments are optional and which are mandatory, see
-@ref{--usage}.
-
-As their name suggests, @emph{options} can be considered to be optional and
-most of the time, you don't have to worry about what order you specify them
-in. When the order does matter, or the option can be invoked multiple
-times, it is explicitly mentioned in the ``Invoking ProgramName'' section
-of each program (this is a very important aspect of an option).
-
-@cindex Metacharacters on the command-line
-In case your arguments or option values contain any of the shell's
-meta-characters, you have to quote them. If there is only one such
-character, you can use a backslash (@command{\}) before it. If there are
-multiple, it might be easier to simply put your whole argument or option
-value inside of double quotes (@command{"}). In such cases, everything
-inside the double quotes will be seen as one token or word.
+Arguments can be mandatory or optional and, unlike options, they don't have 
any identifiers.
+Hence, when there are multiple arguments, their order might also matter (for 
example in @command{cp}, which is used for copying one file to another location).
+The outputs of @option{--usage} and @option{--help} show which arguments are 
optional and which are mandatory, see @ref{--usage}.
+
+As their name suggests, @emph{options} can be considered to be optional and 
most of the time, you don't have to worry about what order you specify them in.
+When the order does matter, or the option can be invoked multiple times, it is 
explicitly mentioned in the ``Invoking ProgramName'' section of each program 
(this is a very important aspect of an option).
+
+@cindex Metacharacters on the command-line
+In case your arguments or option values contain any of the shell's 
meta-characters, you have to quote them.
+If there is only one such character, you can use a backslash (@command{\}) 
before it.
+If there are multiple, it might be easier to simply put your whole argument or 
option value inside of double quotes (@command{"}).
+In such cases, everything inside the double quotes will be seen as one token 
or word.
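+For example, the single space in the hypothetical file name below is escaped 
with a backslash, so the shell passes it as one token:
+
+@example
+$ astcrop udf\ crop.fits
+@end example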
 
 @cindex HDU
 @cindex Header data unit
-For example, let's say you want to specify the header data unit (HDU) of
-your FITS file using a complex expression like `@command{3; images(exposure
-> 100)}'. If you simply add these after the @option{--hdu} (@option{-h})
-option, the programs in Gnuastro will read the value to the HDU option as
-`@command{3}' and run. Then, the shell will attempt to run a separate
-command `@command{images(exposure > 100)}' and complain about a syntax
-error. This is because the semicolon (@command{;}) is an `end of command'
-character in the shell. To solve this problem you can simply put double
-quotes around the whole string you want to pass to @option{--hdu} as seen
-below:
+For example, let's say you want to specify the header data unit (HDU) of your 
FITS file using a complex expression like `@command{3; images(exposure > 100)}'.
+If you simply add these after the @option{--hdu} (@option{-h}) option, the 
programs in Gnuastro will read the value given to the HDU option as 
`@command{3}' and run.
+Then, the shell will attempt to run a separate command 
`@command{images(exposure > 100)}' and complain about a syntax error.
+This is because the semicolon (@command{;}) is an `end of command' character 
in the shell.
+To solve this problem you can simply put double quotes around the whole string 
you want to pass to @option{--hdu} as seen below:
 @example
 $ astcrop --hdu="3; images(exposure > 100)" image.fits
 @end example
@@ -7489,21 +5624,14 @@ $ astcrop --hdu="3; images(exposure > 100)" image.fits
 
 @node Arguments, Options, Arguments and options, Arguments and options
 @subsubsection Arguments
-In Gnuastro, arguments are almost exclusively used as the input data file
-names. Please consult the first few paragraph of the ``Invoking
-ProgramName'' section for each program for a description of what it expects
-as input, how many arguments, or input data, it accepts, or in what
-order. Everything particular about how a program treats arguments, is
-explained under the ``Invoking ProgramName'' section for that program.
-
-Generally, if there is a standard file name extension for a particular
-format, that filename extension is used to separate the kinds of
-arguments. The list below shows the data formats that are recognized in
-Gnuastro's programs based on their file name endings. Any argument that
-doesn't end with the specified extensions below is considered to be a text
-file (usually catalogs, see @ref{Tables}). In some cases, a program
-can accept specific formats, for example @ref{ConvertType} also accepts
-@file{.jpg} images.
+In Gnuastro, arguments are almost exclusively used as the input data file 
names.
+Please consult the first few paragraphs of the ``Invoking ProgramName'' 
section for each program for a description of what it expects as input, how 
many arguments (or input datasets) it accepts, and in what order.
+Everything particular about how a program treats arguments is explained under 
the ``Invoking ProgramName'' section for that program.
+
+Generally, if there is a standard file name extension for a particular format, 
that filename extension is used to separate the kinds of arguments.
+The list below shows the data formats that are recognized in Gnuastro's 
programs based on their file name endings.
+Any argument that doesn't end with the specified extensions below is 
considered to be a text file (usually catalogs, see @ref{Tables}).
+In some cases, a program can accept specific formats, for example 
@ref{ConvertType} also accepts @file{.jpg} images.
 
 @cindex Astronomical data suffixes
 @cindex Suffixes, astronomical data
@@ -7529,14 +5657,9 @@ can accept specific formats, for example 
@ref{ConvertType} also accepts
 
 @end itemize
 
-Through out this book and in the command-line outputs, whenever we
-want to generalize all such astronomical data formats in a text place
-holder, we will use @file{ASTRdata}, we will assume that the extension
-is also part of this name. Any file ending with these names is
-directly passed on to CFITSIO to read. Therefore you don't necessarily
-have to have these files on your computer, they can also be located on
-an FTP or HTTP server too, see the CFITSIO manual for more
-information.
+Throughout this book and in the command-line outputs, whenever we want to 
generalize all such astronomical data formats in a text placeholder, we will 
use @file{ASTRdata}; we will assume that the extension is also part of this 
name.
+Any file ending with these names is directly passed on to CFITSIO to read.
+Therefore you don't necessarily have to have these files on your computer; 
they can also be located on an FTP or HTTP server (see the CFITSIO manual for 
more information).
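+
+For example, in the hypothetical sketch below (the URL is just an assumption 
for illustration), CFITSIO will fetch the remote FITS file for the program:
+@example
+$ astfits http://example.com/image.fits
+@end example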
 
 CFITSIO has its own error reporting techniques; if your input file(s)
 cannot be opened or read, those errors will be printed prior to the
@@ -7550,35 +5673,26 @@ final error by Gnuastro.
 @cindex GNU style options
 @cindex Options, GNU style
 @cindex Options, short (@option{-}) and long (@option{--})
-Command-line options allow configuring the behavior of a program in all
-GNU/Linux applications for each particular execution on a particular input
-data. A single option can be called in two ways: @emph{long} or
-@emph{short}. All options in Gnuastro accept the long format which has two
-hyphens an can have many characters (for example @option{--hdu}). Short
-options only have one hyphen (@key{-}) followed by one character (for
-example @option{-h}). You can see some examples in the list of options in
-@ref{Common options} or those for each program's ``Invoking ProgramName''
-section. Both formats are shown for those which support both. First the
-short is shown then the long.
-
-Usually, the short options are for when you are writing on the command-line
-and want to save keystrokes and time. The long options are good for shell
-scripts, where you aren't usually rushing. Long options provide a level of
-documentation, since they are more descriptive and less cryptic. Usually
-after a few months of not running a program, the short options will be
-forgotten and reading your previously written script will not be easy.
+Command-line options allow configuring the behavior of a program in all 
GNU/Linux applications for each particular execution on a particular input 
dataset.
+A single option can be called in two ways: @emph{long} or @emph{short}.
+All options in Gnuastro accept the long format, which has two hyphens and can 
have many characters (for example @option{--hdu}).
+Short options only have one hyphen (@key{-}) followed by one character (for 
example @option{-h}).
+You can see some examples in the list of options in @ref{Common options} or 
those for each program's ``Invoking ProgramName'' section.
+Both formats are shown for those which support both.
+First the short is shown, then the long.
+
+Usually, the short options are for when you are writing on the command-line 
and want to save keystrokes and time.
+The long options are good for shell scripts, where you aren't usually rushing.
+Long options provide a level of documentation, since they are more descriptive 
and less cryptic.
+Usually after a few months of not running a program, the short options will be 
forgotten and reading your previously written script will not be easy.
 
 @cindex On/Off options
 @cindex Options, on/off
-Some options need to be given a value if they are called and some
-don't. You can think of the latter type of options as on/off options. These
-two types of options can be distinguished using the output of the
-@option{--help} and @option{--usage} options, which are common to all GNU
-software, see @ref{Getting help}. In Gnuastro we use the following strings
-to specify when the option needs a value and what format that value should
-be in. More specific tests will be done in the program and if the values
-are out of range (for example negative when the program only wants a
-positive value), an error will be reported.
+Some options need to be given a value if they are called and some don't.
+You can think of the latter type of options as on/off options.
+These two types of options can be distinguished using the output of the 
@option{--help} and @option{--usage} options, which are common to all GNU 
software, see @ref{Getting help}.
+In Gnuastro we use the following strings to specify when the option needs a 
value and what format that value should be in.
+More specific tests will be done in the program and if the values are out of 
range (for example negative when the program only wants a positive value), an 
error will be reported.
 
 @vtable @option
 
@@ -7586,9 +5700,9 @@ positive value), an error will be reported.
 The value is read as an integer.
 
 @item FLT
-The value is read as a float. There are generally two types, depending
-on the context. If they are for fractions, they will have to be less
-than or equal to unity.
+The value is read as a float.
+There are generally two types, depending on the context.
+If they are for fractions, they will have to be less than or equal to unity.
 
 @item STR
 The value is read as a string of characters (for example a file name)
@@ -7599,96 +5713,66 @@ or other particular settings like a HDU name, see below.
 @noindent
 @cindex Values to options
 @cindex Option values
-To specify a value in the short format, simply put the value after the
-option. Note that since the short options are only one character long,
-you don't have to type anything between the option and its value. For
-the long option you either need white space or an @option{=} sign, for
-example @option{-h2}, @option{-h 2}, @option{--hdu 2} or
-@option{--hdu=2} are all equivalent.
-
-The short format of on/off options (those that don't need values) can be
-concatenated for example these two hypothetical sequences of options are
-equivalent: @option{-a -b -c4} and @option{-abc4}.  As an example, consider
-the following command to run Crop:
+To specify a value in the short format, simply put the value after the option.
+Note that since the short options are only one character long, you don't have 
to type anything between the option and its value.
+For the long option you either need white space or an @option{=} sign, for 
example @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are 
all equivalent.
+
+The short format of on/off options (those that don't need values) can be 
concatenated; for example, these two hypothetical sequences of options are 
equivalent: @option{-a -b -c4} and @option{-abc4}.
+As an example, consider the following command to run Crop:
 @example
 $ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
 @end example
 @noindent
-The @command{$} is the shell prompt, @command{astcrop} is the
-program name. There are two arguments (@command{catalog.txt} and
-@command{ASTRdata}) and four options, two of them given in short
-format (@option{-D}, @option{-r}) and two in long format
-(@option{--width} and @option{--deccol}). Three of them require a
-value and one (@option{-D}) is an on/off option.
+The @command{$} is the shell prompt and @command{astcrop} is the program name.
+There are two arguments (@command{catalog.txt} and @command{ASTRdata}) and 
four options, two of them given in short format (@option{-D}, @option{-r}) and 
two in long format (@option{--wwidth} and @option{--deccol}).
+Three of them require a value and one (@option{-D}) is an on/off option.
 
 @vindex --printparams
 @cindex Options, abbreviation
 @cindex Long option abbreviation
-If an abbreviation is unique between all the options of a program, the
-long option names can be abbreviated. For example, instead of typing
-@option{--printparams}, typing @option{--print} or maybe even
-@option{--pri} will be enough, if there are conflicts, the program
-will warn you and show you the alternatives. Finally, if you want the
-argument parser to stop parsing arguments beyond a certain point, you
-can use two dashes: @option{--}. No text on the command-line beyond
-these two dashes will be parsed.
+If an abbreviation is unique among all the options of a program, the long 
option names can be abbreviated.
+For example, instead of typing @option{--printparams}, typing @option{--print} 
or maybe even @option{--pri} will be enough; if there are conflicts, the 
program will warn you and show you the alternatives.
+Finally, if you want the argument parser to stop parsing arguments beyond a 
certain point, you can use two dashes: @option{--}.
+No text on the command-line beyond these two dashes will be parsed.
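+
+For example, both lines below are hypothetical sketches of these two features 
(the file names are just assumptions):
+@example
+$ asttable catalog.fits --print      ## Abbreviation of --printparams.
+$ asttable -- -strange-name.txt      ## Argument that starts with a dash.
+@end example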
 
 @cindex Repeated options
 @cindex Options, repeated
-Gnuastro has two types of options with values, those that only take a
-single value are the most common type. If these options are repeated or
-called more than once on the command-line, the value of the last time it
-was called will be assigned to it. This is very useful when you are
-testing/experimenting. Let's say you want to make a small modification to
-one option value. You can simply type the option with a new value in the
-end of the command and see how the script works. If you are satisfied with
-the change, you can remove the original option for human readability. If
-the change wasn't satisfactory, you can remove the one you just added and
-not worry about forgetting the original value. Without this capability, you
-would have to memorize or save the original value somewhere else, run the
-command and then change the value again which is not at all convenient and
-is potentially cause lots of bugs.
-
-On the other hand, some options can be called multiple times in one run of
-a program and can thus take multiple values (for example see the
-@option{--column} option in @ref{Invoking asttable}. In these cases, the
-order of stored values is the same order that you specified on the
-command-line.
+Gnuastro has two types of options with values; those that only take a single 
value are the most common type.
+If such an option is repeated or called more than once on the command-line, 
the value of the last call will be assigned to it.
+This is very useful when you are testing/experimenting.
+Let's say you want to make a small modification to one option value.
+You can simply type the option with a new value at the end of the command and 
see how the script works.
+If you are satisfied with the change, you can remove the original option for 
human readability.
+If the change wasn't satisfactory, you can remove the one you just added and 
not worry about forgetting the original value.
+Without this capability, you would have to memorize or save the original value 
somewhere else, run the command and then change the value again, which is not 
at all convenient and can potentially cause lots of bugs.
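+
+For example, in the hypothetical sketch below, the later value (3) will be 
used for the crop's width:
+@example
+$ astcrop --mode=img --center=51,51 --width=2 image.fits --width=3
+@end example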
+
+On the other hand, some options can be called multiple times in one run of a 
program and can thus take multiple values (for example see the 
@option{--column} option in @ref{Invoking asttable}).
+In these cases, the order of stored values is the same order that you 
specified on the command-line.
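+
+For example, the hypothetical sketch below (assuming the catalog has columns 
named @code{RA} and @code{DEC}) will print those two columns in this order:
+@example
+$ asttable catalog.fits --column=RA --column=DEC
+@end example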
 
 @cindex Configuration files
 @cindex Default option values
-Gnuastro's programs don't keep any internal default values, so some options
-are mandatory and if they don't have a value, the program will complain and
-abort. Most programs have many such options and typing them by hand on
-every call is impractical. To facilitate the user experience, after parsing
-the command-line, Gnuastro's programs read special configuration files to
-get the necessary values for the options you haven't identified on the
-command-line. These configuration files are fully described in
-@ref{Configuration files}.
+Gnuastro's programs don't keep any internal default values, so some options 
are mandatory and if they don't have a value, the program will complain and 
abort.
+Most programs have many such options and typing them by hand on every call is 
impractical.
+To facilitate the user experience, after parsing the command-line, Gnuastro's 
programs read special configuration files to get the necessary values for the 
options you haven't given on the command-line.
+These configuration files are fully described in @ref{Configuration files}.
 
 @cartouche
 @noindent
 @cindex Tilde expansion as option values
-@strong{CAUTION:} In specifying a file address, if you want to use the
-shell's tilde expansion (@command{~}) to specify your home directory,
-leave at least one space between the option name and your value. For
-example use @command{-o ~/test}, @command{--output ~/test} or
-@command{--output= ~/test}. Calling them with @command{-o~/test} or
-@command{--output=~/test} will disable shell expansion.
+@strong{CAUTION:} In specifying a file address, if you want to use the shell's 
tilde expansion (@command{~}) to specify your home directory, leave at least 
one space between the option name and your value.
+For example use @command{-o ~/test}, @command{--output ~/test} or 
@command{--output= ~/test}.
+Calling them with @command{-o~/test} or @command{--output=~/test} will disable 
shell expansion.
 @end cartouche
 @cartouche
 @noindent
-@strong{CAUTION:} If you forget to specify a value for an option which
-requires one, and that option is the last one, Gnuastro will warn you. But
-if it is in the middle of the command, it will take the text of the next
-option or argument as the value which can cause undefined behavior.
+@strong{CAUTION:} If you forget to specify a value for an option which 
requires one, and that option is the last one, Gnuastro will warn you.
+But if it is in the middle of the command, it will take the text of the next 
option or argument as the value, which can cause undefined behavior.
 @end cartouche
 @cartouche
 @noindent
 @cindex Counting from zero.
-@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in
-others 1. You can assume by default that counting starts from 1, if it
-starts from 0 for a special option, it will be explicitly mentioned.
+@strong{NOTE:} In some contexts Gnuastro's counting starts from 0 and in 
others from 1.
+You can assume by default that counting starts from 1; if it starts from 0 for 
a special option, it will be explicitly mentioned.
 @end cartouche
 
 @node Common options, Standard input, Arguments and options, Command-line
@@ -7696,14 +5780,10 @@ starts from 0 for a special option, it will be 
explicitly mentioned.
 
 @cindex Options common to all programs
 @cindex Gnuastro common options
-To facilitate the job of the users and developers, all the programs in
-Gnuastro share some basic command-line options for the options that are
-common to many of the programs. The full list is classified as @ref{Input
-output options}, @ref{Processing options}, and @ref{Operating mode
-options}. In some programs, some of the options are irrelevant, but still
-recognized (you won't get an unrecognized option error, but the value isn't
-used). Unless otherwise mentioned, these options are identical between all
-programs.
+To facilitate the job of the users and developers, all the programs in 
Gnuastro share some basic command-line options that are common to many of the 
programs.
+The full list is classified as @ref{Input output options}, @ref{Processing 
options}, and @ref{Operating mode options}.
+In some programs, some of the options are irrelevant, but still recognized 
(you won't get an unrecognized option error, but the value isn't used).
+Unless otherwise mentioned, these options are identical between all programs.
 
 @menu
 * Input output options::        Common input/output options.
@@ -7722,110 +5802,78 @@ programs.
 @cindex Timeout
 @cindex Standard input
 @item --stdintimeout
-Number of micro-seconds to wait for writing/typing in the @emph{first line}
-of standard input from the command-line (see @ref{Standard input}). This is
-only relevant for programs that also accept input from the standard input,
-@emph{and} you want to manually write/type the contents on the
-terminal. When the standard input is already connected to a pipe (output of
-another program), there won't be any waiting (hence no timeout, thus making
-this option redundant).
-
-If the first line-break (for example with the @key{ENTER} key) is not
-provided before the timeout, the program will abort with an error that no
-input was given. Note that this time interval is @emph{only} for the first
-line that you type. Once the first line is given, the program will assume
-that more data will come and accept rest of your inputs without any time
-limit. You need to specify the ending of the standard input, for example by
-pressing @key{CTRL-D} after a new line.
-
-Note that any input you write/type into a program on the command-line with
-Standard input will be discarded (lost) once the program is finished. It is
-only recoverable manually from your command-line (where you actually typed)
-as long as the terminal is open. So only use this feature when you are sure
-that you don't need the dataset (or have a copy of it somewhere else).
+Number of microseconds to wait for writing/typing in the @emph{first line} of 
standard input from the command-line (see @ref{Standard input}).
+This is only relevant for programs that also accept input from the standard 
input, @emph{and} when you want to manually write/type the contents on the 
terminal.
+When the standard input is already connected to a pipe (output of another 
program), there won't be any waiting (hence no timeout, thus making this option 
redundant).
+
+If the first line-break (for example with the @key{ENTER} key) is not provided 
before the timeout, the program will abort with an error that no input was 
given.
+Note that this time interval is @emph{only} for the first line that you type.
+Once the first line is given, the program will assume that more data will come 
and accept the rest of your input without any time limit.
+You need to specify the ending of the standard input, for example by pressing 
@key{CTRL-D} after a new line.
+
+Note that any input you write/type into a program on the command-line via 
standard input will be discarded (lost) once the program is finished.
+It is only recoverable manually from your command-line (where you actually 
typed) as long as the terminal is open.
+So only use this feature when you are sure that you don't need the dataset (or 
have a copy of it somewhere else).
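+
+For example, compare the two hypothetical sketches below (the file name and 
value are just assumptions):
+@example
+$ cat table.txt | asttable           ## Piped input: no waiting.
+$ asttable --stdintimeout=5000000    ## Wait 5 seconds for typed input.
+@end example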
 
 
 @cindex HDU
 @cindex Header data unit
 @item -h STR/INT
 @itemx --hdu=STR/INT
-The name or number of the desired Header Data Unit, or HDU, in the FITS
-image. A FITS file can store multiple HDUs or extensions, each with either
-an image or a table or nothing at all (only a header). Note that counting
-of the extensions starts from 0(zero), not 1(one). Counting from 0 is
-forced on us by CFITSIO which directly reads the value you give with this
-option (see @ref{CFITSIO}). When specifying the name, case is not important
-so @command{IMAGE}, @command{image} or @command{ImAgE} are equivalent.
-
-CFITSIO has many capabilities to help you find the extension you want, far
-beyond the simple extension number and name. See CFITSIO manual's ``HDU
-Location Specification'' section for a very complete explanation with
-several examples. A @code{#} is appended to the string you specify for the
-HDU@footnote{With the @code{#} character, CFITSIO will only read the
-desired HDU into your memory, not all the existing HDUs in the fits file.}
-and the result is put in square brackets and appended to the FITS file name
-before calling CFITSIO to read the contents of the HDU for all the programs
-in Gnuastro.
+The name or number of the desired Header Data Unit, or HDU, in the FITS image.
+A FITS file can store multiple HDUs or extensions, each with either an image 
or a table or nothing at all (only a header).
+Note that counting of the extensions starts from 0 (zero), not 1 (one).
+Counting from 0 is forced on us by CFITSIO, which directly reads the value you 
give with this option (see @ref{CFITSIO}).
+When specifying the name, case is not important, so @command{IMAGE}, 
@command{image} or @command{ImAgE} are equivalent.
+
+CFITSIO has many capabilities to help you find the extension you want, far 
beyond the simple extension number and name.
+See the CFITSIO manual's ``HDU Location Specification'' section for a very 
complete explanation with several examples.
+A @code{#} is appended to the string you specify for the HDU@footnote{With the 
@code{#} character, CFITSIO will only read the desired HDU into your memory, 
not all the existing HDUs in the FITS file.} and the result is put in square 
brackets and appended to the FITS file name before calling CFITSIO to read the 
contents of the HDU for all the programs in Gnuastro.
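+
+For example, the two hypothetical sketches below are equivalent when the 
second HDU of @file{image.fits} is named @code{IMAGE}:
+@example
+$ astfits image.fits --hdu=1
+$ astfits image.fits --hdu=IMAGE
+@end example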
 
 @item -s STR
 @itemx --searchin=STR
-Where to match/search for columns when the column identifier wasn't a
-number, see @ref{Selecting table columns}. The acceptable values are
-@command{name}, @command{unit}, or @command{comment}. This option is only
-relevant for programs that take table columns as input.
+Where to match/search for columns when the column identifier wasn't a number, 
see @ref{Selecting table columns}.
+The acceptable values are @command{name}, @command{unit}, or @command{comment}.
+This option is only relevant for programs that take table columns as input.
 
 @item -I
 @itemx --ignorecase
-Ignore case while matching/searching column meta-data (in the field
-specified by the @option{--searchin}). The FITS standard suggests to treat
-the column names as case insensitive, which is strongly recommended here
-also but is not enforced. This option is only relevant for programs that
-take table columns as input.
+Ignore case while matching/searching column meta-data (in the field specified 
by @option{--searchin}).
+The FITS standard suggests treating the column names as case insensitive, 
which is strongly recommended here also, but is not enforced.
+This option is only relevant for programs that take table columns as input.
 
-This option is not relevant to @ref{BuildProgram}, hence in that program the
-short option @option{-I} is used for include directories, not to ignore
-case.
+This option is not relevant to @ref{BuildProgram}; in that program, the short 
option @option{-I} is used for include directories, not to ignore case.
 
 @item -o STR
 @itemx --output=STR
-The name of the output file or directory. With this option the automatic
-output names explained in @ref{Automatic output} are ignored.
+The name of the output file or directory.
+With this option the automatic output names explained in @ref{Automatic 
output} are ignored.
 
 @item -T STR
 @itemx --type=STR
-The data type of the output depending on the program context. This option
-isn't applicable to some programs like @ref{Fits} and will be ignored by
-them. The different acceptable values to this option are fully described in
-@ref{Numeric data types}.
+The data type of the output depending on the program context.
+This option isn't applicable to some programs like @ref{Fits} and will be 
ignored by them.
+The different acceptable values to this option are fully described in 
@ref{Numeric data types}.
 
 @item -D
 @itemx --dontdelete
-By default, if the output file already exists, Gnuastro's programs will
-silently delete it and put their own outputs in its place. When this option
-is activated, if the output file already exists, the programs will not
-delete it, will warn you, and will abort.
+By default, if the output file already exists, Gnuastro's programs will 
silently delete it and put their own outputs in its place.
+When this option is activated, if the output file already exists, the programs 
will not delete it, will warn you, and will abort.
 
 @item -K
 @itemx --keepinputdir
-In automatic output names, don't remove the directory information of the
-input file names. As explained in @ref{Automatic output}, if no output name
-is specified (with @option{--output}), then the output name will be made in
-the existing directory based on your input's file name (ignoring the
-directory of the input). If you call this option, the directory information
-of the input will be kept and the automatically generated output name will
-be in the same directory as the input (usually with a suffix added). Note
-that his is only relevant if you are running the program in a different
-directory than the input data.
+In automatic output names, don't remove the directory information of the input 
file names.
+As explained in @ref{Automatic output}, if no output name is specified (with 
@option{--output}), then the output name will be made in the existing directory 
based on your input's file name (ignoring the directory of the input).
+If you call this option, the directory information of the input will be kept 
and the automatically generated output name will be in the same directory as 
the input (usually with a suffix added).
+Note that this is only relevant if you are running the program in a different 
directory than the input data.
 
 @item -t STR
 @itemx --tableformat=STR
-The output table's type. This option is only relevant when the output is a
-table and its format cannot be deduced from its filename. For example, if a
-name ending in @file{.fits} was given to @option{--output}, then the
-program knows you want a FITS table. But there are two types of FITS
-tables: FITS ASCII, and FITS binary. Thus, with this option, the program is
-able to identify which type you want. The currently recognized values to
-this option are:
+The output table's type.
+This option is only relevant when the output is a table and its format cannot 
be deduced from its filename.
+For example, if a name ending in @file{.fits} was given to @option{--output}, 
then the program knows you want a FITS table.
+But there are two types of FITS tables: FITS ASCII, and FITS binary.
+Thus, with this option, the program is able to identify which type you want.
+The currently recognized values for this option are:
 
 @table @command
 @item txt
@@ -7843,66 +5891,44 @@ A FITS binary table (see @ref{Recognized table 
formats}).
 @node Processing options, Operating mode options, Input output options, Common 
options
 @subsubsection Processing options
 
-Some processing steps are common to several programs, so they are defined
-as common options to all programs. Note that this class of common options
-is thus necessarily less common between all the programs than those
-described in @ref{Input output options}, or @ref{Operating mode options}
-options. Also, if they are irrelevant for a program, these options will not
-display in the @option{--help} output of the program.
+Some processing steps are common to several programs, so they are defined as 
common options to all programs.
+Note that this class of common options is thus necessarily less common between 
all the programs than those described in @ref{Input output options} or 
@ref{Operating mode options}.
+Also, if they are irrelevant for a program, these options will not be 
displayed in the @option{--help} output of the program.
 
 @table @option
 
 @item --minmapsize=INT
-The minimum size (in bytes) to store the contents of each main processing
-array of a program as a file (on the non-volatile HDD/SSD), not in
-RAM. This can be very useful when you have limited RAM, but need to process
-large datasets which can be very memory intensive. In such scenarios,
-without this option, the program will crash.
-
-A random filename is assigned to the array. This file will keep the
-contents of the array as long as it is necessary and the program will
-delete it as soon as its not necessary any more.
-
-If the @file{.gnuastro} directory exists and is writable, then the random
-file will be placed in there. Otherwise, the @file{.gnuastro_mmap}
-directory will be checked. If @file{.gnuastro_mmap} does not exist, or
-@file{.gnuastro} is not writable, the random file will be directly written
-in the current directory with the @file{.gnuastro_mmap_} prefix.
-
-By default, the name of the created file, and its size (in bytes) is
-printed by the program when it is created and later, when its
-deleted/freed. These messages are useful to the user who has enough RAM,
-but has forgot to increase the value to @code{--minmapsize} (this is often
-the case). To suppress/disable such messages, use the @code{--quietmmap}
-option.
+The minimum size (in bytes) to store the contents of each main processing 
array of a program as a file (on the non-volatile HDD/SSD), not in RAM.
+This can be very useful when you have limited RAM, but need to process large 
datasets which can be very memory intensive.
+In such scenarios, without this option, the program will crash.
+
+A random filename is assigned to the array.
+This file will keep the contents of the array as long as it is necessary, and 
the program will delete it as soon as it is no longer necessary.
+
+If the @file{.gnuastro} directory exists and is writable, then the random file 
will be placed in there.
+Otherwise, the @file{.gnuastro_mmap} directory will be checked.
+If @file{.gnuastro_mmap} does not exist, or @file{.gnuastro} is not writable, 
the random file will be directly written in the current directory with the 
@file{.gnuastro_mmap_} prefix.
+
+By default, the name of the created file and its size (in bytes) are printed 
by the program when it is created and later, when it is deleted/freed.
+These messages are useful to a user who has enough RAM, but has forgotten to 
increase the value of @code{--minmapsize} (this is often the case).
 
-When this option has a value of @code{0} (zero, strongly discouraged, see
-box below), all arrays that use this feature in a program will actually be
-placed in a file (not in RAM). When this option is larger than all the
-input datasets, all arrays will be definitely allocated in RAM and the
-program will run MUCH faster.
-
-Please note that using a non-volatile file (in the HDD/SDD) instead of RAM
-can significantly increase the program's running time, especially on HDDs
-(where read/write is slower). So it is best to give this option large
-values by default. You can then decrease it for a specific program's
-invocation on a large input after you see memory issues arise (for example
-an error, or the program not aborting and fully consuming your memory).
-
-The random file will be deleted once it is no longer needed by the
-program. The @file{.gnuastro} directory will also be deleted if it has no
-other contents (you may also have configuration files in this directory,
-see @ref{Configuration files}). If you see randomly named files remaining
-in this directory when the program finishes normally, please send us a bug
-report so we address the problem, see @ref{Report a bug}.
+When this option has a value of @code{0} (zero, strongly discouraged, see box 
below), all arrays that use this feature in a program will actually be placed 
in a file (not in RAM).
+When this option is larger than all the input datasets, all arrays will 
definitely be allocated in RAM and the program will run MUCH faster.
+
+Please note that using a non-volatile file (on the HDD/SSD) instead of RAM can 
significantly increase the program's running time, especially on HDDs (where 
read/write is slower).
+So it is best to give this option large values by default.
+You can then decrease it for a specific program's invocation on a large input 
after you see memory issues arise (for example an error, or the program not 
aborting and fully consuming your memory).
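+
+For example, a hypothetical sketch that only memory-maps arrays larger than 
roughly one gigabyte (the input name is just an assumption):
+@example
+$ astnoisechisel image.fits --minmapsize=1000000000
+@end example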
+
+The random file will be deleted once it is no longer needed by the program.
+The @file{.gnuastro} directory will also be deleted if it has no other 
contents (you may also have configuration files in this directory, see 
@ref{Configuration files}).
+If you see randomly named files remaining in this directory when the program 
finishes normally, please send us a bug report so we can address the problem, 
see @ref{Report a bug}.
 
 @cartouche
 @noindent
-@strong{Limited number of memory-mapped files:} The operating system
-kernels usually support a limited number of memory-mapped files. Therefore
-never set @code{--minmapsize} to zero or a small number of bytes (so too
-many files are created). If the kernel capacity is exceeded, the program
-will crash.
+@strong{Limited number of memory-mapped files:} The operating system kernels 
usually support a limited number of memory-mapped files.
+Therefore never set @code{--minmapsize} to zero or a small number of bytes 
(otherwise too many files will be created).
+If the kernel capacity is exceeded, the program will crash.
 @end cartouche
 
 @item --quietmmap
@@ -7912,70 +5938,47 @@ for more.
 
 @item -Z INT[,INT[,...]]
 @itemx --tilesize=INT[,INT[,...]]
-The size of regular tiles for tessellation, see @ref{Tessellation}. For
-each dimension an integer length (in units of data-elements or pixels) is
-necessary. If the number of input dimensions is different from the number
-of values given to this option, the program will stop with an error. Values
-must be separated by commas (@key{,}) and can also be fractions (for
-example @code{4/2}). If they are fractions, the result must be an integer,
-otherwise an error will be printed.
+The size of regular tiles for tessellation, see @ref{Tessellation}.
+For each dimension an integer length (in units of data-elements or pixels) is 
necessary.
+If the number of input dimensions is different from the number of values given 
to this option, the program will stop with an error.
+Values must be separated by commas (@key{,}) and can also be fractions (for 
example @code{4/2}).
+If they are fractions, the result must be an integer; otherwise an error will 
be printed.
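+
+For example, a hypothetical sketch over a 2D image with 30x30 pixel tiles:
+@example
+$ astnoisechisel image.fits --tilesize=30,30
+@end example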
 
 @item -M INT[,INT[,...]]
 @itemx --numchannels=INT[,INT[,...]]
-The number of channels for larger input tessellation, see
-@ref{Tessellation}. The number and types of acceptable values are similar
-to @option{--tilesize}. The only difference is that instead of length, the
-integers values given to this option represent the @emph{number} of
-channels, not their size.
+The number of channels for larger input tessellation, see @ref{Tessellation}.
+The number and types of acceptable values are similar to @option{--tilesize}.
+The only difference is that instead of length, the integer values given to 
this option represent the @emph{number} of channels, not their size.
 
 @item -F FLT
 @itemx --remainderfrac=FLT
-The fraction of remainder size along all dimensions to add to the first
-tile. See @ref{Tessellation} for a complete description. This option is
-only relevant if @option{--tilesize} is not exactly divisible by the input
-dataset's size in a dimension. If the remainder size is larger than this
-fraction (compared to @option{--tilesize}), then the remainder size will be
-added with one regular tile size and divided between two tiles at the start
-and end of the given dimension.
+The fraction of remainder size along all dimensions to add to the first tile.
+See @ref{Tessellation} for a complete description.
+This option is only relevant if @option{--tilesize} is not exactly divisible 
by the input dataset's size in a dimension.
+If the remainder size is larger than this fraction (compared to 
@option{--tilesize}), then the remainder size will be added with one regular 
tile size and divided between two tiles at the start and end of the given 
dimension.
 
 @item --workoverch
-Ignore the channel borders for the high-level job of the given
-application. As a result, while the channel borders are respected in
-defining the small tiles (such that no tile will cross a channel border),
-the higher-level program operation will ignore them, see
-@ref{Tessellation}.
+Ignore the channel borders for the high-level job of the given application.
+As a result, while the channel borders are respected in defining the small 
tiles (such that no tile will cross a channel border), the higher-level program 
operation will ignore them, see @ref{Tessellation}.
 
 @item --checktiles
-Make a FITS file with the same dimensions as the input but each pixel is
-replaced with the ID of the tile that it is associated with. Note that the
-tile IDs start from 0. See @ref{Tessellation} for more on Tiling an image
-in Gnuastro.
+Make a FITS file with the same dimensions as the input but each pixel is 
replaced with the ID of the tile that it is associated with.
+Note that the tile IDs start from 0.
+See @ref{Tessellation} for more on Tiling an image in Gnuastro.
 
 @item --oneelempertile
-When showing the tile values (for example with @option{--checktiles}, or
-when the program's output is tessellated) only use one element for each
-tile. This can be useful when only the relative values given to each tile
-compared to the rest are important or need to be checked. Since the tiles
-usually have a large number of pixels within them the output will be much
-smaller, and so easier to read, write, store, or send.
-
-Note that when the full input size in any dimension is not exactly
-divisible by the given @option{--tilesize} in that dimension, the edge
-tile(s) will have different sizes (in units of the input's size), see
-@option{--remainderfrac}. But with this option, all displayed values are
-going to have the (same) size of one data-element. Hence, in such cases,
-the image proportions are going to be slightly different with this
-option.
+When showing the tile values (for example with @option{--checktiles}, or when 
the program's output is tessellated) only use one element for each tile.
+This can be useful when only the relative values given to each tile compared 
to the rest are important or need to be checked.
+Since the tiles usually have a large number of pixels within them, the output 
will be much smaller, and so easier to read, write, store, or send.
+
+Note that when the full input size in any dimension is not exactly divisible 
by the given @option{--tilesize} in that dimension, the edge tile(s) will have 
different sizes (in units of the input's size), see @option{--remainderfrac}.
+But with this option, all displayed values are going to have the (same) size 
of one data-element.
+Hence, in such cases, the image proportions are going to be slightly different 
with this option.
 
-If your input image is not exactly divisible by the tile size and you want
-one value per tile for some higher-level processing, all is not lost
-though. You can see how many pixels were within each tile (for example to
-weight the values or discard some for later processing) with Gnuastro's
-Statistics (see @ref{Statistics}) as shown below. The output FITS file is
-going to have two extensions, one with the median calculated on each tile
-and one with the number of elements that each tile covers. You can then use
-the @code{where} operator in @ref{Arithmetic} to set the values of all
-tiles that don't have the regular area to a blank value.
+If your input image is not exactly divisible by the tile size and you want one 
value per tile for some higher-level processing, all is not lost though.
+You can see how many pixels were within each tile (for example to weight the 
values or discard some for later processing) with Gnuastro's Statistics (see 
@ref{Statistics}) as shown below.
+The output FITS file is going to have two extensions, one with the median 
calculated on each tile and one with the number of elements that each tile 
covers.
+You can then use the @code{where} operator in @ref{Arithmetic} to set the 
values of all tiles that don't have the regular area to a blank value.
 
 @example
 $ aststatistics --median --number --ontile input.fits    \
@@ -7999,16 +6002,13 @@ elements, keep the non-blank elements untouched.
 @cindex Taxicab metric
 @cindex Manhattan metric
 @cindex Metric: Manhattan, Taxicab, Radial
-The metric to use for finding nearest neighbors. Currently it only accepts
-the Manhattan (or taxicab) metric with @code{manhattan}, or the radial
-metric with @code{radial}.
+The metric to use for finding nearest neighbors.
+Currently it only accepts the Manhattan (or taxicab) metric with 
@code{manhattan}, or the radial metric with @code{radial}.
 
-The Manhattan distance between two points is defined with
-@mymath{|\Delta{x}|+|\Delta{y}|}. Thus the Manhattan metric has the
-advantage of being fast, but at the expense of being less accurate. The
-radial distance is the standard definition of distance in a Euclidean
-space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}. It is accurate, but the
-multiplication and square root can slow down the processing.
+The Manhattan distance between two points is defined as 
@mymath{|\Delta{x}|+|\Delta{y}|}.
+Thus the Manhattan metric has the advantage of being fast, but at the expense 
of being less accurate.
+The radial distance is the standard definition of distance in a Euclidean 
space: @mymath{\sqrt{\Delta{x}^2+\Delta{y}^2}}.
+It is accurate, but the multiplication and square root can slow down the 
processing.
 
 @item --interpnumngb=INT
 The number of nearby non-blank neighbors to use for interpolation.
@@ -8017,145 +6017,97 @@ The number of nearby non-blank neighbors to use for 
interpolation.
 @node Operating mode options,  , Processing options, Common options
 @subsubsection Operating mode options
 
-Another group of options that are common to all the programs in Gnuastro
-are those to do with the general operation of the programs. The explanation
-for those that are not only limited to Gnuastro but are common to all GNU
-programs start with (GNU option).
+Another group of options that are common to all the programs in Gnuastro are 
those related to the general operation of the programs.
+The explanations for those that are not limited to Gnuastro but are common to 
all GNU programs start with (GNU option).
 
 @vtable @option
 
 @item --
-(GNU option) Stop parsing the command-line. This option can be useful in
-scripts or when using the shell history. Suppose you have a long list of
-options, and want to see if removing some of them (to read from
-configuration files, see @ref{Configuration files}) can give a better
-result. If the ones you want to remove are the last ones on the
-command-line, you don't have to delete them, you can just add @option{--}
-before them and if you don't get what you want, you can remove the
-@option{--} and get the same initial result.
+(GNU option) Stop parsing the command-line.
+This option can be useful in scripts or when using the shell history.
+Suppose you have a long list of options, and want to see if removing some of 
them (to read from configuration files, see @ref{Configuration files}) can give 
a better result.
+If the ones you want to remove are the last ones on the command-line, you 
don't have to delete them; you can just add @option{--} before them, and if you 
don't get what you want, you can remove the @option{--} and get the same 
initial result.
 
 @item --usage
-(GNU option) Only print the options and arguments and abort. This is very
-useful for when you know the what the options do, and have just forgot
-their long/short identifiers, see @ref{--usage}.
+(GNU option) Only print the options and arguments and abort.
+This is very useful for when you know what the options do, and have just 
forgotten their long/short identifiers, see @ref{--usage}.
 
 @item -?
 @itemx --help
-(GNU option) Print all options with an explanation and abort. Adding this
-option will print all the options in their short and long formats, also
-displaying which ones need a value if they are called (with an @option{=}
-after the long format followed by a string specifying the format, see
-@ref{Options}). A short explanation is also given for what the option is
-for. The program will quit immediately after the message is printed and
-will not do any form of processing, see @ref{--help}.
+(GNU option) Print all options with an explanation and abort.
+Adding this option will print all the options in their short and long formats, 
also displaying which ones need a value if they are called (with an @option{=} 
after the long format followed by a string specifying the format, see 
@ref{Options}).
+A short explanation is also given about what each option is for.
+The program will quit immediately after the message is printed and will not do 
any form of processing, see @ref{--help}.
 
 @item -V
 @itemx --version
-(GNU option) Print a short message, showing the full name, version,
-copyright information and program authors and abort. On the first line, it
-will print the official name (not executable name) and version number of
-the program. Following this is a blank line and a copyright
-information. The program will not run.
+(GNU option) Print a short message, showing the full name, version, copyright 
information and program authors and abort.
+On the first line, it will print the official name (not executable name) and 
version number of the program.
+Following this is a blank line and the copyright information.
+The program will not run.
 
 @item -q
 @itemx --quiet
-Don't report steps. All the programs in Gnuastro that have multiple major
-steps will report their steps for you to follow while they are
-operating. If you do not want to see these reports, you can call this
-option and only error/warning messages will be printed. If the steps are
-done very fast (depending on the properties of your input) disabling these
-reports will also decrease running time.
+Don't report steps.
+All the programs in Gnuastro that have multiple major steps will report their 
steps for you to follow while they are operating.
+If you do not want to see these reports, you can call this option and only 
error/warning messages will be printed.
+If the steps are done very fast (depending on the properties of your input) 
disabling these reports will also decrease running time.
 
 @item --cite
-Print all necessary information to cite and acknowledge Gnuastro in your
-published papers. With this option, the programs will print the Bib@TeX{}
-entry to include in your paper for Gnuastro in general, and the particular
-program's paper (if that program comes with a separate paper). It will also
-print the necessary acknowledgment statement to add in the respective
-section of your paper and it will abort. For a more complete explanation,
-please see @ref{Acknowledgments}.
-
-Citations and acknowledgments are vital for the continued work on
-Gnuastro. Gnuastro started, and is continued, based on separate research
-projects. So if you find any of the tools offered in Gnuastro to be useful
-in your research, please use the output of this command to cite and
-acknowledge the program (and Gnuastro) in your research paper. Thank you.
-
-Gnuastro is still new, there is no separate paper only devoted to Gnuastro
-yet. Therefore currently the paper to cite for Gnuastro is the paper for
-NoiseChisel which is the first published paper introducing Gnuastro to the
-astronomical community. Upon reaching a certain point, a paper completely
-devoted to describing Gnuastro's many functionalities will be published,
-see @ref{GNU Astronomy Utilities 1.0}.
+Print all necessary information to cite and acknowledge Gnuastro in your 
published papers.
+With this option, the programs will print the Bib@TeX{} entry to include in 
your paper for Gnuastro in general, and the particular program's paper (if that 
program comes with a separate paper).
+It will also print the necessary acknowledgment statement to add in the 
respective section of your paper and it will abort.
+For a more complete explanation, please see @ref{Acknowledgments}.
+
+Citations and acknowledgments are vital for the continued work on Gnuastro.
+Gnuastro started, and is continued, based on separate research projects.
+So if you find any of the tools offered in Gnuastro to be useful in your 
research, please use the output of this command to cite and acknowledge the 
program (and Gnuastro) in your research paper.
+Thank you.
+
+Gnuastro is still new; there is no separate paper devoted only to Gnuastro yet.
+Therefore currently the paper to cite for Gnuastro is the paper for 
NoiseChisel, which is the first published paper introducing Gnuastro to the 
astronomical community.
+Upon reaching a certain point, a paper completely devoted to describing 
Gnuastro's many functionalities will be published, see @ref{GNU Astronomy 
Utilities 1.0}.
 
 @item -P
 @itemx --printparams
-With this option, Gnuastro's programs will read your command-line options
-and all the configuration files. If there is no problem (like a missing
-parameter or a value in the wrong format or range) and immediately before
-actually running, the programs will print the full list of option names,
-values and descriptions, sorted and grouped by context and abort. They will
-also report the version number, the date they were configured on your
-system and the time they were reported.
-
-As an example, you can give your full command-line options and even the
-input and output file names and finally just add @option{-P} to check if
-all the parameters are finely set. If everything is OK, you can just run
-the same command (easily retrieved from the shell history, with the top
-arrow key) and simply remove the last two characters that showed this
-option.
+With this option, Gnuastro's programs will read your command-line options and 
all the configuration files.
+If there is no problem (like a missing parameter or a value in the wrong 
format or range), then immediately before actually running, the programs will 
print the full list of option names, values and descriptions (sorted and 
grouped by context) and abort.
+They will also report the version number, the date they were configured on 
your system and the time they were reported.
 
-Since no program will actually start its processing when this option
-is called, the otherwise mandatory arguments for each program (for
-example input image or catalog files) are no longer required when you
-call this option.
+As an example, you can give your full command-line options and even the input 
and output file names and finally just add @option{-P} to check if all the 
parameters are properly set.
+If everything is OK, you can just run the same command (easily retrieved from 
the shell history, with the top arrow key) and simply remove the last two 
characters that showed this option.
+
+No program will actually start its processing when this option is called.
+The otherwise mandatory arguments for each program (for example input image or 
catalog files) are no longer required when you call this option.
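+
+For example, the hypothetical sketch below only prints the final parameter 
values and then aborts, without doing any cropping:
+@example
+$ astcrop --mode=img --center=51,51 --width=3 image.fits -P
+@end example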
 
 @item --config=STR
-Parse @option{STR} as a configuration file immediately when this option is
-confronted (see @ref{Configuration files}). The @option{--config} option
-can be called multiple times in one run of any Gnuastro program on the
-command-line or in the configuration files. In any case, it will be
-immediately read (before parsing the rest of the options on the
-command-line, or lines in a configuration file).
-
-Note that by definition, options on the command-line still take precedence
-over those in any configuration file, including the file(s) given to this
-option if they are called before it. Also see @option{--lastconfig} and
-@option{--onlyversion} on how this option can be used for reproducible
-results. You can use @option{--checkconfig} (below) to check/confirm the
-parsing of configuration files.
+Parse @option{STR} as a configuration file immediately when this option is 
confronted (see @ref{Configuration files}).
+The @option{--config} option can be called multiple times in one run of any 
Gnuastro program on the command-line or in the configuration files.
+In any case, it will be immediately read (before parsing the rest of the 
options on the command-line, or lines in a configuration file).
+
+Note that by definition, options on the command-line still take precedence 
over those in any configuration file, including the file(s) given to this 
option if they are called before it.
+Also see @option{--lastconfig} and @option{--onlyversion} on how this option 
can be used for reproducible results.
+You can use @option{--checkconfig} (below) to check/confirm the parsing of 
configuration files.
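+
+For example, a hypothetical sketch where @file{project.conf} is a custom 
configuration file for the current project:
+@example
+$ astnoisechisel image.fits --config=project.conf
+@end example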
 
 @item --checkconfig
-Print options and their values, within the command-line or configuration
-files, as they are parsed (see @ref{Configuration file precedence}). If an
-option has already been set, or is ignored by the program, this option will
-also inform you with special values like @code{--ALREADY-SET--}. Only
-options that are parsed after this option are printed, so to see the
-parsing of all input options, it is recommended to put this option
-immediately after the program name before any other options.
+Print options and their values, within the command-line or configuration 
files, as they are parsed (see @ref{Configuration file precedence}).
+If an option has already been set, or is ignored by the program, this option 
will also inform you with special values like @code{--ALREADY-SET--}.
+Only options that are parsed after this option are printed, so to see the 
parsing of all input options, it is recommended to put this option immediately 
after the program name before any other options.
 
 @cindex Debug
-This is a very good option to confirm where the value of each option is has
-been defined in scenarios where there are multiple configuration files (for
-debugging).
+This is a very good option to confirm where the value of each option has been 
defined in scenarios where there are multiple configuration files (for 
debugging).
 
 @item -S
 @itemx --setdirconf
-Update the current directory configuration file for the Gnuastro program
-and quit. The full set of command-line and configuration file options will
-be parsed and options with a value will be written in the current directory
-configuration file for this program (see @ref{Configuration files}). If the
-configuration file or its directory doesn't exist, it will be created. If a
-configuration file exists it will be replaced (after it, and all other
-configuration files have been read). In any case, the program will not run.
-
-This is the recommended method@footnote{Alternatively, you can use your
-favorite text editor.} to edit/set the configuration file for all future
-calls to Gnuastro's programs. It will internally check if your values are
-in the correct range and type and save them according to the configuration
-file format, see @ref{Configuration file format}. So if there are
-unreasonable values to some options, the program will notify you and abort
-before writing the final configuration file.
+Update the current directory configuration file for the Gnuastro program and 
quit.
+The full set of command-line and configuration file options will be parsed and 
options with a value will be written in the current directory configuration 
file for this program (see @ref{Configuration files}).
+If the configuration file or its directory doesn't exist, it will be created.
+If a configuration file exists, it will be replaced (after it, and all other configuration files, have been read).
+In any case, the program will not run.
+
+This is the recommended method@footnote{Alternatively, you can use your 
favorite text editor.} to edit/set the configuration file for all future calls 
to Gnuastro's programs.
+It will internally check if your values are in the correct range and type and 
save them according to the configuration file format, see @ref{Configuration 
file format}.
+So if there are unreasonable values to some options, the program will notify 
you and abort before writing the final configuration file.
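+For example, assuming you want all future runs of NoiseChisel in this directory to use a signal-to-noise quantile of 0.95 (a hypothetical value), you could run:
+
+@example
+$ astnoisechisel --snquant=0.95 --setdirconf
+@end example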
 
 When this option is called, the otherwise mandatory arguments, for
 example input image or catalog file(s), are no longer mandatory (since
@@ -8163,51 +6115,32 @@ the program will not run).
 
 @item -U
 @itemx --setusrconf
-Update the user configuration file and quit (see @ref{Configuration
-files}). See explanation under @option{--setdirconf} for more details.
+Update the user configuration file and quit (see @ref{Configuration files}).
+See explanation under @option{--setdirconf} for more details.
 
 @item --lastconfig
-This is the last configuration file that must be read. When this option is
-confronted in any stage of reading the options (on the command-line or in a
-configuration file), no other configuration file will be parsed, see
-@ref{Configuration file precedence} and @ref{Current directory and User
-wide}. Like all on/off options, on the command-line, this option doesn't
-take any values. But in a configuration file, it takes the values of
-@option{0} or @option{1}, see @ref{Configuration file format}. If it is
-present in a configuration file with a value of @option{0}, then all later
-occurrences of this option will be ignored.
+This is the last configuration file that must be read.
+When this option is confronted in any stage of reading the options (on the 
command-line or in a configuration file), no other configuration file will be 
parsed, see @ref{Configuration file precedence} and @ref{Current directory and 
User wide}.
+Like all on/off options, on the command-line, this option doesn't take any 
values.
+But in a configuration file, it takes the values of @option{0} or @option{1}, 
see @ref{Configuration file format}.
+If it is present in a configuration file with a value of @option{0}, then all 
later occurrences of this option will be ignored.
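+For example, a configuration file written for reproducibility might contain these two lines (recall that comments start with @key{#}, see @ref{Configuration file format}):
+
+@example
+# Don't parse any configuration files after this one.
+lastconfig 1
+@end example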
 
 
 @item --onlyversion=STR
-Only run the program if Gnuastro's version is exactly equal to @option{STR}
-(see @ref{Version numbering}). Note that it is not compared as a number,
-but as a string of characters, so @option{0}, or @option{0.0} and
-@option{0.00} are different. If the running Gnuastro version is different,
-then this option will report an error and abort as soon as it is
-confronted on the command-line or in a configuration file. If the running
-Gnuastro version is the same as @option{STR}, then the program will run as
-if this option was not called.
-
-This is useful if you want your results to be exactly reproducible and not
-mistakenly run with an updated/newer or older version of the
-program. Besides internal algorithmic/behavior changes in programs, the
-existence of options or their names might change between versions
-(especially in these earlier versions of Gnuastro).
-
-Hence, when using this option (probably in a script or in a configuration
-file), be sure to call it before other options. The benefit is that, when
-the version differs, the other options won't be parsed and you, or your
-collaborators/users, won't get errors saying an option in your
-configuration doesn't exist in the running version of the program.
-
-Here is one example of how this option can be used in conjunction with the
-@option{--lastconfig} option. Let's assume that you were satisfied with the
-results of this command: @command{astnoisechisel image.fits --snquant=0.95}
-(along with various options set in various configuration files). You can
-save the state of NoiseChisel and reproduce that exact result on
-@file{image.fits} later by following these steps (the extra spaces, and
-@key{\}, are only for easy readability, if you want to try it out, only one
-space between each token is enough).
+Only run the program if Gnuastro's version is exactly equal to @option{STR} 
(see @ref{Version numbering}).
+Note that it is not compared as a number, but as a string of characters, so @option{0}, @option{0.0} and @option{0.00} are all different.
+If the running Gnuastro version is different, then this option will report an 
error and abort as soon as it is confronted on the command-line or in a 
configuration file.
+If the running Gnuastro version is the same as @option{STR}, then the program 
will run as if this option was not called.
+
+This is useful if you want your results to be exactly reproducible and not 
mistakenly run with an updated/newer or older version of the program.
+Besides internal algorithmic/behavior changes in programs, the existence of 
options or their names might change between versions (especially in these 
earlier versions of Gnuastro).
+
+Hence, when using this option (probably in a script or in a configuration 
file), be sure to call it before other options.
+The benefit is that, when the version differs, the other options won't be 
parsed and you, or your collaborators/users, won't get errors saying an option 
in your configuration doesn't exist in the running version of the program.
+
+Here is one example of how this option can be used in conjunction with the 
@option{--lastconfig} option.
+Let's assume that you were satisfied with the results of this command: 
@command{astnoisechisel image.fits --snquant=0.95} (along with various options 
set in various configuration files).
+You can save the state of NoiseChisel and reproduce that exact result on @file{image.fits} later by following these steps (the extra spaces and @key{\} are only for easy readability; if you want to try it out, only one space between each token is enough).
 
 @example
 $ echo "onlyversion X.XX"             > reproducible.conf
@@ -8216,55 +6149,40 @@ $ astnoisechisel image.fits --snquant=0.95 -P           
 \
                                      >> reproducible.conf
 @end example
 
-@option{--onlyversion} was available from Gnuastro 0.0, so putting it
-immediately at the start of a configuration file will ensure that later,
-you (or others using different version) won't get a non-recognized option
-error in case an option was added/removed. @option{--lastconfig} will
-inform the installed NoiseChisel to not parse any other configuration
-files. This is done because we don't want the user's user-wide or system
-wide option values affecting our results. Finally, with the third command,
-which has a @option{-P} (short for @option{--printparams}), NoiseChisel
-will print all the option values visible to it (in all the configuration
-files) and the shell will append them to @file{reproduce.conf}. Hence, you
-don't have to worry about remembering the (possibly) different options in
-the different configuration files.
+@option{--onlyversion} was available from Gnuastro 0.0, so putting it immediately at the start of a configuration file will ensure that later, you (or others using a different version) won't get a non-recognized option error in case an option was added/removed.
+@option{--lastconfig} will inform the installed NoiseChisel to not parse any 
other configuration files.
+This is done because we don't want the user's user-wide or system wide option 
values affecting our results.
+Finally, with the third command, which has a @option{-P} (short for 
@option{--printparams}), NoiseChisel will print all the option values visible 
to it (in all the configuration files) and the shell will append them to 
@file{reproduce.conf}.
+Hence, you don't have to worry about remembering the (possibly) different 
options in the different configuration files.
 
-Afterwards, if you run NoiseChisel as shown below (telling it to read this
-configuration file with the @file{--config} option). You can be sure that
-there will either be an error (for version mismatch) or it will produce
-exactly the same result that you got before.
+Afterwards, if you run NoiseChisel as shown below (telling it to read this configuration file with the @option{--config} option), you can be sure that there will either be an error (for version mismatch) or it will produce exactly the same result that you got before.
 
 @example
 $ astnoisechisel --config=reproducible.conf
 @end example
 
 @item --log
-Some programs can generate extra information about their outputs in a log
-file. When this option is called in those programs, the log file will also
-be printed. If the program doesn't generate a log file, this option is
-ignored.
+Some programs can generate extra information about their outputs in a log file.
+When this option is called in those programs, the log file will also be 
printed.
+If the program doesn't generate a log file, this option is ignored.
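+For example, a call like the following (with @command{astprogname} as a placeholder for any program that generates a log file, and a hypothetical input name) will also produce the program's log file:
+
+@example
+$ astprogname input.fits --log
+@end example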
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed
-name. Therefore if two simultaneous calls (with @option{--log}) of a
-program are made in the same directory, the program will try to write to
-the same file. This will cause problems like unreasonable log file,
-undefined behavior, or a crash.
+@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed 
name.
+Therefore if two simultaneous calls (with @option{--log}) of a program are made in the same directory, both will try to write to the same file.
+This will cause problems like an unreasonable log file, undefined behavior, or a crash.
 @end cartouche
 
 @cindex CPU threads, set number
 @cindex Number of CPU threads to use
 @item -N INT
 @itemx --numthreads=INT
-Use @option{INT} CPU threads when running a Gnuastro program (see
-@ref{Multi-threaded operations}). If the value is zero (@code{0}), or this
-option is not given on the command-line or any configuration file, the
-value will be determined at run-time: the maximum number of threads
-available to the system when you run a Gnuastro program.
+Use @option{INT} CPU threads when running a Gnuastro program (see 
@ref{Multi-threaded operations}).
+If the value is zero (@code{0}), or this option is not given on the 
command-line or any configuration file, the value will be determined at 
run-time: the maximum number of threads available to the system when you run a 
Gnuastro program.
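+For example, to run NoiseChisel on only two threads (a hypothetical choice, for instance to leave the other threads free for other processes), either of these equivalent calls will work:
+
+@example
+$ astnoisechisel image.fits --numthreads=2
+$ astnoisechisel image.fits -N2
+@end example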
 
-Note that multi-threaded programming is only relevant to some programs. In
-others, this option will be ignored.
+Note that multi-threaded programming is only relevant to some programs.
+In others, this option will be ignored.
 
 @end vtable
 
@@ -8277,94 +6195,67 @@ others, this option will be ignored.
 
 @cindex Standard input
 @cindex Stream: standard input
-The most common way to feed the primary/first input dataset into a program
-is to give its filename as an argument (discussed in @ref{Arguments}). When
-you want to run a series of programs in sequence, this means that each will
-have to keep the output of each program in a separate file and re-type that
-file's name in the next command. This can be very slow and frustrating
-(mis-typing a file's name).
+The most common way to feed the primary/first input dataset into a program is 
to give its filename as an argument (discussed in @ref{Arguments}).
+When you want to run a series of programs in sequence, this means that you will have to keep the output of each program in a separate file and re-type that file's name in the next command.
+This can be very slow and frustrating (mis-typing a file's name).
 
 @cindex Standard output stream
 @cindex Stream: standard output
-To solve the problem, the founders of Unix defined pipes to directly feed
-the output of one program (its ``Standard output'' stream) into the
-``standard input'' of a next program. This removes the need to make
-temporary files between separate processes and became one of the best
-demonstrations of the Unix-way, or Unix philosophy.
-
-Every program has three streams identifying where it reads/writes non-file
-inputs/outputs: @emph{Standard input}, @emph{Standard output}, and
-@emph{Standard error}. When a program is called alone, all three are
-directed to the terminal that you are using. If it needs an input, it will
-prompt you for one and you can type it in. Or, it prints its results in the
-terminal for you to see.
-
-For example, say you have a FITS table/catalog containing the B and V band
-magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of
-galaxies along with many other columns. If you want to see only these two
-columns in your terminal, can use Gnuastro's @ref{Table} program like
-below:
+To solve the problem, the founders of Unix defined pipes to directly feed the output of one program (its ``Standard output'' stream) into the ``standard input'' of the next program.
+This removes the need to make temporary files between separate processes and 
became one of the best demonstrations of the Unix-way, or Unix philosophy.
+
+Every program has three streams identifying where it reads/writes non-file 
inputs/outputs: @emph{Standard input}, @emph{Standard output}, and 
@emph{Standard error}.
+When a program is called alone, all three are directed to the terminal that 
you are using.
+If it needs an input, it will prompt you for one and you can type it in.
+Or, it prints its results in the terminal for you to see.
+
+For example, say you have a FITS table/catalog containing the B and V band 
magnitudes (@code{MAG_B} and @code{MAG_V} columns) of a selection of galaxies 
along with many other columns.
+If you want to see only these two columns in your terminal, you can use Gnuastro's @ref{Table} program like below:
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V
 @end example
 
-Through the Unix pipe mechanism, when the shell confronts the pipe
-character (@key{|}), it connects the standard output of the program before
-the pipe, to the standard input of the program after it. So it is literally
-a ``pipe'': everything that you would see printed by the first program on
-the command (without any pipe), is now passed to the second program (and
-not seen by you).
+Through the Unix pipe mechanism, when the shell confronts the pipe character 
(@key{|}), it connects the standard output of the program before the pipe, to 
the standard input of the program after it.
+So it is literally a ``pipe'': everything that you would see printed by the 
first program on the command (without any pipe), is now passed to the second 
program (and not seen by you).
 
 @cindex AWK
 @cindex GNU AWK
-To continue the previous example, let's say you want to see the B-V
-color. To do this, you can pipe Table's output to AWK (a wonderful tool for
-processing things like plain text tables):
+To continue the previous example, let's say you want to see the B-V color.
+To do this, you can pipe Table's output to AWK (a wonderful tool for 
processing things like plain text tables):
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}'
 @end example
 
-But understanding the distribution by visually seeing all the numbers under
-each other is not too useful! You can therefore feed this single column
-information into @ref{Statistics} to give you a general feeling of the
-distribution with the same command:
+But understanding the distribution by visually seeing all the numbers under each other is not too useful!
+You can therefore feed this single-column information into @ref{Statistics} to get a general feeling of the distribution with the same command:
 
 @example
 $ asttable cat.fits -cMAG_B,MAG_V | awk '@{print $1-$2@}' | aststatistics
 @end example
 
-Gnuastro's programs that accept input from standard input, only look into
-the Standard input stream if there is no first argument. In other words,
-arguments take precedence over Standard input. When no argument is
-provided, the programs check if the standard input stream is already full
-or not (output from another program is waiting to be used). If data is
-present in the standard input stream, it is used.
+Gnuastro's programs that accept input from standard input only look into the Standard input stream if there is no first argument.
+In other words, arguments take precedence over Standard input.
+When no argument is provided, the programs check if the standard input stream 
is already full or not (output from another program is waiting to be used).
+If data is present in the standard input stream, it is used.
 
-When the standard input is empty, the program will wait
-@option{--stdintimeout} micro-seconds for you to manually enter the first
-line (ending with a new-line character, or the @key{ENTER} key, see
-@ref{Input output options}). If it detects the first line in this time,
-there is no more time limit, and you can manually write/type all the lines
-for as long as it takes. To inform the program that Standard input has
-finished, press @key{CTRL-D} after a new line. If the program doesn't catch
-the first line before the time-out finishes, it will abort with an error
-saying that no input was provided.
+When the standard input is empty, the program will wait 
@option{--stdintimeout} micro-seconds for you to manually enter the first line 
(ending with a new-line character, or the @key{ENTER} key, see @ref{Input 
output options}).
+If it detects the first line in this time, there is no more time limit, and 
you can manually write/type all the lines for as long as it takes.
+To inform the program that Standard input has finished, press @key{CTRL-D} 
after a new line.
+If the program doesn't catch the first line before the time-out finishes, it 
will abort with an error saying that no input was provided.
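+For example, in the toy command below, Statistics reads a single column of numbers from its standard input (filled by @command{printf} through the pipe) instead of a file; no time-out applies here, since the stream is already full when the program starts:
+
+@example
+$ printf "1\n2\n3\n4\n5\n" | aststatistics
+@end example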
 
 @cartouche
 @noindent
-@strong{Manual input in Standard input is discarded: } Be careful that when
-you manually fill the Standard input, the data will be discarded once the
-program finishes and reproducing the result will be impossible. Therefore
-this form of providing input is only good for temporary tests.
+@strong{Manual input in Standard input is discarded:}
+Be careful that when you manually fill the Standard input, the data will be 
discarded once the program finishes and reproducing the result will be 
impossible.
+Therefore this form of providing input is only good for temporary tests.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Standard input currently only for plain text: } Currently Standard
-input only works for plain text inputs like the example above. We will
-later allow FITS files into the programs through standard input also.
+@strong{Standard input currently only for plain text:}
+Currently Standard input only works for plain text inputs like the example 
above.
+We will later allow FITS files into the programs through standard input also.
 @end cartouche
 
 
@@ -8376,14 +6267,10 @@ later allow FITS files into the programs through 
standard input also.
 @cindex Necessary parameters
 @cindex Default option values
 @cindex File system Hierarchy Standard
-Each program needs a certain number of parameters to run. Supplying
-all the necessary parameters each time you run the program is very
-frustrating and prone to errors. Therefore all the programs read the
-values for the necessary options you have not given in the command
-line from one of several plain text files (which you can view and edit
-with any text editor). These files are known as configuration files
-and are usually kept in a directory named @file{etc/} according to the
-file system hierarchy
+Each program needs a certain number of parameters to run.
+Supplying all the necessary parameters each time you run the program is very 
frustrating and prone to errors.
+Therefore all the programs read the values for the necessary options you have 
not given in the command line from one of several plain text files (which you 
can view and edit with any text editor).
+These files are known as configuration files and are usually kept in a 
directory named @file{etc/} according to the file system hierarchy 
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard}}.
 
 @vindex --output
@@ -8391,23 +6278,12 @@ 
standard@footnote{@url{http://en.wikipedia.org/wiki/Filesystem_Hierarchy_Standar
 @cindex CPU threads, number
 @cindex Internal default value
 @cindex Number of CPU threads to use
-The thing to have in mind is that none of the programs in Gnuastro keep any
-internal default value. All the values must either be stored in one of the
-configuration files or explicitly called in the command-line. In case the
-necessary parameters are not given through any of these methods, the
-program will print a missing option error and abort. The only exception to
-this is @option{--numthreads}, whose default value is determined at
-run-time using the number of threads available to your system, see
-@ref{Multi-threaded operations}. Of course, you can still provide a default
-value for the number of threads at any of the levels below, but if you
-don't, the program will not abort. Also note that through automatic output
-name generation, the value to the @option{--output} option is also not
-mandatory on the command-line or in the configuration files for all
-programs which don't rely on that value as an input@footnote{One example of
-a program which uses the value given to @option{--output} as an input is
-ConvertType, this value specifies the type of the output through the value
-to @option{--output}, see @ref{Invoking astconvertt}.}, see @ref{Automatic
-output}.
+The thing to have in mind is that none of the programs in Gnuastro keep any 
internal default value.
+All the values must either be stored in one of the configuration files or 
explicitly called in the command-line.
+In case the necessary parameters are not given through any of these methods, 
the program will print a missing option error and abort.
+The only exception to this is @option{--numthreads}, whose default value is 
determined at run-time using the number of threads available to your system, 
see @ref{Multi-threaded operations}.
+Of course, you can still provide a default value for the number of threads at 
any of the levels below, but if you don't, the program will not abort.
+Also note that through automatic output name generation, the value to the @option{--output} option is also not mandatory on the command-line or in the configuration files for all programs which don't rely on that value as an input@footnote{One example of a program which uses the value given to @option{--output} as an input is ConvertType: the value to @option{--output} specifies the type of the output, see @ref{Invoking astconvertt}.}, see @ref{Automatic output}.
 
 
 
@@ -8422,45 +6298,30 @@ output}.
 @subsection Configuration file format
 
 @cindex Configuration file suffix
-The configuration files for each program have the standard program
-executable name with a `@file{.conf}' suffix. When you download the source
-code, you can find them in the same directory as the source code of each
-program, see @ref{Program source}.
+The configuration files for each program have the standard program executable 
name with a `@file{.conf}' suffix.
+When you download the source code, you can find them in the same directory as 
the source code of each program, see @ref{Program source}.
 
 @cindex White space character
 @cindex Configuration file format
-Any line in the configuration file whose first non-white character is a
-@key{#} is considered to be a comment and is ignored. An empty line is also
-similarly ignored. The long name of the option should be used as an
-identifier. The parameter name and parameter value have to be separated by
-any number of `white-space' characters: space, tab or vertical tab. By
-default several space characters are used. If the value of an option has
-space characters (most commonly for the @option{hdu} option), then the full
-value can be enclosed in double quotation signs (@key{"}, similar to the
-example in @ref{Arguments and options}). If it is an option without a value
-in the @option{--help} output (on/off option, see @ref{Options}), then the
-value should be @option{1} if it is to be `on' and @option{0} otherwise.
-
-In each non-commented and non-blank line, any text after the first two
-words (option identifier and value) is ignored. If an option identifier is
-not recognized in the configuration file, the name of the file, the line
-number of the unrecognized option, and the unrecognized identifier name
-will be reported and the program will abort. If a parameter is repeated
-more more than once in the configuration files, accepts only one value, and
-is not set on the command-line, then only the first value will be used, the
-rest will be ignored.
+Any line in the configuration file whose first non-white character is a 
@key{#} is considered to be a comment and is ignored.
+An empty line is also similarly ignored.
+The long name of the option should be used as an identifier.
+The parameter name and parameter value have to be separated by any number of 
`white-space' characters: space, tab or vertical tab.
+By default several space characters are used.
+If the value of an option has space characters (most commonly for the 
@option{hdu} option), then the full value can be enclosed in double quotation 
signs (@key{"}, similar to the example in @ref{Arguments and options}).
+If it is an option without a value in the @option{--help} output (on/off 
option, see @ref{Options}), then the value should be @option{1} if it is to be 
`on' and @option{0} otherwise.
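+For example, the lines below would form a valid (hypothetical) configuration file; recall that any text after the first two words of a line is ignored:
+
+@example
+# Example configuration file.
+hdu      1          The HDU/extension to read.
+quiet    1          On/off option, `1' means `on'.
+@end example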
+
+In each non-commented and non-blank line, any text after the first two words 
(option identifier and value) is ignored.
+If an option identifier is not recognized in the configuration file, the name 
of the file, the line number of the unrecognized option, and the unrecognized 
identifier name will be reported and the program will abort.
+If a parameter that accepts only one value is repeated more than once in the configuration files and is not set on the command-line, then only the first value will be used and the rest will be ignored.
 
 @cindex Writing configuration files
 @cindex Automatic configuration file writing
 @cindex Configuration files, writing
-You can build or edit any of the directories and the configuration files
-yourself using any text editor.  However, it is recommended to use the
-@option{--setdirconf} and @option{--setusrconf} options to set default
-values for the current directory or this user, see @ref{Operating mode
-options}. With these options, the values you give will be checked before
-writing in the configuration file. They will also print a set of commented
-lines guiding the reader and will also classify the options based on their
-context and write them in their logical order to be more understandable.
+You can build or edit any of the directories and the configuration files 
yourself using any text editor.
+However, it is recommended to use the @option{--setdirconf} and 
@option{--setusrconf} options to set default values for the current directory 
or this user, see @ref{Operating mode options}.
+With these options, the values you give will be checked before writing in the 
configuration file.
+They will also print a set of commented lines guiding the reader, classify the options based on their context, and write them in their logical order to be more understandable.
 
 
 @node Configuration file precedence, Current directory and User wide, 
Configuration file format, Configuration files
@@ -8469,105 +6330,68 @@ context and write them in their logical order to be 
more understandable.
 @cindex Configuration file precedence
 @cindex Configuration file directories
 @cindex Precedence, configuration files
-The option values in all the programs of Gnuastro will be filled in the
-following order. If an option only takes one value which is given in an
-earlier step, any value for that option in a later step will be
-ignored. Note that if the @option{lastconfig} option is specified in any
-step below, no other configuration files will be parsed (see @ref{Operating
-mode options}).
+The option values in all the programs of Gnuastro will be filled in the 
following order.
+If an option only takes one value which is given in an earlier step, any value 
for that option in a later step will be ignored.
+Note that if the @option{lastconfig} option is specified in any step below, no 
other configuration files will be parsed (see @ref{Operating mode options}).
 
 @enumerate
 @item
 Command-line options, for a particular run of ProgramName.
 
 @item
-@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current
-directory.
+@file{.gnuastro/astprogname.conf} is parsed by ProgramName in the current 
directory.
 
 @item
-@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the
-current directory.
+@file{.gnuastro/gnuastro.conf} is parsed by all Gnuastro programs in the 
current directory.
 
 @item
-@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the
-user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/astprogname.conf} is parsed by ProgramName in the 
user's home directory (see @ref{Current directory and User wide}).
 
 @item
-@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in
-the user's home directory (see @ref{Current directory and User wide}).
+@file{$HOME/.local/etc/gnuastro.conf} is parsed by all Gnuastro programs in 
the user's home directory (see @ref{Current directory and User wide}).
 
 @item
-@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/astprogname.conf} is parsed by ProgramName in the system-wide 
installation directory (see @ref{System wide} for @file{prefix}).
 
 @item
-@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the
-system-wide installation directory (see @ref{System wide} for
-@file{prefix}).
+@file{prefix/etc/gnuastro.conf} is parsed by all Gnuastro programs in the 
system-wide installation directory (see @ref{System wide} for @file{prefix}).
 
 @end enumerate
 
-The basic idea behind setting this progressive state of checking for
-parameter values is that separate users of a computer or separate folders
-in a user's file system might need different values for some
-parameters.
+The basic idea behind setting this progressive state of checking for parameter 
values is that separate users of a computer or separate folders in a user's 
file system might need different values for some parameters.
 
 @cartouche
 @noindent
-@strong{Checking the order:} You can confirm/check the order of parsing
-configuration files using the @option{--checkconfig} option with any
-Gnuastro program, see @ref{Operating mode options}. Just be sure to place
-this option immediately after the program name, before any other option.
+@strong{Checking the order:}
+You can confirm/check the order of parsing configuration files using the 
@option{--checkconfig} option with any Gnuastro program, see @ref{Operating 
mode options}.
+Just be sure to place this option immediately after the program name, before 
any other option.
 @end cartouche
 
-As you see above, there can also be a configuration file containing the
-common options in all the programs: @file{gnuastro.conf} (see @ref{Common
-options}). If options specific to one program are specified in this file,
-there will be unrecognized option errors, or unexpected behavior if the
-option has different behavior in another program. On the other hand, there
-is no problem with @file{astprogname.conf} containing common
-options@footnote{As an example, the @option{--setdirconf} and
-@option{--setusrconf} options will also write the common options they have
-read in their produced @file{astprogname.conf}.}.
+As you see above, there can also be a configuration file containing the common 
options in all the programs: @file{gnuastro.conf} (see @ref{Common options}).
+If options specific to one program are specified in this file, there will be 
unrecognized option errors, or unexpected behavior if the option has different 
behavior in another program.
+On the other hand, there is no problem with @file{astprogname.conf} containing 
common options@footnote{As an example, the @option{--setdirconf} and 
@option{--setusrconf} options will also write the common options they have read 
in their produced @file{astprogname.conf}.}.
 
 @cartouche
 @noindent
-@strong{Manipulating the order:} You can manipulate this order or add new
-files with the following two options which are fully described in
+@strong{Manipulating the order:} You can manipulate this order or add new 
files with the following two options which are fully described in
 @ref{Operating mode options}:
 @table @option
 @item --config
-Allows you to define any file to be parsed as a configuration file on the
-command-line or within the any other configuration file. Recall that the
-file given to @option{--config} is parsed immediately when this option is
-confronted (on the command-line or in a configuration file).
+Allows you to define any file to be parsed as a configuration file on the command-line or within any other configuration file.
+Recall that the file given to @option{--config} is parsed immediately when 
this option is confronted (on the command-line or in a configuration file).
 
 @item --lastconfig
-Allows you to stop the parsing of subsequent configuration files. Note that
-if this option is given in a configuration file, it will be fully read, so
-its position in the configuration doesn't matter (unlike
-@option{--config}).
+Allows you to stop the parsing of subsequent configuration files.
+Note that if this option is given in a configuration file, it will be fully 
read, so its position in the configuration doesn't matter (unlike 
@option{--config}).
 @end table
 @end cartouche
 
+One example of benefiting from these configuration files can be this: raw 
telescope images usually have their main image extension in the second FITS 
extension, while processed FITS images usually only have one extension.
+If your system-wide default input extension is 0 (the first), then when you 
want to work with the former group of data you have to explicitly mention it to 
the programs every time.
+With this progressive state of default values to check, you can set different 
default values for the different directories that you would like to run 
Gnuastro in for your different purposes, so you won't have to worry about this 
issue any more.
 
-One example of benefiting from these configuration files can be this: raw
-telescope images usually have their main image extension in the second FITS
-extension, while processed FITS images usually only have one extension. If
-your system-wide default input extension is 0 (the first), then when you
-want to work with the former group of data you have to explicitly mention
-it to the programs every time. With this progressive state of default
-values to check, you can set different default values for the different
-directories that you would like to run Gnuastro in for your different
-purposes, so you won't have to worry about this issue any more.
-
-The same can be said about the @file{gnuastro.conf} files: by specifying a
-behavior in this single file, all Gnuastro programs in the respective
-directory, user, or system-wide steps will behave similarly. For example to
-keep the input's directory when no specific output is given (see
-@ref{Automatic output}), or to not delete an existing file if it has the
-same name as a given output (see @ref{Input output options}).
+The same can be said about the @file{gnuastro.conf} files: by specifying a 
behavior in this single file, all Gnuastro programs in the respective 
directory, user, or system-wide steps will behave similarly.
+For example to keep the input's directory when no specific output is given 
(see @ref{Automatic output}), or to not delete an existing file if it has the 
same name as a given output (see @ref{Input output options}).
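+For example, a @file{gnuastro.conf} with the two behaviors above might contain the lines below (assuming the @option{keepinputdir} and @option{dontdelete} options of @ref{Input output options}):
+
+@example
+keepinputdir   1
+dontdelete     1
+@end example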
 
 
 @node Current directory and User wide, System wide, Configuration file 
precedence, Configuration files
@@ -8576,24 +6400,17 @@ same name as a given output (see @ref{Input output 
options}).
 @cindex @file{$HOME}
 @cindex @file{./.gnuastro/}
 @cindex @file{$HOME/.local/etc/}
-For the current (local) and user-wide directories, the configuration files
-are stored in the hidden sub-directories named @file{.gnuastro/} and
-@file{$HOME/.local/etc/} respectively. Unless you have changed it, the
-@file{$HOME} environment variable should point to your home directory. You
-can check it by running @command{$ echo $HOME}. Each time you run any of
-the programs in Gnuastro, this environment variable is read and placed in
-the above address. So if you suddenly see that your home configuration
-files are not being read, probably you (or some other program) has changed
-the value of this environment variable.
+For the current (local) and user-wide directories, the configuration files are 
stored in the hidden sub-directories named @file{.gnuastro/} and 
@file{$HOME/.local/etc/} respectively.
+Unless you have changed it, the @file{$HOME} environment variable should point 
to your home directory.
+You can check it by running @command{$ echo $HOME}.
+Each time you run any of the programs in Gnuastro, this environment variable 
is read and placed in the above address.
+So if you suddenly see that your home configuration files are not being read, it is probably because you (or some other program) have changed the value of this environment variable.
 
 @vindex --setdirconf
 @vindex --setusrconf
-Although it might cause confusions like above, this dependence on the
-@file{HOME} environment variable enables you to temporarily use a different
-directory as your home directory. This can come in handy in complicated
-situations. To set the user or current directory configuration files based
-on your command-line input, you can use the @option{--setdirconf} or
-@option{--setusrconf}, see @ref{Operating mode options}.
+Although it might cause confusion like the above, this dependence on the @file{HOME} environment variable enables you to temporarily use a different directory as your home directory.
+This can come in handy in complicated situations.
+To set the user or current directory configuration files based on your 
command-line input, you can use the @option{--setdirconf} or 
@option{--setusrconf}, see @ref{Operating mode options}.
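+For example, a hypothetical way to treat the current directory as your home directory for a single run (so the ``user wide'' configuration file is written under it) would be:
+
+@example
+$ HOME=$PWD astnoisechisel --snquant=0.95 --setusrconf
+@end example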
 
 
 
@@ -8603,28 +6420,16 @@ on your command-line input, you can use the 
@option{--setdirconf} or
 @cindex @file{prefix/etc/}
 @cindex System wide configuration files
 @cindex Configuration files, system wide
-When Gnuastro is installed, the configuration files that are shipped with
-the distribution are copied into the (possibly system wide)
-@file{prefix/etc/} directory. For more details on @file{prefix}, see
-@ref{Installation directory} (by default it is: @file{/usr/local}). This
-directory is the final place (with the lowest priority) that the programs
-in Gnuastro will check to retrieve parameter values.
+When Gnuastro is installed, the configuration files that are shipped with the 
distribution are copied into the (possibly system wide) @file{prefix/etc/} 
directory.
+For more details on @file{prefix}, see @ref{Installation directory} (by 
default it is: @file{/usr/local}).
+This directory is the final place (with the lowest priority) that the programs 
in Gnuastro will check to retrieve parameter values.
 
-If you remove an option and its value from the system wide configuration
-files, you either have to specify it in more immediate configuration files
-or set it each time in the command-line. Recall that none of the programs
-in Gnuastro keep any internal default values and will abort if they don't
-find a value for the necessary parameters (except the number of threads and
-output file name). So even though you might never expect to use an optional
-option, it safe to have it available in this system-wide configuration file
-even if you don't intend to use it frequently.
+If you remove an option and its value from the system wide configuration 
files, you either have to specify it in more immediate configuration files or 
set it each time in the command-line.
+Recall that none of the programs in Gnuastro keep any internal default values 
and will abort if they don't find a value for the necessary parameters (except 
the number of threads and output file name).
+So even though you might never expect to use an optional option, it is safe to have it available in this system-wide configuration file even if you don't intend to use it frequently.
 
-Note that in case you install Gnuastro from your distribution's
-repositories, @file{prefix} will either be set to @file{/} (the root
-directory) or @file{/usr}, so you can find the system wide configuration
-variables in @file{/etc/} or @file{/usr/etc/}. The prefix of
-@file{/usr/local/} is conventionally used for programs you install from
-source by your self as in @ref{Quick start}.
+Note that in case you install Gnuastro from your distribution's repositories, 
@file{prefix} will either be set to @file{/} (the root directory) or 
@file{/usr}, so you can find the system wide configuration variables in 
@file{/etc/} or @file{/usr/etc/}.
+The prefix of @file{/usr/local/} is conventionally used for programs you install from source yourself, as in @ref{Quick start}.
 
 
 
@@ -8642,43 +6447,30 @@ source by your self as in @ref{Quick start}.
 @cindex Book formats
 @cindex Remembering options
 @cindex Convenient book formats
-Probably the first time you read this book, it is either in the PDF
-or HTML formats. These two formats are very convenient for when you
-are not actually working, but when you are only reading. Later on,
-when you start to use the programs and you are deep in the middle of
-your work, some of the details will inevitably be forgotten. Going to
-find the PDF file (printed or digital) or the HTML webpage is a major
-distraction.
+Probably the first time you read this book, it is either in the PDF or HTML 
formats.
+These two formats are very convenient for when you are not actually working, 
but when you are only reading.
+Later on, when you start to use the programs and you are deep in the middle of 
your work, some of the details will inevitably be forgotten.
+Going to find the PDF file (printed or digital) or the HTML webpage is a major 
distraction.
 
 @cindex Online help
 @cindex Command-line help
-GNU software have a very unique set of tools for aiding your memory on
-the command-line, where you are working, depending how much of it you
-need to remember. In the past, such command-line help was known as
-``online'' help, because they were literally provided to you `on'
-the command `line'. However, nowadays the word ``online'' refers to
-something on the internet, so that term will not be used. With this
-type of help, you can resume your exciting research without taking
-your hands off the keyboard.
+GNU software has a unique set of tools for aiding your memory on the command-line, where you are working, depending on how much of it you need to remember.
+In the past, such command-line help was known as ``online'' help, because it was literally provided to you `on' the command `line'.
+However, nowadays the word ``online'' refers to something on the internet, so 
that term will not be used.
+With this type of help, you can resume your exciting research without taking 
your hands off the keyboard.
 
 @cindex Installed help methods
-Another major advantage of such command-line based help routines is
-that they are installed with the software in your computer, therefore
-they are always in sync with the executable you are actually
-running. Three of them are actually part of the executable. You don't
-have to worry about the version of the book or program. If you rely
-on external help (a PDF in your personal print or digital archive or
-HTML from the official webpage) you have to check to see if their
-versions fit with your installed program.
-
-If you only need to remember the short or long names of the options,
-@option{--usage} is advised. If it is what the options do, then
-@option{--help} is a great tool. Man pages are also provided for those
-who are use to this older system of documentation. This full book is
-also available to you on the command-line in Info format. If none of
-these seems to resolve the problems, there is a mailing list which
-enables you to get in touch with experienced Gnuastro users. In the
-subsections below each of these methods are reviewed.
+Another major advantage of such command-line based help routines is that they are installed with the software on your computer, therefore they are always in sync with the executable you are actually running.
+Three of them are actually part of the executable.
+You don't have to worry about the version of the book or program.
+If you rely on external help (a PDF in your personal print or digital archive 
or HTML from the official webpage) you have to check to see if their versions 
fit with your installed program.
+
+If you only need to remember the short or long names of the options, 
@option{--usage} is advised.
+If it is what the options do, then @option{--help} is a great tool.
+Man pages are also provided for those who are used to this older system of documentation.
+This full book is also available to you on the command-line in Info format.
+If none of these seems to resolve the problems, there is a mailing list which 
enables you to get in touch with experienced Gnuastro users.
+In the subsections below each of these methods are reviewed.
 
 
 @menu
@@ -8695,10 +6487,10 @@ subsections below each of these methods are reviewed.
 @cindex Usage pattern
 @cindex Mandatory arguments
 @cindex Optional and mandatory tokens
-If you give this option, the program will not run. It will only print a
-very concise message showing the options and arguments. Everything within
-square brackets (@option{[]}) is optional. For example here are the first
-and last two lines of Crop's @option{--usage} is shown:
+If you give this option, the program will not run.
+It will only print a very concise message showing the options and arguments.
+Everything within square brackets (@option{[]}) is optional.
+For example, here are the first and last two lines of Crop's @option{--usage} output:
 
 @example
 $ astcrop --usage
@@ -8709,77 +6501,65 @@ Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT] 
[-w INT]
             [ASCIIcatalog] FITSimage(s).fits
 @end example
 
-There are no explanations on the options, just their short and long
-names shown separately. After the program name, the short format of
-all the options that don't require a value (on/off options) is
-displayed. Those that do require a value then follow in separate
-brackets, each displaying the format of the input they want, see
-@ref{Options}. Since all options are optional, they are shown in
-square brackets, but arguments can also be optional. For example in
-this example, a catalog name is optional and is only required in some
-modes. This is a standard method of displaying optional arguments for
-all GNU software.
+There are no explanations on the options, just their short and long names 
shown separately.
+After the program name, the short format of all the options that don't require 
a value (on/off options) is displayed.
+Those that do require a value then follow in separate brackets, each 
displaying the format of the input they want, see @ref{Options}.
+Since all options are optional, they are shown in square brackets, but 
arguments can also be optional.
+In this example, a catalog name is optional and is only required in some modes.
+This is a standard method of displaying optional arguments for all GNU 
software.
 
 @node --help, Man pages, --usage, Getting help
 @subsection @option{--help}
 
 @vindex --help
-If the command-line includes this option, the program will not be
-run. It will print a complete list of all available options along with
-a short explanation. The options are also grouped by their
-context. Within each context, the options are sorted
-alphabetically. Since the options are shown in detail afterwards, the
-first line of the @option{--help} output shows the arguments and if
-they are optional or not, similar to @ref{--usage}.
-
-In the @option{--help} output of all programs in Gnuastro, the
-options for each program are classified based on context. The first
-two contexts are always options to do with the input and output
-respectively. For example input image extensions or supplementary
-input files for the inputs. The last class of options is also fixed in
-all of Gnuastro, it shows operating mode options. Most of these
-options are already explained in @ref{Operating mode options}.
+If the command-line includes this option, the program will not be run.
+It will print a complete list of all available options along with a short 
explanation.
+The options are also grouped by their context.
+Within each context, the options are sorted alphabetically.
+Since the options are shown in detail afterwards, the first line of the 
@option{--help} output shows the arguments and if they are optional or not, 
similar to @ref{--usage}.
+
+In the @option{--help} output of all programs in Gnuastro, the options for 
each program are classified based on context.
+The first two contexts are always options to do with the input and output 
respectively.
+For example, input image extensions or supplementary input files for the inputs.
+The last class of options is also fixed in all of Gnuastro: it shows the operating mode options.
+Most of these options are already explained in @ref{Operating mode options}.
 
 @cindex Long outputs
 @cindex Redirection of output
 @cindex Command-line, long outputs
-The help message will sometimes be longer than the vertical size of
-your terminal. If you are using a graphical user interface terminal
-emulator, you can scroll the terminal with your mouse, but we promised
-no mice distractions! So here are some suggestions:
+The help message will sometimes be longer than the vertical size of your 
terminal.
+If you are using a graphical user interface terminal emulator, you can scroll the terminal with your mouse, but we promised no mice distractions!
+So here are some suggestions:
 
 @itemize
 @item
 @cindex Scroll command-line
 @cindex Command-line scroll
 @cindex @key{Shift + PageUP} and @key{Shift + PageDown}
-@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll
-down. For most help output this should be enough. The problem is that
-it is limited by the number of lines that your terminal keeps in
-memory and that you can't scroll by lines, only by whole screens.
+@key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
+For most help output this should be enough.
+The problem is that it is limited by the number of lines that your terminal 
keeps in memory and that you can't scroll by lines, only by whole screens.
 
 @item
 @cindex Pipe
 @cindex @command{less}
-Pipe to @command{less}. A pipe is a form of shell re-direction. The
-@command{less} tool in Unix-like systems was made exactly for such
-outputs of any length. You can pipe (@command{|}) the output of any
-program that is longer than the screen to it and then you can scroll
-through (up and down) with its many tools. For example:
+Pipe to @command{less}.
+A pipe is a form of shell re-direction.
+The @command{less} tool in Unix-like systems was made exactly for such outputs 
of any length.
+You can pipe (@command{|}) the output of any program that is longer than the 
screen to it and then you can scroll through (up and down) with its many tools.
+For example:
 @example
 $ astnoisechisel --help | less
 @end example
 @noindent
-Once you have gone through the text, you can quit @command{less} by
-pressing the @key{q} key.
+Once you have gone through the text, you can quit @command{less} by pressing 
the @key{q} key.
 
 
 @item
 @cindex Save output to file
 @cindex Redirection of output
-Redirect to a file. This is a less convenient way, because you will
-then have to open the file in a text editor! You can do this with the
-shell redirection tool (@command{>}):
+Redirect to a file.
+This is a less convenient way, because you will then have to open the file in 
a text editor!
+You can do this with the shell redirection tool (@command{>}):
 @example
 $ astnoisechisel --help > filename.txt
 @end example
@@ -8788,10 +6568,9 @@ $ astnoisechisel --help > filename.txt
 @cindex GNU Grep
 @cindex Searching text
 @cindex Command-line searching text
-In case you have a special keyword you are looking for in the help, you
-don't have to go through the full list. GNU Grep is made for this job. For
-example if you only want the list of options whose @option{--help} output
-contains the word ``axis'' in Crop, you can run the following command:
+In case you have a special keyword you are looking for in the help, you don't 
have to go through the full list.
+GNU Grep is made for this job.
+For example if you only want the list of options whose @option{--help} output 
contains the word ``axis'' in Crop, you can run the following command:
 
 @example
 $ astcrop --help | grep axis
@@ -8801,42 +6580,28 @@ $ astcrop --help | grep axis
 @cindex Argp argument parser
 @cindex Customize @option{--help} output
 @cindex @option{--help} output customization
-If the output of this option does not fit nicely within the confines
-of your terminal, GNU does enable you to customize its output through
-the environment variable @code{ARGP_HELP_FMT}, you can set various
-parameters which specify the formatting of the help messages. For
-example if your terminals are wider than 70 spaces (say 100) and you
-feel there is too much empty space between the long options and the
-short explanation, you can change these formats by giving values to
-this environment variable before running the program with the
-@option{--help} output. You can define this environment variable in
-this manner:
+If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the environment variable @code{ARGP_HELP_FMT}: with it, you can set various parameters which specify the formatting of the help messages.
+For example if your terminals are wider than 70 spaces (say 100) and you feel 
there is too much empty space between the long options and the short 
explanation, you can change these formats by giving values to this environment 
variable before running the program with the @option{--help} output.
+You can define this environment variable in this manner:
 @example
 $ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
 @end example
 @cindex @file{.bashrc}
-This will affect all GNU programs using GNU C library's @file{argp.h}
-facilities as long as the environment variable is in memory. You can
-see the full list of these formatting parameters in the ``Argp User
-Customization'' part of the GNU C library manual. If you are more
-comfortable to read the @option{--help} outputs of all GNU software in
-your customized format, you can add your customization (similar to
-the line above, without the @command{$} sign) to your @file{~/.bashrc}
-file. This is a standard option for all GNU software.
+This will affect all GNU programs using GNU C library's @file{argp.h} 
facilities as long as the environment variable is in memory.
+You can see the full list of these formatting parameters in the ``Argp User 
Customization'' part of the GNU C library manual.
+If you are more comfortable reading the @option{--help} outputs of all GNU software in your customized format, you can add your customization (similar to the line above, without the @command{$} sign) to your @file{~/.bashrc} file.
+This is a standard option for all GNU software.
 
 @node Man pages, Info, --help, Getting help
 @subsection Man pages
 @cindex Man pages
-Man pages were the Unix method of providing command-line documentation
-to a program. With GNU Info, see @ref{Info} the usage of this method
-of documentation is highly discouraged. This is because Info provides
-a much more easier to navigate and read environment.
+Man pages were the Unix method of providing command-line documentation to a 
program.
+With GNU Info (see @ref{Info}), the usage of this method of documentation is highly discouraged.
+This is because Info provides a much easier environment to navigate and read.
 
-However, some operating systems require a man page for packages that
-are installed and some people are still used to this method of command
-line help. So the programs in Gnuastro also have Man pages which are
-automatically generated from the outputs of @option{--version} and
-@option{--help} using the GNU help2man program. So if you run
+However, some operating systems require a man page for installed packages and some people are still used to this method of command-line help.
+Therefore, the programs in Gnuastro also have man pages, which are automatically generated from the outputs of @option{--version} and @option{--help} using the GNU help2man program.
+So if you run
 @example
 $ man programname
 @end example
@@ -8853,19 +6618,12 @@ standard manner.
 
 @cindex GNU Info
 @cindex Command-line, viewing full book
-Info is the standard documentation format for all GNU software. It is
-a very useful command-line document viewing format, fully equipped
-with links between the various pages and menus and search
-capabilities. As explained before, the best thing about it is that it
-is available for you the moment you need to refresh your memory on any
-command-line tool in the middle of your work without having to take
-your hands off the keyboard. This complete book is available in Info
-format and can be accessed from anywhere on the command-line.
+Info is the standard documentation format for all GNU software.
+It is a very useful command-line document viewing format, fully equipped with links between the various pages, menus, and search capabilities.
+As explained before, the best thing about it is that it is available for you 
the moment you need to refresh your memory on any command-line tool in the 
middle of your work without having to take your hands off the keyboard.
+This complete book is available in Info format and can be accessed from 
anywhere on the command-line.
 
-To open the Info format of any installed programs or library on your
-system which has an Info format book, you can simply run the command
-below (change @command{executablename} to the executable name of the
-program or library):
+To open the Info format of any installed program or library on your system which has an Info format book, you can simply run the command below (change @command{executablename} to the executable name of the program or library):
 
 @example
 $ info executablename
@@ -8874,23 +6632,16 @@ $ info executablename
 @noindent
 @cindex Learning GNU Info
 @cindex GNU software documentation
-In case you are not already familiar with it, run @command{$ info
-info}. It does a fantastic job in explaining all its capabilities its
-self. It is very short and you will become sufficiently fluent in
-about half an hour. Since all GNU software documentation is also
-provided in Info, your whole GNU/Linux life will significantly
-improve.
+In case you are not already familiar with it, run @command{$ info info}.
+It does a fantastic job of explaining all its capabilities itself.
+It is very short and you will become sufficiently fluent in about half an hour.
+Since all GNU software documentation is also provided in Info, your whole 
GNU/Linux life will significantly improve.
 
 @cindex GNU Emacs
 @cindex GNU C library
-Once you've become an efficient navigator in Info, you can go to any
-part of this book or any other GNU software or library manual, no
-matter how long it is, in a matter of seconds. It also blends nicely
-with GNU Emacs (a text editor) and you can search manuals while you
-are writing your document or programs without taking your hands off
-the keyboard, this is most useful for libraries like the GNU C
-library. To be able to access all the Info manuals installed in your
-GNU/Linux within Emacs, type @key{Ctrl-H + i}.
+Once you've become an efficient navigator in Info, you can go to any part of 
this book or any other GNU software or library manual, no matter how long it 
is, in a matter of seconds.
+It also blends nicely with GNU Emacs (a text editor): you can search manuals while you are writing your document or programs without taking your hands off the keyboard; this is most useful for libraries like the GNU C library.
+To access all the Info manuals installed on your GNU/Linux system from within Emacs, type @key{Ctrl-H + i}.
 
 To see this whole book from the beginning in Info, you can run
 
@@ -8907,18 +6658,16 @@ $ info astprogramname
 @end example
 
 @noindent
-you will be taken to the section titled ``Invoking ProgramName'' which
-explains the inputs and outputs along with the command-line options for
-that program. Finally, if you run Info with the official program name, for
-example Crop or NoiseChisel:
+you will be taken to the section titled ``Invoking ProgramName'' which 
explains the inputs and outputs along with the command-line options for that 
program.
+Finally, if you run Info with the official program name, for example Crop or 
NoiseChisel:
 
 @example
 $ info ProgramName
 @end example
 
 @noindent
-you will be taken to the top section which introduces the
-program. Note that in all cases, Info is not case sensitive.
+you will be taken to the top section which introduces the program.
+Note that in all cases, Info is not case sensitive.
 
 
 
@@ -8927,23 +6676,17 @@ program. Note that in all cases, Info is not case 
sensitive.
 
 @cindex help-gnuastro mailing list
 @cindex Mailing list: help-gnuastro
-Gnuastro maintains the help-gnuastro mailing list for users to ask any
-questions related to Gnuastro. The experienced Gnuastro users and some
-of its developers are subscribed to this mailing list and your email
-will be sent to them immediately. However, when contacting this
-mailing list please have in mind that they are possibly very busy and
-might not be able to answer immediately.
+Gnuastro maintains the help-gnuastro mailing list for users to ask any 
questions related to Gnuastro.
+Experienced Gnuastro users and some of its developers are subscribed to this mailing list, so your email will be seen by them immediately.
+However, when contacting this mailing list, please keep in mind that they are possibly very busy and might not be able to answer immediately.
 
 @cindex Mailing list archives
 @cindex @code{help-gnuastro@@gnu.org}
-To ask a question from this mailing list, send a mail to
-@code{help-gnuastro@@gnu.org}. Anyone can view the mailing list
-archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}. It
-is best that before sending a mail, you search the archives to see if
-anyone has asked a question similar to yours. If you want to make a
-suggestion or report a bug, please don't send a mail to this mailing
-list. We have other mailing lists and tools for those purposes, see
-@ref{Report a bug} or @ref{Suggest new feature}.
+To ask a question on this mailing list, send a mail to @code{help-gnuastro@@gnu.org}.
+Anyone can view the mailing list archives at @url{http://lists.gnu.org/archive/html/help-gnuastro/}.
+It is best to search the archives before sending a mail, to see if anyone has already asked a question similar to yours.
+If you want to make a suggestion or report a bug, please don't send a mail to this mailing list.
+We have other mailing lists and tools for those purposes; see @ref{Report a bug} or @ref{Suggest new feature}.
 
 
 
@@ -8957,80 +6700,54 @@ list. We have other mailing lists and tools for those 
purposes, see
 @node Installed scripts, Multi-threaded operations, Getting help, Common 
program behavior
 @section Installed scripts
 
-Gnuastro's programs (introduced in previous chapters) are designed to be
-highly modular and thus mainly contain lower-level operations on the
-data. However, in many contexts, higher-level operations (for example a
-sequence of calls to multiple Gnuastro programs, or a special way of
-running a program and using the outputs) are also very similar between
-various projects.
+Gnuastro's programs (introduced in previous chapters) are designed to be 
highly modular and thus mainly contain lower-level operations on the data.
+However, in many contexts, higher-level operations (for example a sequence of 
calls to multiple Gnuastro programs, or a special way of running a program and 
using the outputs) are also very similar between various projects.
 
-To facilitate data analysis on these higher-level steps also, Gnuastro also
-installs some scripts on your system with the (@code{astscript-}) prefix
-(in contrast to the other programs that only have the @code{ast}
-prefix).
+To also facilitate data analysis at these higher levels, Gnuastro installs some scripts on your system with the @code{astscript-} prefix (in contrast to the other programs, which only have the @code{ast} prefix).
 
 @cindex GNU Bash
-Like all of Gnuastro's source code, these scripts are also heavily
-commented. They are written in GNU Bash, which doesn't need
-compilation. Therefore, if you open the installed scripts in a text editor,
-you can actually read them@footnote{Gnuastro's installed programs (those
-only starting with @code{ast}) aren't human-readable. They are written in C
-and are thus compiled (optimized in binary CPU instructions that will be
-given directly to your CPU). Because they don't need an interpreter like
-Bash on every run, they are much faster and more independent than
-scripts. To read the source code of the programs, look into the
-@file{bin/progname} directory of Gnuastro's source (@ref{Downloading the
-source}). If you would like to read more about why C was chosen for the
-programs, please see @ref{Why C}.}. Bash is the same language that is
-mainly used when typing on the command-line. Because of these factors, Bash
-is much more widely known and used than C (the language of other Gnuastro
-programs). Gnuastro's installed scripts also do higher-level operations, so
-customizing these scripts for a special project will be more common than
-the programs. You can always inspect them (to customize, check, or educate
-your self) with this command (just replace @code{emacs} with your favorite
-text editor):
+Like all of Gnuastro's source code, these scripts are also heavily commented.
+They are written in GNU Bash, which doesn't need compilation.
+Therefore, if you open the installed scripts in a text editor, you can 
actually read them@footnote{Gnuastro's installed programs (those only starting 
with @code{ast}) aren't human-readable.
+They are written in C and are thus compiled (optimized in binary CPU 
instructions that will be given directly to your CPU).
+Because they don't need an interpreter like Bash on every run, they are much 
faster and more independent than scripts.
+To read the source code of the programs, look into the @file{bin/progname} 
directory of Gnuastro's source (@ref{Downloading the source}).
+If you would like to read more about why C was chosen for the programs, please 
see @ref{Why C}.}.
+Bash is the same language that is mainly used when typing on the command-line.
+Because of these factors, Bash is much more widely known and used than C (the 
language of other Gnuastro programs).
+Gnuastro's installed scripts also do higher-level operations, so customizing these scripts for a special project will be more common than customizing the programs.
+You can always inspect them (to customize, check, or educate yourself) with this command (just replace @code{emacs} with your favorite text editor):
 
 @example
 $ emacs $(which astscript-NAME)
 @end example
 
-These scripts also accept options and are in many ways similar to the
-programs (see @ref{Common options}) with some minor differences:
+These scripts also accept options and are in many ways similar to the programs 
(see @ref{Common options}) with some minor differences:
 
 @itemize
 @item
-Currently they don't accept configuration files themselves. However, the
-configuration files of the Gnuastro programs they call are indeed parsed
-and used by those programs.
+Currently they don't accept configuration files themselves.
+However, the configuration files of the Gnuastro programs they call are indeed 
parsed and used by those programs.
 
-As a result, they don't have the following options: @option{--checkconfig},
-@option{--config}, @option{--lastconfig}, @option{--onlyversion},
-@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
+As a result, they don't have the following options: @option{--checkconfig}, 
@option{--config}, @option{--lastconfig}, @option{--onlyversion}, 
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
 
 @item
-They don't directly allocate any memory, so there is no
-@option{--minmapsize}.
+They don't directly allocate any memory, so there is no @option{--minmapsize}.
 
 @item
-They don't have an independent @option{--usage} option: when called with
-@option{--usage}, they just recommend running @option{--help}.
+They don't have an independent @option{--usage} option: when called with 
@option{--usage}, they just recommend running @option{--help}.
 
 @item
-The output of @option{--help} is not configurable like the programs (see
-@ref{--help}).
+The output of @option{--help} is not configurable like the programs (see 
@ref{--help}).
 
 @item
 @cindex GNU AWK
 @cindex GNU SED
-The scripts will commonly use your installed Bash and other basic
-command-line tools (for example AWK or SED). Different systems have
-different versions and implementations of these basic tools (for example
-GNU/Linux systems use GNU AWK and GNU SED which are far more advanced and
-up to date then the minimalist AWK and SED of most other
-systems). Therefore, unexpected errors in these tools might come up when
-you run these scripts. We will try our best to write these scripts in a
-portable way. However, if you do confront such strange errors, please
-submit a bug report so we fix it (see @ref{Report a bug}).
+The scripts will commonly use your installed Bash and other basic command-line 
tools (for example AWK or SED).
+Different systems have different versions and implementations of these basic tools (for example, GNU/Linux systems use GNU AWK and GNU SED, which are far more advanced and up to date than the minimalist AWK and SED of most other systems).
+Therefore, unexpected errors in these tools might come up when you run these scripts.
+We will try our best to write these scripts in a portable way.
+However, if you do confront such strange errors, please submit a bug report so we can fix it (see @ref{Report a bug}).
 
 @end itemize
 
@@ -9055,26 +6772,18 @@ submit a bug report so we fix it (see @ref{Report a 
bug}).
 @cindex Multi-threaded programs
 @cindex Using multiple CPU cores
 @cindex Simultaneous multithreading
-Some of the programs benefit significantly when you use all the threads
-your computer's CPU has to offer to your operating system. The number of
-threads available can be larger than the number of physical (hardware)
-cores in the CPU (also known as Simultaneous multithreading). For example,
-in Intel's CPUs (those that implement its Hyper-threading technology) the
-number of threads is usually double the number of physical cores in your
-CPU. On a GNU/Linux system, the number of threads available can be found
-with the command @command{$ nproc} command (part of GNU Coreutils).
+Some of the programs benefit significantly when you use all the threads your 
computer's CPU has to offer to your operating system.
+The number of threads available can be larger than the number of physical (hardware) cores in the CPU (a feature known as simultaneous multithreading).
+For example, in Intel's CPUs (those that implement its Hyper-threading technology), the number of threads is usually double the number of physical cores in your CPU.
+On a GNU/Linux system, the number of available threads can be found with the @command{nproc} command (part of GNU Coreutils).
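+
+For example, on a hypothetical system with 4 physical cores and simultaneous multithreading enabled, you would see something like this:
+
+@example
+$ nproc
+8
+@end example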
 
 @vindex --numthreads
 @cindex Number of threads available
 @cindex Available number of threads
 @cindex Internally stored option value
-Gnuastro's programs can find the number of threads available to your system
-internally at run-time (when you execute the program). However, if a value
-is given to the @option{--numthreads} option, the given number will be
-used, see @ref{Operating mode options} and @ref{Configuration files} for ways 
to
-use this option. Thus @option{--numthreads} is the only common option in
-Gnuastro's programs with a value that doesn't have to be specified anywhere
-on the command-line or in the configuration files.
+Gnuastro's programs can find the number of threads available to your system 
internally at run-time (when you execute the program).
+However, if a value is given to the @option{--numthreads} option, the given number will be used; see @ref{Operating mode options} and @ref{Configuration files} for ways to use this option.
+Thus @option{--numthreads} is the only common option in Gnuastro's programs 
with a value that doesn't have to be specified anywhere on the command-line or 
in the configuration files.
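+
+For example, the command below would force a hypothetical run (the program and image name are only for illustration) to use only four threads:
+
+@example
+$ astnoisechisel image.fits --numthreads=4
+@end example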
 
 @menu
 * A note on threads::           Caution and suggestion on using threads.
@@ -9087,38 +6796,23 @@ on the command-line or in the configuration files.
 @cindex Using multiple threads
 @cindex Best use of CPU threads
 @cindex Efficient use of CPU threads
-Spinning off threads is not necessarily the most efficient way to run an
-application. Creating a new thread isn't a cheap operation for the
-operating system. It is most useful when the input data are fixed and you
-want the same operation to be done on parts of it. For example one input
-image to Crop and multiple crops from various parts of it. In this fashion,
-the image is loaded into memory once, all the crops are divided between the
-number of threads internally and each thread cuts out those parts which are
-assigned to it from the same image. On the other hand, if you have multiple
-images and you want to crop the same region(s) out of all of them, it is
-much more efficient to set @option{--numthreads=1} (so no threads spin off)
-and run Crop multiple times simultaneously, see @ref{How to run
-simultaneous operations}.
+Spinning off threads is not necessarily the most efficient way to run an 
application.
+Creating a new thread isn't a cheap operation for the operating system.
+It is most useful when the input data are fixed and you want the same 
operation to be done on parts of it.
+For example, feeding one input image to Crop and cutting out multiple crops from various parts of it.
+In this fashion, the image is loaded into memory once; all the crops are divided between the threads internally, and each thread cuts out the parts assigned to it from the same image.
+On the other hand, if you have multiple images and you want to crop the same 
region(s) out of all of them, it is much more efficient to set 
@option{--numthreads=1} (so no threads spin off) and run Crop multiple times 
simultaneously, see @ref{How to run simultaneous operations}.
 
 @cindex Wall-clock time
-You can check the boost in speed by first running a program on one of the
-data sets with the maximum number of threads and another time (with
-everything else the same) and only using one thread. You will notice that
-the wall-clock time (reported by most programs at their end) in the former
-is longer than the latter divided by number of physical CPU cores (not
-threads) available to your operating system. Asymptotically these two times
-can be equal (most of the time they aren't). So limiting the programs to
-use only one thread and running them independently on the number of
-available threads will be more efficient.
+You can check the boost in speed by running a program on one of the data sets twice: once with the maximum number of threads, and once (with everything else the same) using only one thread.
+You will notice that the wall-clock time (reported by most programs at their end) of the former is longer than that of the latter divided by the number of physical CPU cores (not threads) available to your operating system.
+Asymptotically, these two times can be equal (most of the time they aren't).
+So limiting the programs to use only one thread and running them independently on the number of available threads will be more efficient.
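+
+As a minimal sketch of such a comparison (the program and image names here are hypothetical), you could compare these two runs:
+
+@example
+$ astnoisechisel image.fits                 ## all available threads
+$ astnoisechisel image.fits --numthreads=1  ## only one thread
+@end example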
 
 @cindex System Cache
 @cindex Cache, system
-Note that the operating system keeps a cache of recently processed
-data, so usually, the second time you process an identical data set
-(independent of the number of threads used), you will get faster
-results. In order to make an unbiased comparison, you have to first
-clean the system's cache with the following command between the two
-runs.
+Note that the operating system keeps a cache of recently processed data, so 
usually, the second time you process an identical data set (independent of the 
number of threads used), you will get faster results.
+In order to make an unbiased comparison, you have to first clean the system's 
cache with the following command between the two runs.
 
 @example
 $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
@@ -9129,14 +6823,10 @@ $ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
 @strong{SUMMARY: Should I use multiple threads?} Depends:
 @itemize
 @item
-If you only have @strong{one} data set (image in most cases!), then
-yes, the more threads you use (with a maximum of the number of threads
-available to your OS) the faster you will get your results.
+If you only have @strong{one} data set (image in most cases!), then yes, the more threads you use (up to the number of threads available to your OS), the faster you will get your results.
 
 @item
-If you want to run the same operation on @strong{multiple} data sets, it is
-best to set the number of threads to 1 and use Make, or GNU Parallel, as
-explained in @ref{How to run simultaneous operations}.
+If you want to run the same operation on @strong{multiple} data sets, it is 
best to set the number of threads to 1 and use Make, or GNU Parallel, as 
explained in @ref{How to run simultaneous operations}.
 @end itemize
 @end cartouche
 
@@ -9147,37 +6837,25 @@ explained in @ref{How to run simultaneous operations}.
 @node How to run simultaneous operations,  , A note on threads, Multi-threaded 
operations
 @subsection How to run simultaneous operations
 
-There are two@footnote{A third way would be to open multiple terminal
-emulator windows in your GUI, type the commands separately on each and
-press @key{Enter} once on each terminal, but this is far too frustrating,
-tedious and prone to errors. It's therefore not a realistic solution when
-tens, hundreds or thousands of operations (your research targets,
-multiplied by the operations you do on each) are to be done.} approaches to
-simultaneously execute a program: using GNU Parallel or Make (GNU Make is
-the most common implementation). The first is very useful when you only
-want to do one job multiple times and want to get back to your work without
-actually keeping the command you ran. The second is usually for more
-important operations, with lots of dependencies between the different
-products (for example a full scientific research).
+There are two@footnote{A third way would be to open multiple terminal emulator 
windows in your GUI, type the commands separately on each and press @key{Enter} 
once on each terminal, but this is far too frustrating, tedious and prone to 
errors.
+It's therefore not a realistic solution when tens, hundreds or thousands of 
operations (your research targets, multiplied by the operations you do on each) 
are to be done.} approaches to simultaneously execute a program: using GNU 
Parallel or Make (GNU Make is the most common implementation).
+The first is very useful when you only want to do one job multiple times and 
want to get back to your work without actually keeping the command you ran.
+The second is usually for more important operations, with lots of dependencies between the different products (for example a full scientific research project).
 
 @table @asis
 
 @item GNU Parallel
 @cindex GNU Parallel
-When you only want to run multiple instances of a command on different
-threads and get on with the rest of your work, the best method is to
-use GNU parallel. Surprisingly GNU Parallel is one of the few GNU
-packages that has no Info documentation but only a Man page, see
-@ref{Info}. So to see the documentation after installing it please run
+When you only want to run multiple instances of a command on different threads and get on with the rest of your work, the best method is to use GNU Parallel.
+Surprisingly, GNU Parallel is one of the few GNU packages that has no Info documentation, only a man page (see @ref{Info}).
+So to see the documentation after installing it, please run
 
 @example
 $ man parallel
 @end example
 @noindent
-As an example, let's assume we want to crop a region fixed on the
-pixels (500, 600) with the default width from all the FITS images in
-the @file{./data} directory ending with @file{sci.fits} to the current
-directory. To do this, you can run:
+As an example, let's assume we want to crop a region fixed on the pixel (500, 600) with the default width from all the FITS images in the @file{./data} directory ending with @file{sci.fits}, writing the outputs to the current directory.
+To do this, you can run:
 
 @example
 $ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: \
@@ -9185,62 +6863,38 @@ $ parallel astcrop --numthreads=1 --xc=500 --yc=600 ::: 
\
 @end example
 
 @noindent
-GNU Parallel can help in many more conditions, this is one of the
-simplest, see the man page for lots of other examples. For absolute
-beginners: the backslash (@command{\}) is only a line breaker to fit
-nicely in the page. If you type the whole command in one line, you
-should remove it.
+GNU Parallel can help in many more situations (this is one of the simplest); see the man page for lots of other examples.
+For absolute beginners: the backslash (@command{\}) is only a line breaker to 
fit nicely in the page.
+If you type the whole command in one line, you should remove it.
 
 @item Make
 @cindex Make
-Make is a program for building ``targets'' (e.g., files) using ``recipes''
-(a set of operations) when their known ``prerequisites'' (other files) have
-been updated. It elegantly allows you to define dependency structures for
-building your final output and updating it efficiently when the inputs
-change. It is the most common infra-structure to build software
-today.
-
-Scientific research methodology is very similar to software development:
-you start by testing a hypothesis on a small sample of objects/targets with
-a simple set of steps. As you are able to get promising results, you
-improve the method and use it on a larger, more general, sample. In the
-process, you will confront many issues that have to be corrected (bugs in
-software development jargon). Make a wonderful tool to manage this style of
-development. It has been used to make reproducible papers, for example see
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction
-pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's
-programs).
+Make is a program for building ``targets'' (e.g., files) using ``recipes'' (a 
set of operations) when their known ``prerequisites'' (other files) have been 
updated.
+It elegantly allows you to define dependency structures for building your 
final output and updating it efficiently when the inputs change.
+It is the most common infrastructure for building software today.
+
+Scientific research methodology is very similar to software development: you 
start by testing a hypothesis on a small sample of objects/targets with a 
simple set of steps.
+As you are able to get promising results, you improve the method and use it on 
a larger, more general, sample.
+In the process, you will confront many issues that have to be corrected (bugs 
in software development jargon).
+Make is a wonderful tool to manage this style of development.
+It has been used to make reproducible papers; for example, see @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
 
 @cindex GNU Make
-GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most
-common implementation which (similar to nearly all GNU programs, comes with
-a wonderful
-manual@footnote{@url{https://www.gnu.org/software/make/manual/}}). Make is
-very basic and simple, and thus the manual is short (the most important
-parts are in the first roughly 100 pages) and easy to read/understand.
+GNU Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common implementation, which (similar to nearly all GNU programs) comes with a wonderful manual@footnote{@url{https://www.gnu.org/software/make/manual/}}.
+Make is very basic and simple, and thus the manual is short (the most important parts are in roughly the first 100 pages) and easy to read/understand.
 
-Make comes with a @option{--jobs} (@option{-j}) option which allows you to
-specify the maximum number of jobs that can be done simultaneously. For
-example if you have 8 threads available to your operating system. You can
-run:
+Make comes with a @option{--jobs} (@option{-j}) option which allows you to specify the maximum number of jobs that can be done simultaneously.
+For example, if you have 8 threads available to your operating system, you can run:
 
 @example
 $ make -j8
 @end example
 
-With this command, Make will process your @file{Makefile} and create all
-the targets (can be thousands of FITS images for example) simultaneously on
-8 threads, while fully respecting their dependencies (only building a
-file/target when its prerequisites are successfully built). Make is thus
-strongly recommended for managing scientific research where robustness,
-archiving, reproducibility and speed@footnote{Besides its multi-threaded
-capabilities, Make will only re-build those targets that depend on a change
-you have made, not the whole work. For example, if you have set the
-prerequisites properly, you can easily test the changing of a parameter on
-your paper's results without having to re-do everything (which is much
-faster). This allows you to be much more productive in easily checking
-various ideas/assumptions of the different stages of your research and thus
-produce a more robust result for your exciting science.} are important.
+With this command, Make will process your @file{Makefile} and create all the targets (which can be thousands of FITS images, for example) simultaneously on 8 threads, while fully respecting their dependencies (only building a file/target when its prerequisites are successfully built).
+Make is thus strongly recommended for managing scientific research where 
robustness, archiving, reproducibility and speed@footnote{Besides its 
multi-threaded capabilities, Make will only re-build those targets that depend 
on a change you have made, not the whole work.
+For example, if you have set the prerequisites properly, you can easily test 
the changing of a parameter on your paper's results without having to re-do 
everything (which is much faster).
+This allows you to be much more productive in easily checking various 
ideas/assumptions of the different stages of your research and thus produce a 
more robust result for your exciting science.} are important.
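+
+As a minimal, hypothetical sketch of such a @file{Makefile} (the file names and crop positions are only for illustration; recipe lines must start with a @key{TAB}):
+
+@example
+## Each crop only depends on its own input image, so with
+## `make -j2' the two targets can be built simultaneously.
+all: a_crop.fits b_crop.fits
+
+a_crop.fits: data/a.fits
+        astcrop --xc=500 --yc=600 --output=a_crop.fits data/a.fits
+
+b_crop.fits: data/b.fits
+        astcrop --xc=500 --yc=600 --output=b_crop.fits data/b.fits
+@end example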
 
 @end table
 
@@ -9253,77 +6907,48 @@ produce a more robust result for your exciting 
science.} are important.
 
 @cindex Bit
 @cindex Type
-At the lowest level, the computer stores everything in terms of @code{1} or
-@code{0}. For example, each program in Gnuastro, or each astronomical image
-you take with the telescope is actually a string of millions of these zeros
-and ones. The space required to keep a zero or one is the smallest unit of
-storage, and is known as a @emph{bit}. However, understanding and
-manipulating this string of bits is extremely hard for most
-people. Therefore, different standards are defined to package the bits into
-separate @emph{type}s with a fixed interpretation of the bits in each
-package.
+At the lowest level, the computer stores everything in terms of @code{1} or 
@code{0}.
+For example, each program in Gnuastro, or each astronomical image you take with the telescope, is actually a string of millions of these zeros and ones.
+The space required to keep a zero or one is the smallest unit of storage, and 
is known as a @emph{bit}.
+However, understanding and manipulating this string of bits is extremely hard 
for most people.
+Therefore, different standards are defined to package the bits into separate 
@emph{type}s with a fixed interpretation of the bits in each package.
 
 @cindex Byte
 @cindex Signed integer
 @cindex Unsigned integer
 @cindex Integer, Signed
-To store numbers, the most basic standard/type is for integers
-(@mymath{..., -2, -1, 0, 1, 2, ...}).  The common integer types are 8, 16,
-32, and 64 bits wide (more bits will give larger limits). Each bit
-corresponds to a power of 2 and they are summed to create the final
-number. In the integer types, for each width there are two standards for
-reading the bits: signed and unsigned. In the `signed' convention, one bit
-is reserved for the sign (stating that the integer is positive or
-negative). The `unsigned' integers use that bit in the actual number and
-thus contain only positive numbers (starting from zero).
-
-Therefore, at the same number of bits, both signed and unsigned integers
-can allow the same number of integers, but the positive limit of the
-@code{unsigned} types is double their @code{signed} counterparts with the
-same width (at the expense of not having negative numbers). When the
-context of your work doesn't involve negative numbers (for example
-counting, where negative is not defined), it is best to use the
-@code{unsigned} types. For the full numerical range of all integer types,
-see below.
-
-Another standard of converting a given number of bits to numbers is the
-floating point standard, this standard can @emph{approximately} store any
-real number with a given precision. There are two common floating point
-types: 32-bit and 64-bit, for single and double precision floating point
-numbers respectively. The former is sufficient for data with less than 8
-significant decimal digits (most astronomical data), while the latter is
-good for less than 16 significant decimal digits. The representation of
-real numbers as bits is much more complex than integers. If you are
-interested to learn more about it, you can start with the
-@url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
-
-Practically, you can use Gnuastro's Arithmetic program to convert/change
-the type of an image/datacube (see @ref{Arithmetic}), or Gnuastro Table
-program to convert a table column's data type (see @ref{Column
-arithmetic}). Conversion of a dataset's type is necessary in some
-contexts. For example the program/library, that you intend to feed the data
-into, only accepts floating point values, but you have an integer
-image/column. Another situation that conversion can be helpful is when you
-know that your data only has values that fit within @code{int8} or
-@code{uint16}. However it is currently formatted in the @code{float64}
-type.
-
-The important thing to consider is that operations involving wider,
-floating point, or signed types can be significantly slower than
-smaller-width, integer, or unsigned types respectively. Note that besides
-speed, a wider type also requires much more storage space (by 4 or 8
-times). Therefore, when you confront such situations that can be optimized
-and want to store/archive/transfer the data, it is best to use the most
-efficient type. For example if your dataset (image or table column) only
-has positive integers less than 65535, store it as an unsigned 16-bit
-integer for faster processing, faster transfer, and less storage space.
-
-The short and long names for the recognized numeric data types in Gnuastro
-are listed below. Both short and long names can be used when you want to
-specify a type. For example, as a value to the common option
-@option{--type} (see @ref{Input output options}), or in the information
-comment lines of @ref{Gnuastro text table format}. The ranges listed below
-are inclusive.
+To store numbers, the most basic standard/type is for integers (@mymath{..., 
-2, -1, 0, 1, 2, ...}).
+The common integer types are 8, 16, 32, and 64 bits wide (more bits will give 
larger limits).
+Each bit corresponds to a power of 2 and they are summed to create the final 
number.
+In the integer types, for each width there are two standards for reading the 
bits: signed and unsigned.
+In the `signed' convention, one bit is reserved for the sign (stating that the 
integer is positive or negative).
+The `unsigned' integers use that bit in the actual number and thus contain 
only positive numbers (starting from zero).
+
+Therefore, with the same number of bits, signed and unsigned integers can represent the same number of distinct values, but the positive limit of the @code{unsigned} types is double that of their @code{signed} counterparts of the same width (at the expense of not having negative numbers).
+For example, an unsigned 8-bit integer can cover 0 to 255, while a signed 8-bit integer covers -128 to 127.
+When the context of your work doesn't involve negative numbers (for example counting, where negative is not defined), it is best to use the @code{unsigned} types.
+For the full numerical range of all integer types, see below.
+
+Another standard for converting a given number of bits to numbers is the floating point standard, which can @emph{approximately} store any real number with a given precision.
+There are two common floating point types: 32-bit and 64-bit, for single and double precision floating point numbers respectively.
+The former is sufficient for data with less than 8 significant decimal digits (most astronomical data), while the latter is good for less than 16 significant decimal digits.
+The representation of real numbers as bits is much more complex than that of integers.
+If you are interested in learning more about it, you can start with the @url{https://en.wikipedia.org/wiki/Floating_point, Wikipedia article}.
+
+Practically, you can use Gnuastro's Arithmetic program to convert/change the type of an image/datacube (see @ref{Arithmetic}), or the Gnuastro Table program to convert a table column's data type (see @ref{Column arithmetic}).
+Conversion of a dataset's type is necessary in some contexts.
+For example, the program/library that you intend to feed the data into may only accept floating point values, but you have an integer image/column.
+Another situation where conversion can be helpful is when you know that your data only has values that fit within @code{int8} or @code{uint16}, but it is currently formatted in the @code{float64} type.
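+
+For example, with Arithmetic, a hypothetical conversion of a @code{float64} image to @code{uint16} (the file names are only for illustration) could look like this:
+
+@example
+$ astarithmetic image.fits uint16 --output=converted.fits
+@end example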
+
+The important thing to consider is that operations involving wider, floating 
point, or signed types can be significantly slower than smaller-width, integer, 
or unsigned types respectively.
+Note that besides speed, a wider type also requires much more storage space 
(by 4 or 8 times).
+Therefore, when you confront such situations that can be optimized and want to 
store/archive/transfer the data, it is best to use the most efficient type.
+For example, if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
+
+The short and long names for the recognized numeric data types in Gnuastro are 
listed below.
+Both short and long names can be used when you want to specify a type.
+For example, as a value to the common option @option{--type} (see @ref{Input 
output options}), or in the information comment lines of @ref{Gnuastro text 
table format}.
+The ranges listed below are inclusive.
 
 @table @code
 @item u8
@@ -9368,34 +6993,25 @@ are inclusive.
 
 @item f32
 @itemx float32
-32-bit (single-precision) floating point types. The maximum (minimum is its
-negative) possible value is
-@mymath{3.402823\times10^{38}}. Single-precision floating points can
-accurately represent a floating point number up to @mymath{\sim7.2}
-significant decimals. Given the heavy noise in astronomical data, this is
-usually more than sufficient for storing results.
+32-bit (single-precision) floating point types.
+The maximum (minimum is its negative) possible value is 
@mymath{3.402823\times10^{38}}.
+Single-precision floating points can accurately represent a floating point 
number up to @mymath{\sim7.2} significant decimals.
+Given the heavy noise in astronomical data, this is usually more than 
sufficient for storing results.
 
 @item f64
 @itemx float64
-64-bit (double-precision) floating point types. The maximum (minimum is its
-negative) possible value is @mymath{\sim10^{308}}. Double-precision
-floating points can accurately represent a floating point number
-@mymath{\sim15.9} significant decimals. This is usually good for processing
-(mixing) the data internally, for example a sum of single precision data
-(and later storing the result as @code{float32}).
+64-bit (double-precision) floating point types.
+The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
+Double-precision floating points can accurately represent a floating point number up to @mymath{\sim15.9} significant decimals.
+This is usually good for processing (mixing) the data internally, for example 
a sum of single precision data (and later storing the result as @code{float32}).
 @end table
 
 @cartouche
 @noindent
-@strong{Some file formats don't recognize all types.} For example the FITS
-standard (see @ref{Fits}) does not define @code{uint64} in binary tables or
-images. When a type is not acceptable for output into a given file format,
-the respective Gnuastro program or library will let you know and abort. On
-the command-line, you can convert the numerical type of an image, or table
-column into another type with @ref{Arithmetic} or @ref{Table}
-respectively. If you are writing your own program, you can use the
-@code{gal_data_copy_to_new_type()} function in Gnuastro's library, see
-@ref{Copying datasets}.
+@strong{Some file formats don't recognize all types.} For example, the FITS standard (see @ref{Fits}) does not define @code{uint64} in binary tables or images.
+When a type is not acceptable for output into a given file format, the respective Gnuastro program or library will let you know and abort.
+On the command-line, you can convert the numerical type of an image or table column into another type with @ref{Arithmetic} or @ref{Table} respectively.
+If you are writing your own program, you can use the @code{gal_data_copy_to_new_type()} function in Gnuastro's library; see @ref{Copying datasets}.
 @end cartouche
 
 
@@ -9403,51 +7019,30 @@ respectively. If you are writing your own program, you 
can use the
 @node Tables, Tessellation, Numeric data types, Common program behavior
 @section Tables
 
-``A table is a collection of related data held in a structured format
-within a database. It consists of columns, and rows.'' (from
-Wikipedia). Each column in the table contains the values of one property
-and each row is a collection of properties (columns) for one target
-object. For example, let's assume you have just ran MakeCatalog (see
-@ref{MakeCatalog}) on an image to measure some properties for the labeled
-regions (which might be detected galaxies for example) in the image. For
-each labeled region (detected galaxy), there will be a @emph{row} which
-groups its measured properties as @emph{columns}, one column for each
-property. One such property can be the object's magnitude, which is the sum
-of pixels with that label, or its center can be defined as the
-light-weighted average value of those pixels. Many such properties can be
-derived from the raw pixel values and their position, see @ref{Invoking
-astmkcatalog} for a long list.
-
-As a summary, for each labeled region (or, galaxy) we have one @emph{row}
-and for each measured property we have one @emph{column}. This high-level
-structure is usually the first step for higher-level analysis, for example
-finding the stellar mass or photometric redshift from magnitudes in
-multiple colors. Thus, tables are not just outputs of programs, in fact it
-is much more common for tables to be inputs of programs. For example, to
-make a mock galaxy image, you need to feed in the properties of each galaxy
-into @ref{MakeProfiles} for it do the inverse of the process above and make
-a simulated image from a catalog, see @ref{Sufi simulates a detection}. In
-other cases, you can feed a table into @ref{Crop} and it will crop out
-regions centered on the positions within the table, see @ref{Finding
-reddest clumps and visual inspection}. So to end this relatively long
-introduction, tables play a very important role in astronomy, or generally
-all branches of data analysis.
-
-In @ref{Recognized table formats} the currently recognized table formats in
-Gnuastro are discussed. You can use any of these tables as input or ask for
-them to be built as output. The most common type of table format is a
-simple plain text file with each row on one line and columns separated by
-white space characters, this format is easy to read/write by eye/hand. To
-give it the full functionality of more specific table types like the FITS
-tables, Gnuastro has a special convention which you can use to give each
-column a name, type, unit, and comments, while still being readable by
-other plain text table readers. This convention is described in
-@ref{Gnuastro text table format}.
-
-When tables are input to a program, the program reading it needs to know
-which column(s) it should use for its desired purposes. Gnuastro's programs
-all follow a similar convention, on the way you can select columns in a
-table. They are thoroughly discussed in @ref{Selecting table columns}.
+``A table is a collection of related data held in a structured format within a 
database.
+It consists of columns, and rows.'' (from Wikipedia).
+Each column in the table contains the values of one property and each row is a 
collection of properties (columns) for one target object.
+For example, let's assume you have just run MakeCatalog (see @ref{MakeCatalog}) on an image to measure some properties for the labeled regions (which might be detected galaxies, for example) in the image.
+For each labeled region (detected galaxy), there will be a @emph{row} which 
groups its measured properties as @emph{columns}, one column for each property.
+One such property can be the object's magnitude, which is the sum of pixels 
with that label, or its center can be defined as the light-weighted average 
value of those pixels.
+Many such properties can be derived from the raw pixel values and their positions; see @ref{Invoking astmkcatalog} for a long list.
+
+As a summary, for each labeled region (or galaxy) we have one @emph{row} and for each measured property we have one @emph{column}.
+This high-level structure is usually the first step for higher-level analysis, for example finding the stellar mass or photometric redshift from magnitudes in multiple colors.
+Thus, tables are not just outputs of programs; in fact, it is much more common for tables to be inputs of programs.
+For example, to make a mock galaxy image, you need to feed the properties of each galaxy into @ref{MakeProfiles} for it to do the inverse of the process above and make a simulated image from a catalog; see @ref{Sufi simulates a detection}.
+In other cases, you can feed a table into @ref{Crop} and it will crop out regions centered on the positions within the table; see @ref{Finding reddest clumps and visual inspection}.
+So, to end this relatively long introduction: tables play a very important role in astronomy, and generally in all branches of data analysis.
+
+In @ref{Recognized table formats} the currently recognized table formats in 
Gnuastro are discussed.
+You can use any of these formats as input or ask for them to be built as output.
+The most common type of table format is a simple plain text file with each row on one line and columns separated by white space characters; this format is easy to read/write by eye/hand.
+To give it the full functionality of more specific table types like the FITS 
tables, Gnuastro has a special convention which you can use to give each column 
a name, type, unit, and comments, while still being readable by other plain 
text table readers.
+This convention is described in @ref{Gnuastro text table format}.
+
+When tables are input to a program, the program reading them needs to know which column(s) it should use for its desired purposes.
+Gnuastro's programs all follow a similar convention for the way you can select columns in a table.
+It is thoroughly discussed in @ref{Selecting table columns}.
 
 
 @menu
@@ -9459,92 +7054,57 @@ table. They are thoroughly discussed in @ref{Selecting 
table columns}.
 @node Recognized table formats, Gnuastro text table format, Tables, Tables
 @subsection Recognized table formats
 
-The list of table formats that Gnuastro can currently read from and write
-to are described below. Each has their own advantage and disadvantages, so a
-short review of the format is also provided to help you make the best
-choice based on how you want to define your input tables or later use your
-output tables.
+The table formats that Gnuastro can currently read from and write to are described below.
+Each has its own advantages and disadvantages, so a short review of each format is also provided to help you make the best choice based on how you want to define your input tables or later use your output tables.
 
 @table @asis
 
 @item Plain text table
-This is the most basic and simplest way to create, view, or edit the table
-by hand on a text editor. The other formats described below are less
-eye-friendly and have a more formal structure (for easier computer
-readability). It is fully described in @ref{Gnuastro text table format}.
+This is the most basic and simplest way to create, view, or edit a table by hand in a text editor.
+The other formats described below are less eye-friendly and have a more formal 
structure (for easier computer readability).
+It is fully described in @ref{Gnuastro text table format}.
 
 @cindex FITS Tables
 @cindex Tables FITS
 @cindex ASCII table, FITS
 @item FITS ASCII tables
-The FITS ASCII table extension is fully in ASCII encoding and thus easily
-readable on any text editor (assuming it is the only extension in the FITS
-file). If the FITS file also contains binary extensions (for example an
-image or binary table extensions), then there will be many hard to print
-characters. The FITS ASCII format doesn't have new line characters to
-separate rows. In the FITS ASCII table standard, each row is defined as a
-fixed number of characters (value to the @code{NAXIS1} keyword), so to
-visually inspect it properly, you would have to adjust your text editor's
-width to this value. All columns start at given character positions and
-have a fixed width (number of characters).
-
-Numbers in a FITS ASCII table are printed into ASCII format, they are not
-in binary (that the CPU uses). Hence, they can take a larger space in
-memory, loose their precision, and take longer to read into memory. If you
-are dealing with integer type columns (see @ref{Numeric data types}),
-another issue with FITS ASCII tables is that the type information for the
-column will be lost (there is only one integer type in FITS ASCII
-tables). One problem with the binary format on the other hand is that it
-isn't portable (different CPUs/compilers) have different standards for
-translating the zeros and ones. But since ASCII characters are defined on a
-byte and are well recognized, they are better for portability on those
-various systems. Gnuastro's plain text table format described below is much
-more portable and easier to read/write/interpret by humans manually.
-
-Generally, as the name implies, this format is useful for when your table
-mainly contains ASCII columns (for example file names, or
-descriptions). They can be useful when you need to include columns with
-structured ASCII information along with other extensions in one FITS
-file. In such cases, you can also consider header keywords (see
-@ref{Fits}).
+The FITS ASCII table extension is fully in ASCII encoding and thus easily readable in any text editor (assuming it is the only extension in the FITS file).
+If the FITS file also contains binary extensions (for example an image or binary table extensions), then there will be many hard-to-print characters.
+The FITS ASCII format doesn't have new-line characters to separate rows.
+In the FITS ASCII table standard, each row is defined as a fixed number of characters (the value of the @code{NAXIS1} keyword), so to visually inspect it properly, you would have to adjust your text editor's width to this value.
+All columns start at given character positions and have a fixed width (number of characters).
+
+Numbers in a FITS ASCII table are printed in ASCII format; they are not in the binary format that the CPU uses.
+Hence, they can take a larger space in memory, lose their precision, and take longer to read into memory.
+If you are dealing with integer type columns (see @ref{Numeric data types}), another issue with FITS ASCII tables is that the type information for the column will be lost (there is only one integer type in FITS ASCII tables).
+One problem with the binary format, on the other hand, is that it isn't portable: different CPUs/compilers have different standards for translating the zeros and ones.
+But since ASCII characters are defined on a byte and are well recognized, they are better for portability on those various systems.
+Gnuastro's plain text table format described below is much more portable and easier for humans to read/write/interpret manually.
+
+Generally, as the name implies, this format is useful when your table mainly contains ASCII columns (for example file names or descriptions).
+They can be useful when you need to include columns with structured ASCII 
information along with other extensions in one FITS file.
+In such cases, you can also consider header keywords (see @ref{Fits}).
 
 @cindex Binary table, FITS
 @item FITS binary tables
-The FITS binary table is the FITS standard's solution to the issues
-discussed with keeping numbers in ASCII format as described under the FITS
-ASCII table title above. Only columns defined as a string type (a string of
-ASCII characters) are readable in a text editor. The portability problem
-with binary formats discussed above is mostly solved thanks to the
-portability of CFITSIO (see @ref{CFITSIO}) and the very long history of the
-FITS format which has been widely used since the 1970s.
-
-In the case of most numbers, storing them in binary format is more memory
-efficient than ASCII format. For example, to store @code{-25.72034} in
-ASCII format, you need 9 bytes/characters. But if you keep this same number
-(to the approximate precision possible) as a 4-byte (32-bit) floating point
-number, you can keep/transmit it with less than half the amount of
-memory. When catalogs contain thousands/millions of rows in tens/hundreds
-of columns, this can lead to significant improvements in memory/band-width
-usage. Moreover, since the CPU does its operations in the binary formats,
-reading the table in and writing it out is also much faster than an ASCII
-table.
+The FITS binary table is the FITS standard's solution to the issues discussed 
with keeping numbers in ASCII format as described under the FITS ASCII table 
title above.
+Only columns defined as a string type (a string of ASCII characters) are 
readable in a text editor.
+The portability problem with binary formats discussed above is mostly solved 
thanks to the portability of CFITSIO (see @ref{CFITSIO}) and the very long 
history of the FITS format which has been widely used since the 1970s.
+
+In the case of most numbers, storing them in binary format is more memory 
efficient than ASCII format.
+For example, to store @code{-25.72034} in ASCII format, you need 9 
bytes/characters.
+But if you keep this same number (to the approximate precision possible) as a 
4-byte (32-bit) floating point number, you can keep/transmit it with less than 
half the amount of memory.
+When catalogs contain thousands/millions of rows in tens/hundreds of columns, this can lead to significant improvements in memory/bandwidth usage.
+Moreover, since the CPU does its operations in the binary formats, reading the 
table in and writing it out is also much faster than an ASCII table.
+
+When you are dealing with integer numbers, the reduction in size can be even better.
+For example, if you know all of the values in a column are positive and less than @code{255}, you can use the @code{unsigned char} type, which only takes one byte!
+If they are between @code{-128} and @code{127}, then you can use the (signed) @code{char} type.
+So if you are thoughtful about the limits of your integer columns, you can greatly reduce the size of your file and also improve the speed at which it is read/written.
+This can be very useful when sharing your results with collaborators or 
publishing them.
+To decrease the file size even more, you can give your output a name ending in @file{.fits.gz} so it is also compressed after creation.
+Just note that compression/decompression is CPU intensive and can slow down the writing/reading of the file.
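+
+As a hypothetical sketch (assuming an existing catalog named @file{cat.fits}), the Table program call below would write such a compressed copy of a table; the @file{.fits.gz} suffix alone triggers the compression:
+
+@example
+$ asttable cat.fits --output=cat.fits.gz
+@end example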
 
-When you are dealing with integer numbers, the compression ratio can be
-even better, for example if you know all of the values in a column are
-positive and less than @code{255}, you can use the @code{unsigned char}
-type which only takes one byte! If they are between @code{-128} and
-@code{127}, then you can use the (signed) @code{char} type. So if you are
-thoughtful about the limits of your integer columns, you can greatly reduce
-the size of your file and also the speed at which it is read/written. This
-can be very useful when sharing your results with collaborators or
-publishing them. To decrease the file size even more you can name your
-output as ending in @file{.fits.gz} so it is also compressed after
-creation. Just note that compression/decompressing is CPU intensive and can
-slow down the writing/reading of the file.
-
-Fortunately the FITS Binary table format also accepts ASCII strings as
-column types (along with the various numerical types). So your dataset can
-also contain non-numerical columns.
+Fortunately, the FITS binary table format also accepts ASCII strings as column types (along with the various numerical types).
+So your dataset can also contain non-numerical columns.
 
 @end table
 
@@ -9555,70 +7115,45 @@ also contain non-numerical columns.
 @node Gnuastro text table format, Selecting table columns, Recognized table 
formats, Tables
 @subsection Gnuastro text table format
 
-Plain text files are the most generic, portable, and easiest way to
-(manually) create, (visually) inspect, or (manually) edit a table. In this
-format, the ending of a row is defined by the new-line character (a line on
-a text editor). So when you view it on a text editor, every row will occupy
-one line. The delimiters (or characters separating the columns) are white
-space characters (space, horizontal tab, vertical tab) and a comma
-(@key{,}). The only further requirement is that all rows/lines must have
-the same number of columns.
-
-The columns don't have to be exactly under each other and the rows can be
-arbitrarily long with different lengths. For example the following contents
-in a file would be interpreted as a table with 4 columns and 2 rows, with
-each element interpreted as a @code{double} type (see @ref{Numeric data
-types}).
+Plain text files are the most generic, portable, and easiest way to (manually) 
create, (visually) inspect, or (manually) edit a table.
+In this format, the ending of a row is defined by the new-line character (a 
line on a text editor).
+So when you view it on a text editor, every row will occupy one line.
+The delimiters (or characters separating the columns) are white space 
characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
+The only further requirement is that all rows/lines must have the same number 
of columns.
+
+The columns don't have to be exactly under each other and the rows can be 
arbitrarily long with different lengths.
+For example the following contents in a file would be interpreted as a table 
with 4 columns and 2 rows, with each element interpreted as a @code{double} 
type (see @ref{Numeric data types}).
 
 @example
 1     2.234948   128   39.8923e8
 2 , 4.454        792     72.98348e7
 @end example
 
-However, the example above has no other information about the columns (it
-is just raw data, with no meta-data). To use this table, you have to
-remember what the numbers in each column represent. Also, when you want to
-select columns, you have to count their position within the table. This can
-become frustrating and prone to bad errors (getting the columns wrong)
-especially as the number of columns increase. It is also bad for sending to
-a colleague, because they will find it hard to remember/use the columns
-properly.
-
-To solve these problems in Gnuastro's programs/libraries you aren't limited
-to using the column's number, see @ref{Selecting table columns}. If the
-columns have names, units, or comments you can also select your columns
-based on searches/matches in these fields, for example see @ref{Table}.
-Also, in this manner, you can't guide the program reading the table on how
-to read the numbers. As an example, the first and third columns above can
-be read as integer types: the first column might be an ID and the third can
-be the number of pixels an object occupies in an image. So there is no need
-to read these to columns as a @code{double} type (which takes more memory,
-and is slower).
-
-In the bare-minimum example above, you also can't use strings of
-characters, for example the names of filters, or some other identifier that
-includes non-numerical characters. In the absence of any information, only
-numbers can be read robustly. Assuming we read columns with non-numerical
-characters as string, there would still be the problem that the strings
-might contain space (or any delimiter) character for some rows. So, each
-`word' in the string will be interpreted as a column and the program will
-abort with an error that the rows don't have the same number of columns.
-
-To correct for these limitations, Gnuastro defines the following convention
-for storing the table meta-data along with the raw data in one plain text
-file. The format is primarily designed for ease of reading/writing by
-eye/fingers, but is also structured enough to be read by a program.
-
-When the first non-white character in a line is @key{#}, or there are no
-non-white characters in it, then the line will not be considered as a row
-of data in the table (this is a pretty standard convention in many
-programs, and higher level languages). In the former case, the line is
-interpreted as a @emph{comment}. If the comment line starts with `@code{#
-Column N:}', then it is assumed to contain information about column
-@code{N} (a number, counting from 1). Comment lines that don't start with
-this pattern are ignored and you can use them to include any further
-information you want to store with the table in the text file. A column
-information comment is assumed to have the following format:
+However, the example above has no other information about the columns (it is 
just raw data, with no meta-data).
+To use this table, you have to remember what the numbers in each column 
represent.
+Also, when you want to select columns, you have to count their position within 
the table.
+This can become frustrating and error-prone (you may select the wrong columns), especially as the number of columns increases.
+It is also bad for sending to a colleague, because they will find it hard to 
remember/use the columns properly.
+
+To solve these problems, in Gnuastro's programs/libraries you aren't limited to using the column's number, see @ref{Selecting table columns}.
+If the columns have names, units, or comments you can also select your columns 
based on searches/matches in these fields, for example see @ref{Table}.
+Also, without meta-data, you can't guide the program that is reading the table on how to interpret the numbers.
+As an example, the first and third columns above can be read as integer types: 
the first column might be an ID and the third can be the number of pixels an 
object occupies in an image.
+So there is no need to read these two columns as a @code{double} type (which takes more memory, and is slower).
+
+In the bare-minimum example above, you also can't use strings of characters, 
for example the names of filters, or some other identifier that includes 
non-numerical characters.
+In the absence of any information, only numbers can be read robustly.
+Assuming we read columns with non-numerical characters as strings, there would still be the problem that the strings might contain a space (or any other delimiter) character for some rows.
+So, each `word' in the string will be interpreted as a column and the program 
will abort with an error that the rows don't have the same number of columns.
+
+To correct for these limitations, Gnuastro defines the following convention 
for storing the table meta-data along with the raw data in one plain text file.
+The format is primarily designed for ease of reading/writing by eye/fingers, 
but is also structured enough to be read by a program.
+
+When the first non-white character in a line is @key{#}, or there are no 
non-white characters in it, then the line will not be considered as a row of 
data in the table (this is a pretty standard convention in many programs, and 
higher level languages).
+In the former case, the line is interpreted as a @emph{comment}.
+If the comment line starts with `@code{# Column N:}', then it is assumed to 
contain information about column @code{N} (a number, counting from 1).
+Comment lines that don't start with this pattern are ignored and you can use 
them to include any further information you want to store with the table in the 
text file.
+A column information comment is assumed to have the following format:
 
 @example
 # Column N: NAME [UNIT, TYPE, BLANK] COMMENT
@@ -9626,50 +7161,32 @@ information comment is assumed to have the following 
format:
 
 @cindex NaN
 @noindent
-Any sequence of characters between `@key{:}' and `@key{[}' will be
-interpreted as the column name (so it can contain anything except the
-`@key{[}' character). Anything between the `@key{]}' and the end of the
-line is defined as a comment. Within the brackets, anything before the
-first `@key{,}' is the units (physical units, for example km/s, or erg/s),
-anything before the second `@key{,}' is the short type identifier (see
-below, and @ref{Numeric data types}). Finally (still within the brackets),
-any non-white characters after the second `@key{,}' are interpreted as the
-blank value for that column (see @ref{Blank pixels}). Note that blank
-values will be stored in the same type as the column, not as a
-string@footnote{For floating point types, the @code{nan}, or @code{inf}
-strings (both not case-sensitive) refer to IEEE NaN (not a number) and
-infinity values respectively and will be stored as a floating point, so
-they are acceptable.}.
-
-When a formatting problem occurs (for example you have specified the wrong
-type code, see below), or the column was already given meta-data in a
-previous comment, or the column number is larger than the actual number of
-columns in the table (the non-commented or empty lines), then the comment
-information line will be ignored.
-
-When a comment information line can be used, the leading and trailing white
-space characters will be stripped from all of the elements. For example in
-this line:
+Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted 
as the column name (so it can contain anything except the `@key{[}' character).
+Anything between the `@key{]}' and the end of the line is defined as a comment.
+Within the brackets, anything before the first `@key{,}' is the units 
(physical units, for example km/s, or erg/s), anything before the second 
`@key{,}' is the short type identifier (see below, and @ref{Numeric data 
types}).
+Finally (still within the brackets), any non-white characters after the second 
`@key{,}' are interpreted as the blank value for that column (see @ref{Blank 
pixels}).
+Note that blank values will be stored in the same type as the column, not as a 
string@footnote{For floating point types, the @code{nan}, or @code{inf} strings 
(both not case-sensitive) refer to IEEE NaN (not a number) and infinity values 
respectively and will be stored as a floating point, so they are acceptable.}.
+
+When a formatting problem occurs (for example you have specified the wrong type code, see below), or the column was already given meta-data in a previous comment, or the column number is larger than the actual number of columns in the table (the non-commented and non-empty lines), then the comment information line will be ignored.
+
+When a comment information line can be used, the leading and trailing white 
space characters will be stripped from all of the elements.
+For example in this line:
 
 @example
 # Column 5:  column name   [km/s,    f32,-99] Redshift as speed
 @end example
 
-The @code{NAME} field will be `@code{column name}' and the @code{TYPE}
-field will be `@code{f32}'. Note how all the white space characters before
-and after strings are not used, but those in the middle remained. Also,
-white space characters aren't mandatory. Hence, in the example above, the
-@code{BLANK} field will be given the value of `@code{-99}'.
+The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field 
will be `@code{f32}'.
+Note how all the white space characters before and after strings are ignored, but those in the middle remain.
+Also, white space characters (for example after the commas) aren't mandatory.
+Hence, in the example above, the @code{BLANK} field will be given the value of 
`@code{-99}'.
 
-Except for the column number (@code{N}), the rest of the fields are
-optional. Also, the column information comments don't have to be in
-order. In other words, the information for column @mymath{N+m}
-(@mymath{m>0}) can be given in a line before column @mymath{N}. Also, you
-don't have to specify information for all columns. Those columns that don't
-have this information will be interpreted with the default settings (like
-the case above: values are double precision floating point, and the column
-has no name, unit, or comment). So these lines are all acceptable for any
-table (the first one, with nothing but the column number is redundant):
+Except for the column number (@code{N}), the rest of the fields are optional.
+Also, the column information comments don't have to be in order.
+In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be 
given in a line before column @mymath{N}.
+Also, you don't have to specify information for all columns.
+Those columns that don't have this information will be interpreted with the 
default settings (like the case above: values are double precision floating 
point, and the column has no name, unit, or comment).
+So these lines are all acceptable for any table (the first one, with nothing 
but the column number is redundant):
 
 @example
 # Column 5:
@@ -9678,39 +7195,27 @@ table (the first one, with nothing but the column 
number is redundant):
 @end example
 
 @noindent
-The data type of the column should be specified with one of the following
-values:
+The data type of the column should be specified with one of the following 
values:
 
 @itemize
 @item
 For a numeric column, you can use any of the numeric types (and their
 recognized identifiers) described in @ref{Numeric data types}.
 @item
-`@code{strN}': for strings. The @code{N} value identifies the length of the
-string (how many characters it has). The start of the string on each row is
-the first non-delimiter character of the column that has the string
-type. The next @code{N} characters will be interpreted as a string and all
-leading and trailing white space will be removed.
-
-If the next column's characters, are closer than @code{N} characters to the
-start of the string column in that line/row, they will be considered part
-of the string column. If there is a new-line character before the ending of
-the space given to the string column (in other words, the string column is
-the last column), then reading of the string will stop, even if the
-@code{N} characters are not complete yet. See @file{tests/table/table.txt}
-for one example. Therefore, the only time you have to pay attention to the
-positioning and spaces given to the string column is when it is not the
-last column in the table.
-
-The only limitation in this format is that trailing and leading white space
-characters will be removed from the columns that are read. In most cases,
-this is the desired behavior, but if trailing and leading white-spaces are
-critically important to your analysis, define your own starting and ending
-characters and remove them after the table has been read. For example in
-the sample table below, the two `@key{|}' characters (which are arbitrary)
-will remain in the value of the second column and you can remove them
-manually later. If only one of the leading or trailing white spaces is
-important for your work, you can only use one of the `@key{|}'s.
+`@code{strN}': for strings.
+The @code{N} value identifies the length of the string (how many characters it 
has).
+The start of the string on each row is the first non-delimiter character of 
the column that has the string type.
+The next @code{N} characters will be interpreted as a string and all leading 
and trailing white space will be removed.
+
+If the next column's characters are closer than @code{N} characters to the start of the string column in that line/row, they will be considered part of the string column.
+If there is a new-line character before the ending of the space given to the 
string column (in other words, the string column is the last column), then 
reading of the string will stop, even if the @code{N} characters are not 
complete yet.
+See @file{tests/table/table.txt} for one example.
+Therefore, the only time you have to pay attention to the positioning and 
spaces given to the string column is when it is not the last column in the 
table.
+
+The only limitation in this format is that trailing and leading white space 
characters will be removed from the columns that are read.
+In most cases, this is the desired behavior, but if trailing and leading 
white-spaces are critically important to your analysis, define your own 
starting and ending characters and remove them after the table has been read.
+For example in the sample table below, the two `@key{|}' characters (which are 
arbitrary) will remain in the value of the second column and you can remove 
them manually later.
+If only one of the leading or trailing white spaces is important for your work, you can use only one of the `@key{|}'s.
 
 @example
 # Column 1: ID [label, u8]
@@ -9721,88 +7226,54 @@ important for your work, you can only use one of the 
`@key{|}'s.
 
 @end itemize
 
-Note that the FITS binary table standard does not define the @code{unsigned
-int} and @code{unsigned long} types, so if you want to convert your tables
-to FITS binary tables, use other types. Also, note that in the FITS ASCII
-table, there is only one integer type (@code{long}). So if you convert a
-Gnuastro plain text table to a FITS ASCII table with the @ref{Table}
-program, the type information for integers will be lost. Conversely if
-integer types are important for you, you have to manually set them when
-reading a FITS ASCII table (for example with the Table program when
-reading/converting into a file, or with the @file{gnuastro/table.h} library
-functions when reading into memory).
+Note that the FITS binary table standard does not define the @code{unsigned 
int} and @code{unsigned long} types, so if you want to convert your tables to 
FITS binary tables, use other types.
+Also, note that in the FITS ASCII table, there is only one integer type 
(@code{long}).
+So if you convert a Gnuastro plain text table to a FITS ASCII table with the 
@ref{Table} program, the type information for integers will be lost.
+Conversely if integer types are important for you, you have to manually set 
them when reading a FITS ASCII table (for example with the Table program when 
reading/converting into a file, or with the @file{gnuastro/table.h} library 
functions when reading into memory).
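+
+As a hypothetical sketch (assuming the example columns above are saved in @file{table.txt}), the commands below would convert it to a FITS table and then let you inspect the resulting column types:
+
+@example
+$ asttable table.txt --output=table.fits
+$ asttable --information table.fits
+@end example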
 
 
 @node Selecting table columns,  , Gnuastro text table format, Tables
 @subsection Selecting table columns
 
-At the lowest level, the only defining aspect of a column in a table is its
-number, or position. But selecting columns purely by number is not very
-convenient and, especially when the tables are large it can be very
-frustrating and prone to errors. Hence, table file formats (for example see
-@ref{Recognized table formats}) have ways to store additional information
-about the columns (meta-data). Some of the most common pieces of
-information about each column are its @emph{name}, the @emph{units} of data
-in the it, and a @emph{comment} for longer/informal description of the
-column's data.
+At the lowest level, the only defining aspect of a column in a table is its 
number, or position.
+But selecting columns purely by number is not very convenient and, especially when the tables are large, it can be very frustrating and prone to errors.
+Hence, table file formats (for example see @ref{Recognized table formats}) 
have ways to store additional information about the columns (meta-data).
+Some of the most common pieces of information about each column are its @emph{name}, the @emph{units} of the data in it, and a @emph{comment} for a longer/informal description of the column's data.
 
-To facilitate research with Gnuastro, you can select columns by matching,
-or searching in these three fields, besides the low-level column number. To
-view the full list of information on the columns in the table, you can use
-the Table program (see @ref{Table}) with the command below (replace
-@file{table-file} with the filename of your table, if its FITS, you might
-also need to specify the HDU/extension which contains the table):
+To facilitate research with Gnuastro, you can select columns by matching, or 
searching in these three fields, besides the low-level column number.
+To view the full list of information on the columns in the table, you can use the Table program (see @ref{Table}) with the command below (replace @file{table-file} with the filename of your table; if it is FITS, you might also need to specify the HDU/extension which contains the table):
 
 @example
 $ asttable --information table-file
 @end example
 
-Gnuastro's programs need the columns for different purposes, for example in
-Crop, you specify the columns containing the central coordinates of the
-crop centers with the @option{--coordcol} option (see @ref{Crop
-options}). On the other hand, in MakeProfiles, to specify the column
-containing the profile position angles, you must use the @option{--pcol}
-option (see @ref{MakeProfiles catalog}). Thus, there can be no unified
-common option name to select columns for all programs (different columns
-have different purposes). However, when the program expects a column for a
-specific context, the option names end in the @option{col} suffix like the
-examples above. These options accept values in integer (column number), or
-string (metadata match/search) format.
-
-If the value can be parsed as a positive integer, it will be seen as the
-low-level column number. Note that column counting starts from 1, so if you
-ask for column 0, the respective program will abort with an error. When the
-value can't be interpreted as an a integer number, it will be seen as a
-string of characters which will be used to match/search in the table's
-meta-data. The meta-data field which the value will be compared with can be
-selected through the @option{--searchin} option, see @ref{Input output
-options}. @option{--searchin} can take three values: @code{name},
-@code{unit}, @code{comment}. The matching will be done following this
-convention:
+Gnuastro's programs need the columns for different purposes, for example in 
Crop, you specify the columns containing the central coordinates of the crop 
centers with the @option{--coordcol} option (see @ref{Crop options}).
+On the other hand, in MakeProfiles, to specify the column containing the 
profile position angles, you must use the @option{--pcol} option (see 
@ref{MakeProfiles catalog}).
+Thus, there can be no unified common option name to select columns for all 
programs (different columns have different purposes).
+However, when the program expects a column for a specific context, the option names end in the @option{col} suffix, as in the examples above.
+These options accept values in integer (column number), or string (metadata 
match/search) format.
+
+If the value can be parsed as a positive integer, it will be seen as the 
low-level column number.
+Note that column counting starts from 1, so if you ask for column 0, the 
respective program will abort with an error.
+When the value can't be interpreted as an integer number, it will be seen as a string of characters which will be used to match/search in the table's meta-data.
+The meta-data field which the value will be compared with can be selected 
through the @option{--searchin} option, see @ref{Input output options}.
+@option{--searchin} can take three values: @code{name}, @code{unit}, 
@code{comment}.
+The matching will be done following this convention:
 
 @itemize
 @item
-If the value is enclosed in two slashes (for example @command{-x/RA_/}, or
-@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to
-be a regular expression with the same convention as GNU AWK. GNU AWK has a
-very well written
-@url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html,
-chapter} describing regular expressions, so we we will not continue
-discussing them here. Regular expressions are a very powerful tool in
-matching text and useful in many contexts. We thus strongly encourage
-reviewing this chapter for greatly improving the quality of your work in
-many cases, not just for searching column meta-data in Gnuastro.
+If the value is enclosed in two slashes (for example @command{-x/RA_/}, or 
@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a 
regular expression with the same convention as GNU AWK.
+GNU AWK has a very well written @url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter} describing regular expressions, so we will not continue discussing them here.
+Regular expressions are a very powerful tool in matching text and useful in 
many contexts.
+We thus strongly encourage reviewing that chapter: it can greatly improve the quality of your work in many cases, not just when searching column meta-data in Gnuastro.
 
 @item
-When the string isn't enclosed between `@key{/}'s, any column that exactly
-matches the given value in the given field will be selected.
+When the string isn't enclosed between `@key{/}'s, any column that exactly 
matches the given value in the given field will be selected.
 @end itemize
 
-Note that in both cases, you can ignore the case of alphabetic characters
-with the @option{--ignorecase} option, see @ref{Input output options}. Also, in
-both cases, multiple columns may be selected with one call to this
-function. In this case, the order of the selected columns (with one call)
-will be the same order as they appear in the table.
+Note that in both cases, you can ignore the case of alphabetic characters with 
the @option{--ignorecase} option, see @ref{Input output options}.
+Also, in both cases, multiple columns may be selected with one call to such an option.
+In this case, the order of the selected columns (with one call) will be the 
same order as they appear in the table.
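+
+For example, with a hypothetical @file{table.fits}, the first command below would select every column whose name contains @code{RA_} (a regular expression), while the second would select the column exactly named @code{ra}, ignoring case:
+
+@example
+$ asttable table.fits --searchin=name --column=/RA_/
+$ asttable table.fits --searchin=name --column=ra --ignorecase
+@end example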
 
 
 
@@ -9811,53 +7282,30 @@ will be the same order as they appear in the table.
 @node Tessellation, Automatic output, Tables, Common program behavior
 @section Tessellation
 
-It is sometimes necessary to classify the elements in a dataset (for
-example pixels in an image) into a grid of individual, non-overlapping
-tiles. For example when background sky gradients are present in an image,
-you can define a tile grid over the image. When the tile sizes are set
-properly, the background's variation over each tile will be negligible,
-allowing you to measure (and subtract) it. In other cases (for example
-spatial domain convolution in Gnuastro, see @ref{Convolve}), it might
-simply be for speed of processing: each tile can be processed independently
-on a separate CPU thread. In the arts and mathematics, this process is
-formally known as @url{https://en.wikipedia.org/wiki/Tessellation,
-tessellation}.
-
-The size of the regular tiles (in units of data-elements, or pixels in an
-image) can be defined with the @option{--tilesize} option. It takes
-multiple numbers (separated by a comma) which will be the length along the
-respective dimension (in FORTRAN/FITS dimension order). Divisions are also
-acceptable, but must result in an integer. For example
-@option{--tilesize=30,40} can be used for an image (a 2D dataset). The
-regular tile size along the first FITS axis (horizontal when viewed in SAO
-ds9) will be 30 pixels and along the second it will be 40 pixels. Ideally,
-@option{--tilesize} should be selected such that all tiles in the image
-have exactly the same size. In other words, that the dataset length in each
-dimension is divisible by the tile size in that dimension.
-
-However, this is not always possible: the dataset can be any size and every
-pixel in it is valuable. In such cases, Gnuastro will look at the
-significance of the remainder length, if it is not significant (for example
-one or two pixels), then it will just increase the size of the first tile
-in the respective dimension and allow the rest of the tiles to have the
-required size. When the remainder is significant (for example one pixel
-less than the size along that dimension), the remainder will be added to
-one regular tile's size and the large tile will be cut in half and put in
-the two ends of the grid/tessellation. In this way, all the tiles in the
-central regions of the dataset will have the regular tile sizes and the
-tiles on the edge will be slightly larger/smaller depending on the
-remainder significance. The fraction which defines the remainder
-significance along all dimensions can be set through
-@option{--remainderfrac}.
-
-The best tile size is directly related to the spatial properties of the
-property you want to study (for example, gradient on the image). In
-practice we assume that the gradient is not present over each tile. So if
-there is a strong gradient (for example in long wavelength ground based
-images) or the image is of a crowded area where there isn't too much blank
-area, you have to choose a smaller tile size. A larger mesh will give more
-pixels and and so the scatter in the results will be less (better
-statistics).
+It is sometimes necessary to classify the elements in a dataset (for example 
pixels in an image) into a grid of individual, non-overlapping tiles.
+For example when background sky gradients are present in an image, you can 
define a tile grid over the image.
+When the tile sizes are set properly, the background's variation over each 
tile will be negligible, allowing you to measure (and subtract) it.
+In other cases (for example spatial domain convolution in Gnuastro, see 
@ref{Convolve}), it might simply be for speed of processing: each tile can be 
processed independently on a separate CPU thread.
+In the arts and mathematics, this process is formally known as 
@url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.
+
+The size of the regular tiles (in units of data-elements, or pixels in an 
image) can be defined with the @option{--tilesize} option.
+It takes multiple numbers (separated by a comma) which will be the length 
along the respective dimension (in FORTRAN/FITS dimension order).
+Divisions are also acceptable, but must result in an integer.
+For example @option{--tilesize=30,40} can be used for an image (a 2D dataset).
+The regular tile size along the first FITS axis (horizontal when viewed in SAO 
ds9) will be 30 pixels and along the second it will be 40 pixels.
+Ideally, @option{--tilesize} should be selected such that all tiles in the 
image have exactly the same size.
+In other words, the dataset length in each dimension should be divisible by the tile size in that dimension.
+
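+For example, the hypothetical NoiseChisel call below would use tiles that are 30 pixels along the first FITS axis and 40 pixels along the second:
+
+@example
+$ astnoisechisel image.fits --tilesize=30,40
+@end example
+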
+However, this is not always possible: the dataset can be any size and every 
pixel in it is valuable.
+In such cases, Gnuastro will look at the significance of the remainder length: if it is not significant (for example one or two pixels), then it will just increase the size of the first tile in the respective dimension and allow the rest of the tiles to have the required size.
+When the remainder is significant (for example one pixel less than the size 
along that dimension), the remainder will be added to one regular tile's size 
and the large tile will be cut in half and put in the two ends of the 
grid/tessellation.
+In this way, all the tiles in the central regions of the dataset will have the 
regular tile sizes and the tiles on the edge will be slightly larger/smaller 
depending on the remainder significance.
+The fraction which defines the remainder significance along all dimensions can 
be set through @option{--remainderfrac}.
+
+The best tile size is directly related to the spatial properties of the signal you want to study (for example, the gradient over the image).
+So if there is a strong gradient (for example in long wavelength ground based 
images) or the image is of a crowded area where there isn't too much blank 
area, you have to choose a smaller tile size.
+A larger mesh will give more pixels and so the scatter in the results will be less (better statistics).
 
 @cindex CCD
 @cindex Amplifier
@@ -9865,47 +7313,31 @@ statistics).
 @cindex Subaru Telescope
 @cindex Hyper Suprime-Cam
 @cindex Hubble Space Telescope (HST)
-For raw image processing, a single tessellation/grid is not sufficient. Raw
-images are the unprocessed outputs of the camera detectors. Modern
-detectors usually have multiple readout channels each with its own
-amplifier. For example the Hubble Space Telescope Advanced Camera for
-Surveys (ACS) has four amplifiers over its full detector area dividing the
-square field of view to four smaller squares. Ground based image detectors
-are not exempt, for example each CCD of Subaru Telescope's Hyper
-Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have
-the same height of the CCD and divide the width by four parts.
+For raw image processing, a single tessellation/grid is not sufficient.
+Raw images are the unprocessed outputs of the camera detectors.
+Modern detectors usually have multiple readout channels each with its own 
amplifier.
+For example, the Hubble Space Telescope Advanced Camera for Surveys (ACS) has four amplifiers over its full detector area, dividing the square field of view into four smaller squares.
+Ground based image detectors are not exempt; for example, each CCD of Subaru Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have the same height as the CCD and divide its width into four parts.
 
 @cindex Channel
-The bias current on each amplifier is different, and initial bias
-subtraction is not perfect. So even after subtracting the measured bias
-current, you can usually still identify the boundaries of different
-amplifiers by eye. See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an
-example. This results in the final reduced data to have non-uniform
-amplifier-shaped regions with higher or lower background flux values. Such
-systematic biases will then propagate to all subsequent measurements we do
-on the data (for example photometry and subsequent stellar mass and star
-formation rate measurements in the case of galaxies).
-
-Therefore an accurate analysis requires a two layer tessellation: the top
-layer contains larger tiles, each covering one amplifier channel. For
-clarity we'll call these larger tiles ``channels''. The number of channels
-along each dimension is defined through the @option{--numchannels}. Each
-channel is then covered by its own individual smaller tessellation (with
-tile sizes determined by the @option{--tilesize} option). This will allow
-independent analysis of two adjacent pixels from different channels if
-necessary. If the image is processed or the detector only has one
-amplifier, you can set the number of channels in both dimension to 1.
-
-The final tessellation can be inspected on the image with the
-@option{--checktiles} option that is available to all programs which use
-tessellation for localized operations. When this option is called, a FITS
-file with a @file{_tiled.fits} suffix will be created along with the
-outputs, see @ref{Automatic output}. Each pixel in this image has the
-number of the tile that covers it. If the number of channels in any
-dimension are larger than unity, you will notice that the tile IDs are
-defined such that the first channels is covered first, then the second and
-so on. For the full list of processing-related common options (including
-tessellation options), please see @ref{Processing options}.
+The bias current on each amplifier is different, and initial bias subtraction 
is not perfect.
+So even after subtracting the measured bias current, you can usually still 
identify the boundaries of different amplifiers by eye.
+See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
+This results in the final reduced data having non-uniform amplifier-shaped regions with higher or lower background flux values.
+Such systematic biases will then propagate to all subsequent measurements we 
do on the data (for example photometry and subsequent stellar mass and star 
formation rate measurements in the case of galaxies).
+
+Therefore an accurate analysis requires a two-layer tessellation: the top layer contains larger tiles, each covering one amplifier channel.
+For clarity we'll call these larger tiles ``channels''.
+The number of channels along each dimension is defined through the @option{--numchannels} option.
+Each channel is then covered by its own individual smaller tessellation (with 
tile sizes determined by the @option{--tilesize} option).
+This will allow independent analysis of two adjacent pixels from different 
channels if necessary.
+If the image is processed or the detector only has one amplifier, you can set the number of channels in both dimensions to 1.
+
+The final tessellation can be inspected on the image with the 
@option{--checktiles} option that is available to all programs which use 
tessellation for localized operations.
+When this option is called, a FITS file with a @file{_tiled.fits} suffix will 
be created along with the outputs, see @ref{Automatic output}.
+Each pixel in this image has the number of the tile that covers it.
+If the number of channels in any dimension is larger than unity, you will notice that the tile IDs are defined such that the first channel is covered first, then the second, and so on.
options), please see @ref{Processing options}.
 
 
 
@@ -9918,38 +7350,21 @@ tessellation options), please see @ref{Processing 
options}.
 @cindex Automatic output file names
 @cindex Output file names, automatic
 @cindex Setting output file names automatically
-All the programs in Gnuastro are designed such that specifying an output
-file or directory (based on the program context) is optional. When no
-output name is explicitly given (with @option{--output}, see @ref{Input
-output options}), the programs will automatically set an output name based
-on the input name(s) and what the program does. For example when you are
-using ConvertType to save FITS image named @file{dataset.fits} to a JPEG
-image and don't specify a name for it, the JPEG output file will be name
-@file{dataset.jpg}. When the input is from the standard input (for example
-a pipe, see @ref{Standard input}), and @option{--output} isn't given, the
-output name will be the program's name (for example
-@file{converttype.jpg}).
+All the programs in Gnuastro are designed such that specifying an output file 
or directory (based on the program context) is optional.
+When no output name is explicitly given (with @option{--output}, see 
@ref{Input output options}), the programs will automatically set an output name 
based on the input name(s) and what the program does.
+For example, when you are using ConvertType to save a FITS image named @file{dataset.fits} to a JPEG image and don't specify a name for it, the JPEG output file will be named @file{dataset.jpg}.
+When the input is from the standard input (for example a pipe, see 
@ref{Standard input}), and @option{--output} isn't given, the output name will 
be the program's name (for example @file{converttype.jpg}).
 
 @vindex --keepinputdir
-Another very important part of the automatic output generation is that all
-the directory information of the input file name is stripped off of
-it. This feature can be disabled with the @option{--keepinputdir} option,
-see @ref{Input output options}. It is the default because astronomical data
-are usually very large and organized specially with special file names. In
-some cases, the user might not have write permissions in those
-directories@footnote{In fact, even if the data is stored on your own
-computer, it is advised to only grant write permissions to the super user
-or root. This way, you won't accidentally delete or modify your valuable
-data!}.
-
-Let's assume that we are working on a report and want to process the
-FITS images from two projects (ABC and DEF), which are stored in the
-sub-directories named @file{ABCproject/} and @file{DEFproject/} of our
-top data directory (@file{/mnt/data}). The following shell commands
-show how one image from the former is first converted to a JPEG image
-through ConvertType and then the objects from an image in the latter
-project are detected using NoiseChisel. The text after the @command{#}
-sign are comments (not typed!).
+Another very important part of the automatic output generation is that all the directory information of the input file name is stripped from it.
+This feature can be disabled with the @option{--keepinputdir} option, see 
@ref{Input output options}.
+It is the default because astronomical data are usually very large and organized in dedicated directories with special file names.
+In some cases, the user might not have write permissions in those 
directories@footnote{In fact, even if the data is stored on your own computer, 
it is advised to only grant write permissions to the super user or root.
+This way, you won't accidentally delete or modify your valuable data!}.
+
+Let's assume that we are working on a report and want to process the FITS 
images from two projects (ABC and DEF), which are stored in the sub-directories 
named @file{ABCproject/} and @file{DEFproject/} of our top data directory 
(@file{/mnt/data}).
+The following shell commands show how one image from the former is first 
converted to a JPEG image through ConvertType and then the objects from an 
image in the latter project are detected using NoiseChisel.
+The text after the @command{#} sign is a comment (not typed!).
 
 @example
 $ pwd                                               # Current location
@@ -9978,119 +7393,81 @@ ABC01.jpg ABC02.jpg DEF01_detected.fits
 @cindex FITS
 @cindex Output FITS headers
 @cindex CFITSIO version on outputs
-The output of many of Gnuastro's programs are (or can be) FITS files. The
-FITS format has many useful features for storing scientific datasets
-(cubes, images and tables) along with a robust features for
-archivability. For more on this standard, please see @ref{Fits}.
-
-As a community convention described in @ref{Fits}, the first extension of
-all FITS files produced by Gnuastro's programs only contains the meta-data
-that is intended for the file's extension(s). For a Gnuastro program, this
-generic meta-data (that is stored as FITS keyword records) is its
-configuration when it produced this dataset: file name(s) of input(s) and
-option names, values and comments. Note that when the configuration is too
-trivial (only input filename, for example the program @ref{Table}) no
-meta-data is written in this extension.
-
-FITS keywords have the following limitations in regards to generic option
-names and values which are described below:
+The output of many of Gnuastro's programs are (or can be) FITS files.
+The FITS format has many useful features for storing scientific datasets (cubes, images and tables) along with robust features for archivability.
+For more on this standard, please see @ref{Fits}.
+
+As a community convention described in @ref{Fits}, the first extension of all 
FITS files produced by Gnuastro's programs only contains the meta-data that is 
intended for the file's extension(s).
+For a Gnuastro program, this generic meta-data (that is stored as FITS keyword 
records) is its configuration when it produced this dataset: file name(s) of 
input(s) and option names, values and comments.
+Note that when the configuration is too trivial (only the input filename, for example in the program @ref{Table}), no meta-data is written in this extension.
+
+FITS keywords have the following limitations with regard to generic option names and values:
 
 @itemize
 @item
-If a keyword (option name) is longer than 8 characters, the first word in
-the record (80 character line) is @code{HIERARCH} which is followed by the
-keyword name.
+If a keyword (option name) is longer than 8 characters, the first word in the 
record (80 character line) is @code{HIERARCH} which is followed by the keyword 
name.
 
 @item
-Values can be at most 75 characters, but for strings, this changes to 73
-(because of the two extra @key{'} characters that are necessary). However,
-if the value is a file name, containing slash (@key{/}) characters to
-separate directories, Gnuastro will break the value into multiple keywords.
+Values can be at most 75 characters, but for strings, this changes to 73 
(because of the two extra @key{'} characters that are necessary).
+However, if the value is a file name, containing slash (@key{/}) characters to 
separate directories, Gnuastro will break the value into multiple keywords.
 
 @item
 Keyword names ignore case, therefore they are all in capital letters.
-Therefore, if you want to use Grep to inspect these keywords, use the
-@option{-i} option, like the example below.
+Therefore, if you want to use Grep to inspect these keywords, use the 
@option{-i} option, like the example below.
 
 @example
 $ astfits image_detected.fits -h0 | grep -i snquant
 @end example
 @end itemize
 
-The keywords above are classified (separated by an empty line and title) as
-a group titled ``ProgramName configuration''. This meta-data extension, as
-well as all the other extensions (which contain data), also contain have
-final group of keywords to keep the basic date and version information of
-Gnuastro, its dependencies and the pipeline that is using Gnuastro (if its
-under version control).
+The keywords above are classified (separated by an empty line and title) as a 
group titled ``ProgramName configuration''.
+This meta-data extension, as well as all the other extensions (which contain data), also contains a final group of keywords to keep the basic date and version information of Gnuastro, its dependencies, and the pipeline that is using Gnuastro (if it is under version control).
 
 @table @command
 
 @item DATE
-The creation time of the FITS file. This date is written directly by
-CFITSIO and is in UT format.
+The creation time of the FITS file.
+This date is written directly by CFITSIO and is in UT format.
 
 @item COMMIT
-Git's commit description from the running directory of Gnuastro's
-programs. If the running directory is not version controlled or
-@file{libgit2} isn't installed (see @ref{Optional dependencies}) then this
-keyword will not be present. The printed value is equivalent to the output
-of the following command:
+Git's commit description from the running directory of Gnuastro's programs.
+If the running directory is not version controlled or @file{libgit2} isn't 
installed (see @ref{Optional dependencies}) then this keyword will not be 
present.
+The printed value is equivalent to the output of the following command:
 
 @example
 git describe --dirty --always
 @end example
 
-If the running directory contains non-committed work, then the stored value
-will have a `@command{-dirty}' suffix. This can be very helpful to let you
-know that the data is not ready to be shared with collaborators or
-submitted to a journal. You should only share results that are produced
-after all your work is committed (safely stored in the version controlled
-history and thus reproducible).
-
-At first sight, version control appears to be mainly a tool for software
-developers. However progress in a scientific research is almost identical
-to progress in software development: first you have a rough idea that
-starts with handful of easy steps. But as the first results appear to be
-promising, you will have to extend, or generalize, it to make it more
-robust and work in all the situations your research covers, not just your
-first test samples. Slowly you will find wrong assumptions or bad
-implementations that need to be fixed (`bugs' in software development
-parlance). Finally, when you submit the research to your collaborators or a
-journal, many comments and suggestions will come in, and you have to
-address them.
-
-Software developers have created version control systems precisely for this
-kind of activity. Each significant moment in the project's history is
-called a ``commit'', see @ref{Version controlled source}. A snapshot of the
-project in each ``commit'' is safely stored away, so you can revert back to
-it at a later time, or check changes/progress. This way, you can be sure
-that your work is reproducible and track the progress and history. With
-version control, experimentation in the project's analysis is greatly
-facilitated, since you can easily revert back if a brainstorm test
-procedure fails.
-
-One important feature of version control is that the research result (FITS
-image, table, report or paper) can be stamped with the unique commit
-information that produced it. This information will enable you to exactly
-reproduce that same result later, even if you have made
-changes/progress. For one example of a research paper's reproduction
-pipeline, please see the
-@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline}
-of the @url{https://arxiv.org/abs/1505.01664, paper} describing
-@ref{NoiseChisel}.
+If the running directory contains non-committed work, then the stored value 
will have a `@command{-dirty}' suffix.
+This can be very helpful to let you know that the data is not ready to be 
shared with collaborators or submitted to a journal.
+You should only share results that are produced after all your work is 
committed (safely stored in the version controlled history and thus 
reproducible).
+
+At first sight, version control appears to be mainly a tool for software 
developers.
+However, progress in scientific research is almost identical to progress in software development: first you have a rough idea that starts with a handful of easy steps.
+But as the first results appear promising, you will have to extend or generalize it to make it more robust and work in all the situations your research covers, not just your first test samples.
+Slowly you will find wrong assumptions or bad implementations that need to be 
fixed (`bugs' in software development parlance).
+Finally, when you submit the research to your collaborators or a journal, many 
comments and suggestions will come in, and you have to address them.
+
+Software developers have created version control systems precisely for this 
kind of activity.
+Each significant moment in the project's history is called a ``commit'', see 
@ref{Version controlled source}.
+A snapshot of the project in each ``commit'' is safely stored away, so you can 
revert back to it at a later time, or check changes/progress.
+This way, you can be sure that your work is reproducible and track the 
progress and history.
+With version control, experimentation in the project's analysis is greatly 
facilitated, since you can easily revert back if a brainstorm test procedure 
fails.
+
+One important feature of version control is that the research result (FITS 
image, table, report or paper) can be stamped with the unique commit 
information that produced it.
+This information will enable you to exactly reproduce that same result later, 
even if you have made changes/progress.
+For one example of a research paper's reproduction pipeline, please see the 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, reproduction pipeline} of 
the @url{https://arxiv.org/abs/1505.01664, paper} describing @ref{NoiseChisel}.
 
 @item CFITSIO
 The version of CFITSIO used (see @ref{CFITSIO}).
 
 @item WCSLIB
-The version of WCSLIB used (see @ref{WCSLIB}). Note that older versions of
-WCSLIB do not report the version internally. So this is only available if
-you are using more recent WCSLIB versions.
+The version of WCSLIB used (see @ref{WCSLIB}).
+Note that older versions of WCSLIB do not report the version internally.
+So this is only available if you are using more recent WCSLIB versions.
 
 @item GSL
-The version of GNU Scientific Library that was used, see @ref{GNU
-Scientific Library}.
+The version of GNU Scientific Library that was used, see @ref{GNU Scientific 
Library}.
 
 @item GNUASTRO
 The version of Gnuastro used (see @ref{Version numbering}).
@@ -10128,43 +7505,27 @@ END
 @cindex File operations
 @cindex Operations on files
 @cindex General file operations
-The most low-level and basic property of a dataset is how it is stored. To
-process, archive and transmit the data, you need a container to store it
-first. From the start of the computer age, different formats have been
-defined to store data, optimized for particular applications. One
-format/container can never be useful for all applications: the storage
-defines the application and vice-versa. In astronomy, the Flexible Image
-Transport System (FITS) standard has become the most common format of data
-storage and transmission. It has many useful features, for example multiple
-sub-containers (also known as extensions or header data units, HDUs) within
-one file, or support for tables as well as images. Each HDU can store an
-independent dataset and its corresponding meta-data. Therefore, Gnuastro
-has one program (see @ref{Fits}) specifically designed to manipulate FITS
-HDUs and the meta-data (header keywords) in each HDU.
-
-Your astronomical research does not just involve data analysis (where the
-FITS format is very useful). For example you want to demonstrate your raw
-and processed FITS images or spectra as figures within slides, reports, or
-papers. The FITS format is not defined for such applications. Thus,
-Gnuastro also comes with the ConvertType program (see @ref{ConvertType})
-which can be used to convert a FITS image to and from (where possible)
-other formats like plain text and JPEG (which allow two way conversion),
-along with EPS and PDF (which can only be created from FITS, not the other
-way round).
-
-Finally, the FITS format is not just for images, it can also store
-tables. Binary tables in particular can be very efficient in storing
-catalogs that have more than a few tens of columns and rows. However,
-unlike images (where all elements/pixels have one data type), tables
-contain multiple columns and each column can have different properties:
-independent data types (see @ref{Numeric data types}) and meta-data. In
-practice, each column can be viewed as a separate container that is grouped
-with others in the table. The only shared property of the columns in a table
-is thus the number of elements they contain. To allow easy
-inspection/manipulation of table columns, Gnuastro has the Table program
-(see @ref{Table}). It can be used to select certain table columns in a FITS
-table and see them as a human readable output on the command-line, or to
-save them into another plain text or FITS table.
+The most low-level and basic property of a dataset is how it is stored.
+To process, archive and transmit the data, you need a container to store it 
first.
+From the start of the computer age, different formats have been defined to 
store data, optimized for particular applications.
+One format/container can never be useful for all applications: the storage 
defines the application and vice-versa.
+In astronomy, the Flexible Image Transport System (FITS) standard has become 
the most common format of data storage and transmission.
+It has many useful features, for example multiple sub-containers (also known 
as extensions or header data units, HDUs) within one file, or support for 
tables as well as images.
+Each HDU can store an independent dataset and its corresponding meta-data.
+Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to 
manipulate FITS HDUs and the meta-data (header keywords) in each HDU.
+
+Your astronomical research does not just involve data analysis (where the FITS 
format is very useful).
+For example, you may want to present your raw and processed FITS images or spectra as figures within slides, reports, or papers.
+The FITS format is not defined for such applications.
+Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType}), which can be used to convert a FITS image to and from (where possible) other formats like plain text and JPEG (which allow two-way conversion), along with EPS and PDF (which can only be created from FITS, not the other way round).
+
+Finally, the FITS format is not just for images, it can also store tables.
+Binary tables in particular can be very efficient in storing catalogs that 
have more than a few tens of columns and rows.
+However, unlike images (where all elements/pixels have one data type), tables 
contain multiple columns and each column can have different properties: 
independent data types (see @ref{Numeric data types}) and meta-data.
+In practice, each column can be viewed as a separate container that is grouped 
with others in the table.
+The only shared property of the columns in a table is thus the number of 
elements they contain.
+To allow easy inspection/manipulation of table columns, Gnuastro has the Table 
program (see @ref{Table}).
+It can be used to select certain table columns in a FITS table and view them as human-readable output on the command-line, or to save them into another plain text or FITS table.
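+
+For example, the minimal sketch below (assuming a hypothetical @file{catalog.fits} with @code{RA} and @code{DEC} columns) first prints the two columns on the command-line, then saves them into a plain text table:
+
+@example
+# `catalog.fits', `RA' and `DEC' are hypothetical names.
+$ asttable catalog.fits --column=RA --column=DEC
+$ asttable catalog.fits --column=RA --column=DEC \
+           --output=radec.txt
+@end example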
 
 @menu
 * Fits::                        View and manipulate extensions and keywords.
@@ -10181,70 +7542,38 @@ save them into another plain text or FITS table.
 @section Fits
 
 @cindex Vatican library
-The ``Flexible Image Transport System'', or FITS, is by far the most common
-data container format in astronomy and in constant use since the
-1970s. Archiving (future usage, simplicity) has been one of the primary
-design principles of this format. In the last few decades it has proved so
-useful and robust that the Vatican Library has also chosen FITS for its
-``long-term digital preservation''
-project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
+The ``Flexible Image Transport System'', or FITS, is by far the most common 
data container format in astronomy and in constant use since the 1970s.
+Archiving (future usage, simplicity) has been one of the primary design 
principles of this format.
+In the last few decades it has proved so useful and robust that the Vatican 
Library has also chosen FITS for its ``long-term digital preservation'' 
project@footnote{@url{https://www.vaticanlibrary.va/home.php?pag=progettodigit}}.
 
 @cindex IAU, international astronomical union
-Although the full name of the standard invokes the idea that it is only for
-images, it also contains complete and robust features for tables. It
-started off in the 1970s and was formally published as a standard in 1981,
-it was adopted by the International Astronomical Union (IAU) in 1982 and an
-IAU working group to maintain its future was defined in 1988. The FITS 2.0
-and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0
-draft has also been released recently, please see the
-@url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document
-webpage} for the full text of all versions. Also see the
-@url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard paper}
-for a nice introduction and history along with the full standard.
+Although the full name of the standard invokes the idea that it is only for 
images, it also contains complete and robust features for tables.
+It started off in the 1970s and was formally published as a standard in 1981; it was adopted by the International Astronomical Union (IAU) in 1982, and an IAU working group to maintain its future was defined in 1988.
+The FITS 2.0 and 3.0 standards were approved in 2000 and 2008 respectively, and the 4.0 draft has also been released recently; please see the @url{https://fits.gsfc.nasa.gov/fits_standard.html, FITS standard document webpage} for the full text of all versions.
+Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 
standard paper} for a nice introduction and history along with the full 
standard.
 
 @cindex Meta-data
-Many common image formats, for example a JPEG, only have one image/dataset
-per file, however one great advantage of the FITS standard is that it
-allows you to keep multiple datasets (images or tables along with their
-separate meta-data) in one file. In the FITS standard, each data + metadata
-is known as an extension, or more formally a header data unit or HDU. The
-HDUs in a file can be completely independent: you can have multiple images
-of different dimensions/sizes or tables as separate extensions in one
-file. However, while the standard doesn't impose any constraints on the
-relation between the datasets, it is strongly encouraged to group data that
-are contextually related with each other in one file. For example an image
-and the table/catalog of objects and their measured properties in that
-image. Other examples can be images of one patch of sky in different colors
-(filters), or one raw telescope image along with its calibration data
-(tables or images).
-
-As discussed above, the extensions in a FITS file can be completely
-independent. To keep some information (meta-data) about the group of
-extensions in the FITS file, the community has adopted the following
-convention: put no data in the first extension, so it is just
-meta-data. This extension can thus be used to store Meta-data regarding the
-whole file (grouping of extensions). Subsequent extensions may contain data
-along with their own separate meta-data. All of Gnuastro's programs also
-follow this convention: the main output dataset(s) are placed in the second
-(or later) extension(s). The first extension contains no data the program's
-configuration (input file name, along with all its option values) are
-stored as its meta-data, see @ref{Output FITS files}.
-
-The meta-data contain information about the data, for example which region
-of the sky an image corresponds to, the units of the data, what telescope,
-camera, and filter the data were taken with, it observation date, or the
-software that produced it and its configuration. Without the meta-data, the
-raw dataset is practically just a collection of numbers and really hard to
-understand, or connect with the real world (other datasets). It is thus
-strongly encouraged to supplement your data (at any level of processing)
-with as much meta-data about your processing/science as possible.
-
-The meta-data of a FITS file is in ASCII format, which can be easily viewed
-or edited with a text editor or on the command-line. Each meta-data element
-(known as a keyword generally) is composed of a name, value, units and
-comments (the last two are optional). For example below you can see three
-FITS meta-data keywords for specifying the world coordinate system (WCS, or
-its location in the sky) of a dataset:
+Many common image formats, for example a JPEG, only have one image/dataset per file; however, one great advantage of the FITS standard is that it allows you to keep multiple datasets (images or tables along with their separate meta-data) in one file.
+In the FITS standard, each data + metadata is known as an extension, or more 
formally a header data unit or HDU.
+The HDUs in a file can be completely independent: you can have multiple images 
of different dimensions/sizes or tables as separate extensions in one file.
+However, while the standard doesn't impose any constraints on the relation 
between the datasets, it is strongly encouraged to group data that are 
contextually related with each other in one file.
+For example, an image and the table/catalog of objects and their measured properties in that image.
+Other examples can be images of one patch of sky in different colors 
(filters), or one raw telescope image along with its calibration data (tables 
or images).
+
+As discussed above, the extensions in a FITS file can be completely 
independent.
+To keep some information (meta-data) about the group of extensions in the FITS 
file, the community has adopted the following convention: put no data in the 
first extension, so it is just meta-data.
+This extension can thus be used to store meta-data regarding the whole file (grouping of extensions).
+Subsequent extensions may contain data along with their own separate meta-data.
+All of Gnuastro's programs also follow this convention: the main output 
dataset(s) are placed in the second (or later) extension(s).
+The first extension contains no data; the program's configuration (input file name, along with all its option values) is stored as its meta-data, see @ref{Output FITS files}.
+
+The meta-data contain information about the data, for example which region of the sky an image corresponds to, the units of the data, what telescope, camera, and filter the data were taken with, its observation date, or the software that produced it and its configuration.
+Without the meta-data, the raw dataset is practically just a collection of 
numbers and really hard to understand, or connect with the real world (other 
datasets).
+It is thus strongly encouraged to supplement your data (at any level of 
processing) with as much meta-data about your processing/science as possible.
+
+The meta-data of a FITS file is in ASCII format, which can be easily viewed or 
edited with a text editor or on the command-line.
+Each meta-data element (generally known as a keyword) is composed of a name, value, units and comments (the last two are optional).
+For example below you can see three FITS meta-data keywords for specifying the 
world coordinate system (WCS, or its location in the sky) of a dataset:
 
 @example
 LATPOLE =           -27.805089 / [deg] Native latitude of celestial pole
@@ -10252,21 +7581,13 @@ RADESYS = 'FK5'                / Equatorial coordinate 
system
 EQUINOX =               2000.0 / [yr] Equinox of equatorial coordinates
 @end example
 
-However, there are some limitations which discourage viewing/editing the
-keywords with text editors. For example there is a fixed length of 80
-characters for each keyword (its name, value, units and comments) and there
-are no new-line characters, so on a text editor all the keywords are seen
-in one line. Also, the meta-data keywords are immediately followed by the
-data which are commonly in binary format and will show up as strange
-looking characters on a text editor, and significantly slowing down the
-processor.
+However, there are some limitations which discourage viewing/editing the 
keywords with text editors.
+For example there is a fixed length of 80 characters for each keyword (its 
name, value, units and comments) and there are no new-line characters, so on a 
text editor all the keywords are seen in one line.
+Also, the meta-data keywords are immediately followed by the data, which are commonly in binary format; these will show up as strange-looking characters in a text editor and can significantly slow it down.
 
-Gnuastro's Fits program was designed to allow easy manipulation of FITS
-extensions and meta-data keywords on the command-line while conforming
-fully with the FITS standard. For example you can copy or cut (copy and
-remove) HDUs/extensions from one FITS file to another, or completely delete
-them. It also has features to delete, add, or edit meta-data keywords
-within one HDU.
+Gnuastro's Fits program was designed to allow easy manipulation of FITS 
extensions and meta-data keywords on the command-line while conforming fully 
with the FITS standard.
+For example you can copy or cut (copy and remove) HDUs/extensions from one 
FITS file to another, or completely delete them.
+It also has features to delete, add, or edit meta-data keywords within one HDU.
 
 @menu
 * Invoking astfits::            Arguments and options to Header.
@@ -10275,9 +7596,8 @@ within one HDU.
 @node Invoking astfits,  , Fits, Fits
 @subsection Invoking Fits
 
-Fits can print or manipulate the FITS file HDUs (extensions), meta-data
-keywords in a given HDU. The executable name is @file{astfits} with the
-following general template
+Fits can print or manipulate the FITS file HDUs (extensions) or the meta-data keywords in a given HDU.
+The executable name is @file{astfits} with the following general template
 
 @example
 $ astfits [OPTION...] ASTRdata
@@ -10314,29 +7634,15 @@ $ astfits --write=MYKEY1,20.00,"An example keyword" 
--write=MYKEY2,fd
 @end example
 
 @cindex HDU
-When no action is requested (and only a file name is given), Fits will
-print a list of information about the extension(s) in the file. This
-information includes the HDU number, HDU name (@code{EXTNAME} keyword),
-type of data (see @ref{Numeric data types}, and the number of data elements
-it contains (size along each dimension for images and table rows and
-columns). You can use this to get a general idea of the contents of the
-FITS file and what HDU to use for further processing, either with the Fits
-program or any other Gnuastro program.
-
-Here is one example of information about a FITS file with four extensions:
-the first extension has no data, it is a purely meta-data HDU (commonly
-used to keep meta-data about the whole file, or grouping of extensions, see
-@ref{Fits}). The second extension is an image with name @code{IMAGE} and
-single precision floating point type (@code{float32}, see @ref{Numeric data
-types}), it has 4287 pixels along its first (horizontal) axis and 4286
-pixels along its second (vertical) axis. The third extension is also an
-image with name @code{MASK}. It is in 2-byte integer format (@code{int16})
-which is commonly used to keep information about pixels (for example to
-identify which ones were saturated, or which ones had cosmic rays and so
-on), note how it has the same size as the @code{IMAGE} extension. The third
-extension is a binary table called @code{CATALOG} which has 12371 rows and
-5 columns (it probably contains information about the sources in the
-image).
+When no action is requested (and only a file name is given), Fits will print a 
list of information about the extension(s) in the file.
+This information includes the HDU number, HDU name (@code{EXTNAME} keyword), type of data (see @ref{Numeric data types}), and the number of data elements it contains (size along each dimension for images, and rows and columns for tables).
+You can use this to get a general idea of the contents of the FITS file and 
what HDU to use for further processing, either with the Fits program or any 
other Gnuastro program.
+
+Here is one example of information about a FITS file with four extensions: the 
first extension has no data, it is a purely meta-data HDU (commonly used to 
keep meta-data about the whole file, or grouping of extensions, see @ref{Fits}).
+The second extension is an image with name @code{IMAGE} and single precision floating point type (@code{float32}, see @ref{Numeric data types}); it has 4287 pixels along its first (horizontal) axis and 4286 pixels along its second (vertical) axis.
+The third extension is also an image with name @code{MASK}.
+It is in 2-byte integer format (@code{int16}), which is commonly used to keep information about pixels (for example to identify which ones were saturated, or which ones had cosmic rays and so on); note how it has the same size as the @code{IMAGE} extension.
+The fourth extension is a binary table called @code{CATALOG}, which has 12371 rows and 5 columns (it probably contains information about the sources in the image).
 
 @example
 GNU Astronomy Utilities X.X
@@ -10354,26 +7660,16 @@ HDU (extension) information: `image.fits'.
 3      CATALOG         table_binary    12371x5
 @end example
 
-If a specific HDU is identified on the command-line with the @option{--hdu}
-(or @option{-h} option) and no operation requested, then the full list of
-header keywords in that HDU will be printed (as if the
-@option{--printallkeys} was called, see below). It is important to remember
-that this only occurs when @option{--hdu} is given on the command-line. The
-@option{--hdu} value given in a configuration file will only be used when a
-specific operation on keywords requested. Therefore as described in the
-paragraphs above, when no explicit call to the @option{--hdu} option is
-made on the command-line and no operation is requested (on the command-line
-or configuration files), the basic information of each HDU/extension is
-printed.
-
-The operating mode and input/output options to Fits are similar to the
-other programs and fully described in @ref{Common options}. The options
-particular to Fits can be divided into two groups: 1) those related to
-modifying HDUs or extensions (see @ref{HDU manipulation}), and 2) those
-related to viewing/modifying meta-data keywords (see @ref{Keyword
-manipulation}). These two classes of options cannot be called together in
-one run: you can either work on the extensions or meta-data keywords in any
-instance of Fits.
+If a specific HDU is identified on the command-line with the @option{--hdu} 
(or @option{-h} option) and no operation requested, then the full list of 
header keywords in that HDU will be printed (as if the @option{--printallkeys} 
was called, see below).
+It is important to remember that this only occurs when @option{--hdu} is given 
on the command-line.
+The @option{--hdu} value given in a configuration file will only be used when a specific operation on keywords is requested.
+Therefore, as described in the paragraphs above, when no explicit call to the @option{--hdu} option is made on the command-line and no operation is requested (on the command-line or configuration files), the basic information of each HDU/extension is printed.
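+
+For example, with a hypothetical @file{image.fits}, the first command below prints the basic information of every HDU, while the second prints all the keywords in its second HDU (number 1):
+
+@example
+$ astfits image.fits       # Basic info of every HDU.
+$ astfits image.fits -h1   # All keywords of HDU number 1.
+@end example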
+
+The operating mode and input/output options to Fits are similar to the other 
programs and fully described in @ref{Common options}.
+The options particular to Fits can be divided into two groups:
+1) those related to modifying HDUs or extensions (see @ref{HDU manipulation}), 
and
+2) those related to viewing/modifying meta-data keywords (see @ref{Keyword 
manipulation}).
+These two classes of options cannot be called together in one run: you can 
either work on the extensions or meta-data keywords in any instance of Fits.
 
 @menu
 * HDU manipulation::            Manipulate HDUs within a FITS file.
@@ -10386,39 +7682,29 @@ instance of Fits.
 
 @node HDU manipulation, Keyword manipulation, Invoking astfits, Invoking 
astfits
 @subsubsection HDU manipulation
-Each header data unit, or HDU (also known as an extension), in a FITS file
-is an independent dataset (data + meta-data). Multiple HDUs can be stored
-in one FITS file, see @ref{Fits}. The HDU modifying options to the Fits
-program are listed below.
-
-These options may be called multiple times in one run. If so, the
-extensions will be copied from the input FITS file to the output FITS file
-in the given order (on the command-line and also in configuration files,
-see @ref{Configuration file precedence}). If the separate classes are
-called together in one run of Fits, then first @option{--copy} is run (on
-all specified HDUs), followed by @option{--cut} (again on all specified
-HDUs), and then @option{--remove} (on all specified HDUs).
-
-The @option{--copy} and @option{--cut} options need an output FITS file
-(specified with the @option{--output} option). If the output file exists,
-then the specified HDU will be copied following the last extension of the
-output file (the existing HDUs in it will be untouched). Thus, after Fits
-finishes, the copied HDU will be the last HDU of the output file. If no
-output file name is given, then automatic output will be used to store the
-HDUs given to this option (see @ref{Automatic output}).
+Each header data unit, or HDU (also known as an extension), in a FITS file is 
an independent dataset (data + meta-data).
+Multiple HDUs can be stored in one FITS file, see @ref{Fits}.
+The HDU modifying options to the Fits program are listed below.
+
+These options may be called multiple times in one run.
+If so, the extensions will be copied from the input FITS file to the output 
FITS file in the given order (on the command-line and also in configuration 
files, see @ref{Configuration file precedence}).
+If the separate classes are called together in one run of Fits, then first 
@option{--copy} is run (on all specified HDUs), followed by @option{--cut} 
(again on all specified HDUs), and then @option{--remove} (on all specified 
HDUs).
+
+The @option{--copy} and @option{--cut} options need an output FITS file 
(specified with the @option{--output} option).
+If the output file exists, then the specified HDU will be copied following the 
last extension of the output file (the existing HDUs in it will be untouched).
+Thus, after Fits finishes, the copied HDU will be the last HDU of the output 
file.
+If no output file name is given, then automatic output will be used to store 
the HDUs given to this option (see @ref{Automatic output}).
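+
+As a minimal sketch (with hypothetical file names), the command below copies the second HDU (number 1) of @file{in.fits} into @file{out.fits}, creating the latter if it doesn't already exist:
+
+@example
+# `in.fits' and `out.fits' are hypothetical names.
+$ astfits in.fits --copy=1 --output=out.fits
+@end example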
 
 @table @option
 
 @item -n
 @itemx --numhdus
-Print the number of extensions/HDUs in the given file. Note that this
-option must be called alone and will only print a single number. It is thus
-useful in scripts, for example when you need to do check the number of
-extensions in a FITS file.
+Print the number of extensions/HDUs in the given file.
+Note that this option must be called alone and will only print a single number.
+It is thus useful in scripts, for example when you need to check the number of extensions in a FITS file.
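+
+For example, the sketch below (with a hypothetical file name) stores the number of extensions in a shell variable for later steps of a script:
+
+@example
+# `input.fits' is a hypothetical name.
+$ nhdu=$(astfits input.fits --numhdus)
+$ echo "input.fits contains $nhdu HDUs"
+@end example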
 
-For a complete list of basic meta-data on the extensions in a FITS file,
-don't use any of the options in this section or in @ref{Keyword
-manipulation}. For more, see @ref{Invoking astfits}.
+For a complete list of basic meta-data on the extensions in a FITS file, don't 
use any of the options in this section or in @ref{Keyword manipulation}.
+For more, see @ref{Invoking astfits}.
 
 @item -C STR
 @itemx --copy=STR
@@ -10433,51 +7719,39 @@ output file, see explanations above.
 @itemx --remove=STR
 Remove the specified HDU from the input file.
 
-The first (zero-th) HDU cannot be removed with this option. Consider using
-@option{--copy} or @option{--cut} in combination with
-@option{primaryimghdu} to not have an empty zero-th HDU. From CFITSIO: ``In
-the case of deleting the primary array (the first HDU in the file) then
-[it] will be replaced by a null primary array containing the minimum set of
-required keywords and no data.''. So in practice, any existing data (array)
-and meta-data in the first extension will be removed, but the number of
-extensions in the file won't change. This is because of the unique position
-the first FITS extension has in the FITS standard (for example it cannot be
-used to store tables).
+The first (zero-th) HDU cannot be removed with this option.
+Consider using @option{--copy} or @option{--cut} in combination with @option{--primaryimghdu} to avoid an empty zero-th HDU.
+From CFITSIO: ``In the case of deleting the primary array (the first HDU in 
the file) then [it] will be replaced by a null primary array containing the 
minimum set of required keywords and no data.''.
+So in practice, any existing data (array) and meta-data in the first extension 
will be removed, but the number of extensions in the file won't change.
+This is because of the unique position the first FITS extension has in the 
FITS standard (for example it cannot be used to store tables).
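+
+For example, the sketch below removes the third HDU (number 2) of a hypothetical @file{input.fits}:
+
+@example
+$ astfits input.fits --remove=2   # `input.fits' is hypothetical.
+@end example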
 
 @item --primaryimghdu
-Copy or cut an image HDU to the zero-th HDU/extension a file that doesn't
-yet exist. This option is thus irrelevant if the output file already exists
-or the copied/cut extension is a FITS table.
+Copy or cut an image HDU to the zero-th HDU/extension of a file that doesn't yet exist.
+This option is thus irrelevant if the output file already exists or the 
copied/cut extension is a FITS table.
 
 @end table
 
 
 @node Keyword manipulation,  , HDU manipulation, Invoking astfits
 @subsubsection Keyword manipulation
-The meta-data in each header data unit, or HDU (also known as extension,
-see @ref{Fits}) is stored as ``keyword''s. Each keyword consists of a name,
-value, unit, and comments. The Fits program (see @ref{Fits}) options
-related to viewing and manipulating keywords in a FITS HDU are described
-below.
-
-To see the full list of keywords in a FITS HDU, you can use the
-@option{--printallkeys} option. If any of the keywords are to be modified,
-the headers of the input file will be changed. If you want to keep the
-original FITS file or HDU, it is easiest to create a copy first and then
-run Fits on that. In the FITS standard, keywords are always uppercase. So
-case does not matter in the input or output keyword names you specify.
-
-Most of the options can accept multiple instances in one command. For
-example you can add multiple keywords to delete by calling
-@option{--delete} multiple times, since repeated keywords are allowed, you
-can even delete the same keyword multiple times. The action of such options
-will start from the top most keyword.
-
-The precedence of operations are described below. Note that while the order
-within each class of actions is preserved, the order of individual actions
-is not. So irrespective of what order you called @option{--delete} and
-@option{--update}. First, all the delete operations are going to take
-effect then the update operations.
+The meta-data in each header data unit, or HDU (also known as extension, see 
@ref{Fits}) is stored as ``keyword''s.
+Each keyword consists of a name, value, unit, and comments.
+The options of the Fits program (see @ref{Fits}) related to viewing and manipulating keywords in a FITS HDU are described below.
+
+To see the full list of keywords in a FITS HDU, you can use the 
@option{--printallkeys} option.
+If any of the keywords are to be modified, the headers of the input file will 
be changed.
+If you want to keep the original FITS file or HDU, it is easiest to create a 
copy first and then run Fits on that.
+In the FITS standard, keywords are always uppercase.
+So case does not matter in the input or output keyword names you specify.
+
+Most of the options can accept multiple instances in one command.
+For example, you can specify multiple keywords to delete by calling @option{--delete} multiple times; since repeated keywords are allowed, you can even delete the same keyword multiple times.
+The action of such options will start from the top-most keyword.
+
+The precedence of operations is described below.
+Note that while the order within each class of actions is preserved, the order of individual actions is not.
+So irrespective of the order in which you called @option{--delete} and @option{--update}, first all the delete operations will take effect, then the update operations.
 @enumerate
 @item
 @option{--delete}
@@ -10503,18 +7777,14 @@ effect then the update operations.
 @option{--copykeys}
 @end enumerate
 @noindent
-All possible syntax errors will be reported before the keywords are
-actually written. FITS errors during any of these actions will be reported,
-but Fits won't stop until all the operations are complete. If
-@option{--quitonerror} is called, then Fits will immediately stop upon the
-first error.
+All possible syntax errors will be reported before the keywords are actually 
written.
+FITS errors during any of these actions will be reported, but Fits won't stop 
until all the operations are complete.
+If @option{--quitonerror} is called, then Fits will immediately stop upon the 
first error.
 
 @cindex GNU Grep
-If you want to inspect only a certain set of header keywords, it is easiest
-to pipe the output of the Fits program to GNU Grep. Grep is a very powerful
-and advanced tool to search strings which is precisely made for such
-situations. For example if you only want to check the size of an image FITS
-HDU, you can run:
+If you want to inspect only a certain set of header keywords, it is easiest to 
pipe the output of the Fits program to GNU Grep.
+Grep is a very powerful and advanced tool to search strings, which is precisely made for such situations.
+For example if you only want to check the size of an image FITS HDU, you can 
run:
 
 @example
 $ astfits input.fits | grep NAXIS
@@ -10522,14 +7792,10 @@ $ astfits input.fits | grep NAXIS
 
 @cartouche
 @noindent
-@strong{FITS STANDARD KEYWORDS:} Some header keywords are necessary
-for later operations on a FITS file, for example BITPIX or NAXIS, see
-the FITS standard for their full list. If you modify (for example
-remove or rename) such keywords, the FITS file extension might not be
-usable any more. Also be careful for the world coordinate system
-keywords, if you modify or change their values, any future world
-coordinate system (like RA and Dec) measurements on the image will
-also change.
+@strong{FITS STANDARD KEYWORDS:}
+Some header keywords are necessary for later operations on a FITS file, for example BITPIX or NAXIS; see the FITS standard for their full list.
+If you modify (for example remove or rename) such keywords, the FITS file extension might not be usable any more.
+Also be careful with the world coordinate system keywords: if you modify or change their values, any future world coordinate system (like RA and Dec) measurements on the image will also change.
 @end cartouche
 
 
@@ -10539,14 +7805,11 @@ The keyword related options to the Fits program are 
fully described below.
 
 @item -a STR
 @itemx --asis=STR
-Write @option{STR} exactly into the FITS file header with no
-modifications. If it does not conform to the FITS standards, then it might
-cause trouble, so please be very careful with this option. If you want to
-define the keyword from scratch, it is best to use the @option{--write}
-option (see below) and let CFITSIO worry about the standards. The best way
-to use this option is when you want to add a keyword from one FITS file to
-another unchanged and untouched. Below is an example of such a case that
-can be very useful sometimes (on the command-line or in scripts):
+Write @option{STR} exactly into the FITS file header with no modifications.
+If it does not conform to the FITS standards, then it might cause trouble, so 
please be very careful with this option.
+If you want to define the keyword from scratch, it is best to use the 
@option{--write} option (see below) and let CFITSIO worry about the standards.
+The best way to use this option is when you want to add a keyword from one 
FITS file to another unchanged and untouched.
+Below is an example of such a case that can be very useful sometimes (on the 
command-line or in scripts):
 
 @example
 $ key=$(astfits firstimage.fits | grep KEYWORD)
@@ -10554,43 +7817,32 @@ $ astfits --asis="$key" secondimage.fits
 @end example
 
 @cindex GNU Bash
-In particular note the double quotation signs (@key{"}) around the
-reference to the @command{key} shell variable (@command{$key}), since FITS
-keywords usually have lots of space characters, if this variable is not
-quoted, the shell will only give the first word in the full keyword to this
-option, which will definitely be a non-standard FITS keyword and will make
-it hard to work on the file afterwords. See the ``Quoting'' section of the
-GNU Bash manual for more information if your keyword has the special
-characters @key{$}, @key{`}, or @key{\}.
+In particular note the double quotation signs (@key{"}) around the reference 
to the @command{key} shell variable (@command{$key}).
+FITS keywords usually have lots of space characters; if this variable is not quoted, the shell will only give the first word in the full keyword to this option, which will definitely be a non-standard FITS keyword and will make it hard to work on the file afterwards.
+See the ``Quoting'' section of the GNU Bash manual for more information if 
your keyword has the special characters @key{$}, @key{`}, or @key{\}.
 
 @item -d STR
 @itemx --delete=STR
-Delete one instance of the @option{STR} keyword from the FITS
-header. Multiple instances of @option{--delete} can be given (possibly even
-for the same keyword, when its repeated in the meta-data). All keywords
-given will be removed from the headers in the same given order. If the
-keyword doesn't exist, Fits will give a warning and return with a non-zero
-value, but will not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Delete one instance of the @option{STR} keyword from the FITS header.
+Multiple instances of @option{--delete} can be given (possibly even for the same keyword, when it is repeated in the meta-data).
+All keywords given will be removed from the headers in the same given order.
+If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
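+
+For example, the sketch below deletes the hypothetical @code{MYKEY} keyword and two instances of @code{HISTORY} from the second HDU of @file{input.fits}:
+
+@example
+# `MYKEY' and `input.fits' are hypothetical names.
+$ astfits input.fits -h1 --delete=MYKEY --delete=HISTORY \
+          --delete=HISTORY
+@end example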
 
 @item -r STR
 @itemx --rename=STR
-Rename a keyword to a new value. @option{STR} contains both the existing
-and new names, which should be separated by either a comma (@key{,}) or a
-space character. Note that if you use a space character, you have to put
-the value to this option within double quotation marks (@key{"}) so the
-space character is not interpreted as an option separator. Multiple
-instances of @option{--rename} can be given in one command. The keywords
-will be renamed in the specified order. If the keyword doesn't exist, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Rename a keyword to a new value.
+@option{STR} contains both the existing and new names, which should be 
separated by either a comma (@key{,}) or a space character.
+Note that if you use a space character, you have to put the value to this 
option within double quotation marks (@key{"}) so the space character is not 
interpreted as an option separator.
+Multiple instances of @option{--rename} can be given in one command.
+The keywords will be renamed in the specified order.
+If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
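+
+For example, both sketches below rename the hypothetical @code{OLDKEY} keyword to @code{NEWKEY} (note the double quotation marks in the second, where a space is used as the separator):
+
+@example
+$ astfits input.fits -h1 --rename=OLDKEY,NEWKEY
+$ astfits input.fits -h1 --rename="OLDKEY NEWKEY"
+@end example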
 
 @item -u STR
 @itemx --update=STR
-Update a keyword, its value, its comments and its units in the format
-described below. If there are multiple instances of the keyword in the
-header, they will be changed from top to bottom (with multiple
-@option{--update} options).
+Update a keyword, its value, its comments and its units in the format 
described below.
+If there are multiple instances of the keyword in the header, they will be 
changed from top to bottom (with multiple @option{--update} options).
 
 @noindent
 The format of the values to this option can best be specified with an
@@ -10600,50 +7852,36 @@ example:
 --update=KEYWORD,value,"comments for this keyword",unit
 @end example
 
-If there is a writing error, Fits will give a warning and return with a
-non-zero value, but will not stop. To stop as soon as an error occurs, run
-with @option{--quitonerror}.
-
-@noindent
-The value can be any numerical or string value@footnote{Some tricky
-situations arise with values like `@command{87095e5}', if this was intended
-to be a number it will be kept in the header as @code{8709500000} and there
-is no problem. But this can also be a shortened Git commit hash. In the
-latter case, it should be treated as a string and stored as it is
-written. Commit hashes are very important in keeping the history of a file
-during your research and such values might arise without you noticing them
-in your reproduction pipeline. One solution is to use @command{git
-describe} instead of the short hash alone. A less recommended solution is
-to add a space after the commit hash and Fits will write the value as
-`@command{87095e5 }' in the header. If you later compare the strings on the
-shell, the space character will be ignored by the shell in the latter
-solution and there will be no problem.}. Other than the @code{KEYWORD}, all
-the other values are optional. To leave a given token empty, follow the
-preceding comma (@key{,}) immediately with the next. If any space character
-is present around the commas, it will be considered part of the respective
-token. So if more than one token has space characters within it, the safest
-method to specify a value to this option is to put double quotation marks
-around each individual token that needs it. Note that without double
-quotation marks, space characters will be seen as option separators and can
-lead to undefined behavior.
+If there is a writing error, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
+
+@noindent
+The value can be any numerical or string value@footnote{Some tricky situations arise with values like `@command{87095e5}': if this was intended to be a number, it will be kept in the header as @code{8709500000} and there is no problem.
+But this can also be a shortened Git commit hash.
+In the latter case, it should be treated as a string and stored as it is 
written.
+Commit hashes are very important in keeping the history of a file during your 
research and such values might arise without you noticing them in your 
reproduction pipeline.
+One solution is to use @command{git describe} instead of the short hash alone.
+A less recommended solution is to add a space after the commit hash and Fits 
will write the value as `@command{87095e5 }' in the header.
+If you later compare the strings on the shell, the space character will be 
ignored by the shell in the latter solution and there will be no problem.}.
+Other than the @code{KEYWORD}, all the other values are optional.
+To leave a given token empty, follow the preceding comma (@key{,}) immediately 
with the next.
+If any space character is present around the commas, it will be considered 
part of the respective token.
+So if more than one token has space characters within it, the safest method to 
specify a value to this option is to put double quotation marks around each 
individual token that needs it.
+Note that without double quotation marks, space characters will be seen as 
option separators and can lead to undefined behavior.
 
 @item -w STR
 @itemx --write=STR
-Write a keyword to the header. For the possible value input formats,
-comments and units for the keyword, see the @option{--update} option
-above. The special names (first string) below will cause a special
-behavior:
+Write a keyword to the header.
+For the possible value input formats, comments and units for the keyword, see 
the @option{--update} option above.
+The special names (first string) below will cause a special behavior:
 
 @table @option
 
 @item /
-Write a ``title'' to the list of keywords. A title consists of one blank
-line and another which is blank for several spaces and starts with a slash
-(@key{/}). The second string given to this option is the ``title'' or
-string printed after the slash. For example with the command below you can
-add a ``title'' of `My keywords' after the existing keywords and add the
-subsequent @code{K1} and @code{K2} keywords under it (note that keyword
-names are not case sensitive).
+Write a ``title'' to the list of keywords.
+A title consists of one blank line followed by another line that starts with several blank spaces and a slash (@key{/}).
+The second string given to this option is the ``title'' or string printed 
after the slash.
+For example with the command below you can add a ``title'' of `My keywords' 
after the existing keywords and add the subsequent @code{K1} and @code{K2} 
keywords under it (note that keyword names are not case sensitive).
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords" \
@@ -10658,19 +7896,14 @@ K2      =                 4.56 / My second keyword
 END
 @end example
 
-Adding a ``title'' before each contextually separate group of header
-keywords greatly helps in readability and visual inspection of the
-keywords. So generally, when you want to add new FITS keywords, its good
-practice to also add a title before them.
+Adding a ``title'' before each contextually separate group of header keywords 
greatly helps in readability and visual inspection of the keywords.
+So generally, when you want to add new FITS keywords, it's good practice to also add a title before them.
 
-The reason you need to use @key{/} as the keyword name for setting a title
-is that @key{/} is the first non-white character.
+The reason you need to use @key{/} as the keyword name for setting a title is 
that @key{/} is the first non-white character.
 
-The title(s) is(are) written into the FITS with the same order that
-@option{--write} is called. Therefore in one run of the Fits program, you
-can specify many different titles (with their own keywords under them). For
-example the command below that builds on the previous example and adds
-another group of keywords named @code{A1} and @code{A2}.
+The title(s) will be written into the FITS file in the same order that @option{--write} is called.
+Therefore in one run of the Fits program, you can specify many different titles (with their own keywords under them).
+For example, the command below builds on the previous example and adds another group of keywords named @code{A1} and @code{A2}.
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords"   \
@@ -10685,97 +7918,70 @@ $ astfits test.fits -h1 --write=/,"My keywords"   \
 @cindex CFITSIO
 @cindex @code{DATASUM}: FITS keyword
 @cindex @code{CHECKSUM}: FITS keyword
-When nothing is given afterwards, the header integrity
-keywords@footnote{Section 4.4.2.7 (page 15) of
-@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
-@code{DATASUM} and @code{CHECKSUM} will be written/updated. They are
-calculated and written by CFITSIO. They thus comply with the FITS standard
-4.0 that defines these keywords.
-
-If a value is given (for example
-@option{--write=checksum,my-own-checksum,"my checksum"}), then CFITSIO
-won't be called to calculate these two keywords and the value (as well as
-possible comment and unit) will be written just like any other
-keyword. This is generally not recommended, but necessary in special
-circumstances (where the checksum needs to be manually updated for
-example).
-
-@code{DATASUM} only depends on the data section of the HDU/extension, so it
-is not changed when you update the keywords. But @code{CHECKSUM} also
-depends on the header and will not be valid if you make any further changes
-to the header. This includes any further keyword modification options in
-the same call to the Fits program. Therefore it is recommended to write
-these keywords as the last keywords that are written/modified in the
-extension. You can use the @option{--verify} option (described below) to
-verify the values of these two keywords.
+When nothing is given afterwards, the header integrity 
keywords@footnote{Section 4.4.2.7 (page 15) of 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}} 
@code{DATASUM} and @code{CHECKSUM} will be written/updated.
+They are calculated and written by CFITSIO.
+They thus comply with the FITS standard 4.0 that defines these keywords.
+
+If a value is given (for example @option{--write=checksum,my-own-checksum,"my 
checksum"}), then CFITSIO won't be called to calculate these two keywords and 
the value (as well as possible comment and unit) will be written just like any 
other keyword.
+This is generally not recommended, but necessary in special circumstances 
(where the checksum needs to be manually updated for example).
+
+@code{DATASUM} only depends on the data section of the HDU/extension, so it is 
not changed when you update the keywords.
+But @code{CHECKSUM} also depends on the header and will not be valid if you 
make any further changes to the header.
+This includes any further keyword modification options in the same call to the 
Fits program.
+Therefore it is recommended to write these keywords as the last keywords that 
are written/modified in the extension.
+You can use the @option{--verify} option (described below) to verify the 
values of these two keywords.
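+
+For example, once all other keyword edits are complete, a final call like the sketch below (on a hypothetical file) will write/update both integrity keywords:
+
+@example
+$ astfits input.fits -h1 --write=checksum
+@end example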
 
 @item datasum
-Similar to @option{checksum}, but only write the @code{DATASUM} keyword
-(that doesn't depend on the header keywords, only the data).
+Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that 
doesn't depend on the header keywords, only the data).
 @end table
 
 @item -H STR
 @itemx --history STR
-Add a @code{HISTORY} keyword to the header with the given value. A new
-@code{HISTORY} keyword will be created for every instance of this
-option. If the string given to this option is longer than 70 characters, it
-will be separated into multiple keyword cards. If there is an error, Fits
-will give a warning and return with a non-zero value, but will not stop. To
-stop as soon as an error occurs, run with @option{--quitonerror}.
+Add a @code{HISTORY} keyword to the header with the given value.
+A new @code{HISTORY} keyword will be created for every instance of this option.
+If the string given to this option is longer than 70 characters, it will be separated into multiple keyword cards.
+If there is an error, Fits will give a warning and return with a non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
 
 @item -c STR
 @itemx --comment STR
-Add a @code{COMMENT} keyword to the header with the given value. Similar to
-the explanation for @option{--history} above.
+Add a @code{COMMENT} keyword to the header with the given value.
+Similar to the explanation for @option{--history} above.
 
 @item -t
 @itemx --date
-Put the current date and time in the header. If the @code{DATE} keyword
-already exists in the header, it will be updated. If there is a writing
-error, Fits will give a warning and return with a non-zero value, but will
-not stop. To stop as soon as an error occurs, run with
-@option{--quitonerror}.
+Put the current date and time in the header.
+If the @code{DATE} keyword already exists in the header, it will be updated.
+If there is a writing error, Fits will give a warning and return with a 
non-zero value, but will not stop.
+To stop as soon as an error occurs, run with @option{--quitonerror}.
 
 @item -p
 @itemx --printallkeys
-Print all the keywords in the specified FITS extension (HDU) on the
-command-line. If this option is called along with any of the other keyword
-editing commands, as described above, all other editing commands take
-precedence to this. Therefore, it will print the final keywords after all
-the editing has been done.
+Print all the keywords in the specified FITS extension (HDU) on the 
command-line.
+If this option is called along with any of the other keyword editing commands described above, all the other editing commands take precedence over it.
+Therefore, it will print the final keywords after all the editing has been 
done.
 
 @item -v
 @itemx --verify
-Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of
-the FITS standard. See the description under the @code{checksum} (under
-@option{--write}, above) for more on these keywords.
+Verify the @code{DATASUM} and @code{CHECKSUM} data integrity keywords of the 
FITS standard.
+See the description under the @code{checksum} (under @option{--write}, above) 
for more on these keywords.
 
-This option will print @code{Verified} for both keywords if they can be
-verified. Otherwise, if they don't exist in the given HDU/extension, it
-will print @code{NOT-PRESENT}, and if they cannot be verified it will print
-@code{INCORRECT}. In the latter case (when the keyword values exist but
-can't be verified), the Fits program will also return with a failure.
+This option will print @code{Verified} for both keywords if they can be 
verified.
+Otherwise, if they don't exist in the given HDU/extension, it will print 
@code{NOT-PRESENT}, and if they cannot be verified it will print 
@code{INCORRECT}.
+In the latter case (when the keyword values exist but can't be verified), the 
Fits program will also return with a failure.
 
-By default this function will also print a short description of the
-@code{DATASUM} AND @code{CHECKSUM} keywords. You can suppress this extra
-information with @code{--quiet} option.
+By default this function will also print a short description of the @code{DATASUM} and @code{CHECKSUM} keywords.
+You can suppress this extra information with the @option{--quiet} option.
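+
+For example, the sketch below verifies both keywords in the second HDU of a hypothetical @file{input.fits}, suppressing the extra description:
+
+@example
+$ astfits input.fits -h1 --verify --quiet
+@end example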
 
 @item --copykeys=INT:INT
 Copy the input's keyword records in the given range (inclusive) to the
 output HDU (specified with the @option{--output} and @option{--outhdu}
 options, for the filename and HDU/extension respectively).
 
-The given string to this option must be two integers separated by a colon
-(@key{:}). The first integer must be positive (counting of the keyword
-records starts from 1). The second integer may be negative (zero is not
-acceptable) or an integer larger than the first.
+The given string to this option must be two integers separated by a colon 
(@key{:}).
+The first integer must be positive (counting of the keyword records starts 
from 1).
+The second integer may be negative (zero is not acceptable) or an integer 
larger than the first.
 
-A negative second integer means counting from the end. So @code{-1} is the
-last copy-able keyword (not including the @code{END} keyword).
+A negative second integer means counting from the end.
+So @code{-1} is the last copy-able keyword (not including the @code{END} 
keyword).
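+
+For example, the sketch below copies keyword records 10 to 20 (inclusive) from the second HDU of @file{input.fits} into the second HDU of a hypothetical @file{out.fits}:
+
+@example
+# `out.fits' is a hypothetical name.
+$ astfits input.fits -h1 --copykeys=10:20 \
+          --output=out.fits --outhdu=1
+@end example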
 
-To see the header keywords of the input with a number before them, you can
-pipe the output of the FITS program (when it prints all the keywords in an
-extension) into the @command{cat} program like below:
+To see the header keywords of the input with a number before them, you can 
pipe the output of the FITS program (when it prints all the keywords in an 
extension) into the @command{cat} program like below:
 
 @example
 $ astfits input.fits -h1 | cat -n
@@ -10786,36 +7992,24 @@ The HDU/extension to write the output keywords of 
@option{--copykeys}.
 
 @item -Q
 @itemx --quitonerror
-Quit if any of the operations above are not successful. By default if
-an error occurs, Fits will warn the user of the faulty keyword and
-continue with the rest of actions.
+Quit if any of the operations above are not successful.
+By default, if an error occurs, Fits will warn the user of the faulty keyword and continue with the rest of the actions.
 
 @item -s STR
 @itemx --datetosec STR
 @cindex Unix epoch time
 @cindex Time, Unix epoch
 @cindex Epoch, Unix time
-Interpret the value of the given keyword in the FITS date format (most
-generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding
-Unix epoch time (number of seconds that have passed since 00:00:00
-Thursday, January 1st, 1970). The @code{Thh:mm:ss.ddd...} section
-(specifying the time of day), and also the @code{.ddd...} (specifying the
-fraction of a second) are optional. The value to this option must be the
-FITS keyword name that contains the requested date, for example
-@option{--datetosec=DATE-OBS}.
+Interpret the value of the given keyword in the FITS date format (most 
generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix 
epoch time (number of seconds that have passed since 00:00:00 Thursday, January 
1st, 1970).
+The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the 
@code{.ddd...} (specifying the fraction of a second) are optional.
+The value to this option must be the FITS keyword name that contains the 
requested date, for example @option{--datetosec=DATE-OBS}.
 
 @cindex GNU C Library
-This option can also interpret the older FITS date format
-(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to
-the year. In this case (following the GNU C Library), this option will make
-the following assumption: values 68 to 99 correspond to the years 1969 to
-1999, and values 0 to 68 as the years 2000 to 2068.
+This option can also interpret the older FITS date format 
(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the 
year.
+In this case (following the GNU C Library), this option will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 to the years 2000 to 2068.
 
-This is a very useful option for operations on the FITS date values, for
-example sorting FITS files by their dates, or finding the time difference
-between two FITS files. The advantage of working with the Unix epoch time
-is that you don't have to worry about calendar details (for example the
-number of days in different months, or leap years, etc).
+This is a very useful option for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
+The advantage of working with the Unix epoch time is that you don't have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
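+
+For example, the sketch below (assuming @option{--quiet} leaves only the requested value in the output) prints each FITS file's @code{DATE-OBS} value as Unix epoch time, sorted numerically:
+
+@example
+$ for f in *.fits; do
+    echo "$(astfits $f -h1 --datetosec=DATE-OBS --quiet) $f"
+  done | sort -n
+@end example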
 @end table
 
 
@@ -10841,63 +8035,38 @@ number of days in different months, or leap years, 
etc).
 @section Sort FITS files by night
 
 @cindex Calendar
-FITS images usually contain (several) keywords for preserving important
-dates. In particular, for lower-level data, this is usually the observation
-date and time (for example, stored in the @code{DATE-OBS} keyword
-value). When analyzing observed datasets, many calibration steps (like the
-dark, bias or flat-field), are commonly calculated on a per-observing-night
-basis.
-
-However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd})
-is based on the western (Gregorian) calendar. Dates that are stored in this
-format are complicated for automatic processing: a night starts in the
-final hours of one calendar day, and extends to the early hours of the next
-calendar day. As a result, to identify datasets from one night, we commonly
-need to search for two dates. However calendar peculiarities can make this
-identification very difficult. For example when an observation is done on
-the night separating two months (like the night starting on March 31st and
-going into April 1st), or two years (like the night starting on December
-31st 2018 and going into January 1st, 2019). To account for such
-situations, it is necessary to keep track of how many days are in a month,
-and leap years, etc.
+FITS images usually contain (several) keywords for preserving important dates.
+In particular, for lower-level data, this is usually the observation date and 
time (for example, stored in the @code{DATE-OBS} keyword value).
+When analyzing observed datasets, many calibration steps (like the dark, bias 
or flat-field), are commonly calculated on a per-observing-night basis.
+
+However, the FITS standard's date format (@code{YYYY-MM-DDThh:mm:ss.ddd}) is 
based on the western (Gregorian) calendar.
+Dates that are stored in this format are complicated for automatic processing: 
a night starts in the final hours of one calendar day, and extends to the early 
hours of the next calendar day.
+As a result, to identify datasets from one night, we commonly need to search 
for two dates.
+However, calendar peculiarities can make this identification very difficult.
+For example, when an observation is done on the night separating two months (like the night starting on March 31st and going into April 1st), or two years (like the night starting on December 31st 2018 and going into January 1st, 2019).
+To account for such situations, it is necessary to keep track of how many days 
are in a month, and leap years, etc.
 
 @cindex Unix epoch time
 @cindex Time, Unix epoch
 @cindex Epoch, Unix time
-Gnuastro's @file{astscript-sort-by-night} script is created to help in such
-important scenarios. It uses @ref{Fits} to convert the FITS date format
-into the Unix epoch time (number of seconds since 00:00:00 of January 1st,
-1970), using the @option{--datetosec} option. The Unix epoch time is a
-single number (integer, if not given in sub-second precision), enabling
-easy comparison and sorting of dates after January 1st, 1970.
+Gnuastro's @file{astscript-sort-by-night} script is created to help in such 
important scenarios.
+It uses @ref{Fits} to convert the FITS date format into the Unix epoch time 
(number of seconds since 00:00:00 of January 1st, 1970), using the 
@option{--datetosec} option.
+The Unix epoch time is a single number (integer, if not given in sub-second 
precision), enabling easy comparison and sorting of dates after January 1st, 
1970.
 
-You can use this script as a basis for making a much more highly customized
-sorting script. Here are some examples
+You can use this script as a basis for making a much more customized sorting script.
+Here are some examples:
 
 @itemize
 @item
-If you need to copy the files, but only need a single extension (not the
-whole file), you can add a step just before the making of the symbolic
-links, or copies, and change it to only copy a certain extension of the
-FITS file using the Fits program's @option{--copy} option, see @ref{HDU
-manipulation}.
+If you need to copy the files, but only need a single extension (not the whole file), you can add a step just before the making of the symbolic links, or copies, and change it to only copy a certain extension of the FITS file using the Fits program's @option{--copy} option; see @ref{HDU manipulation}.
 
 @item
-If you need to classify the files with finer detail (for example the
-purpose of the dataset), you can add a step just before the making of the
-symbolic links, or copies, to specify a file-name prefix based on other
-certain keyword values in the files. For example when the FITS files have a
-keyword to specify if the dataset is a science, bias, or flat-field
-image. You can read it and to add a @code{sci-}, @code{bias-}, or
-@code{flat-} to the created file (after the @option{--prefix})
-automatically.
-
-For example, let's assume the observing mode is stored in the hypothetical
-@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE},
-@code{SCIENCE-IMAGE} and @code{FLAT-EXP}. With the step below, you can
-generate a mode-prefix, and add it to the generated link/copy names (just
-correct the filename and extension of the first line to the script's
-variables):
+If you need to classify the files in finer detail (for example by the purpose of the dataset), you can add a step just before the making of the symbolic links, or copies, to specify a file-name prefix based on certain other keyword values in the files.
+For example, the FITS files may have a keyword that specifies if the dataset is a science, bias, or flat-field image.
+You can read it and automatically add a @code{sci-}, @code{bias-}, or @code{flat-} prefix to the created file names (after the @option{--prefix}).
+
+For example, let's assume the observing mode is stored in the hypothetical 
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE}, 
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
+With the step below, you can generate a mode-prefix, and add it to the generated link/copy names (just replace the file name and extension on the first line with the script's variables):
 
 @example
 modepref=$(astfits infile.fits -h1 \
@@ -10911,26 +8080,18 @@ modepref=$(astfits infile.fits -h1 \
 
 @cindex GNU AWK
 @cindex GNU Sed
-Here is a description of it. We first use @command{astfits} to print all
-the keywords in extension @code{1} of @file{infile.fits}. In the FITS
-standard, string values (that we are assuming here) are placed in single
-quotes (@key{'}) which are annoying in this context/use-case. Therefore, we
-pipe the output of @command{astfits} into @command{sed} to remove all such
-quotes (substituting them with a blank space). The result is then piped to
-AWK for giving us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK
-to only consider the line where the first column is @code{MODE}. There is
-an equal sign between the key name and value, so the value is the third
-column (@code{$3} in AWK). We thus use a simple @code{if-else} structure to
-look into this value and print our custom prefix based on it. The output of
-AWK is then stored in the @code{modepref} shell variable which you can add
-to the link/copy name.
-
-With the solution above, the increment of the file counter for each night
-will be independent of the mode. If you want the counter to be
-mode-dependent, you can add a different counter for each mode and use that
-counter instead of the generic counter for each night (based on the value
-of @code{modepref}). But we'll leave the implementation of this step to you
-as an exercise.
+Here is a description of it.
+We first use @command{astfits} to print all the keywords in extension @code{1} 
of @file{infile.fits}.
+In the FITS standard, string values (that we are assuming here) are placed in single quotes (@key{'}), which are annoying in this context/use-case.
+Therefore, we pipe the output of @command{astfits} into @command{sed} to 
remove all such quotes (substituting them with a blank space).
+The result is then piped to AWK to give us the final mode-prefix: with @code{$1=="MODE"}, we ask AWK to only consider the line where the first column is @code{MODE}.
+There is an equal sign between the key name and value, so the value is the 
third column (@code{$3} in AWK).
+We thus use a simple @code{if-else} structure to look into this value and 
print our custom prefix based on it.
+The output of AWK is then stored in the @code{modepref} shell variable which 
you can add to the link/copy name.
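+
+For example, the link-making step could then use it like the sketch below (the variable names here are hypothetical; use the script's own variables for the prefix, night number and counter):
+
+@example
+## Hypothetical variable names (adapt to the script's own).
+ln -s "$infile" "${prefix}${modepref}n${night}-${num}.fits"
+@end example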
+
+With the solution above, the increment of the file counter for each night will 
be independent of the mode.
+If you want the counter to be mode-dependent, you can add a different counter 
for each mode and use that counter instead of the generic counter for each 
night (based on the value of @code{modepref}).
+But we'll leave the implementation of this step to you as an exercise.
 
 @end itemize
 
@@ -10941,10 +8102,9 @@ as an exercise.
 @node Invoking astscript-sort-by-night,  , Sort FITS files by night, Sort FITS 
files by night
 @subsection Invoking astscript-sort-by-night
 
-This installed script will read a FITS date formatted value from the given
-keyword, and classify the input FITS files into individual nights. For more
-on installed scripts please see (see @ref{Installed scripts}). This script
-can be used with the following general template:
+This installed script will read a FITS date formatted value from the given 
keyword, and classify the input FITS files into individual nights.
+For more on installed scripts, see @ref{Installed scripts}.
+This script can be used with the following general template:
 
 @example
 $ astscript-sort-by-night [OPTION...] FITS-files
@@ -10961,25 +8121,15 @@ $ astscript-sort-by-night --key=DATE-OBS 
/path/to/data/*.fits
 $ astscript-sort-by-night --link --prefix=img- /path/to/data/*.fits
 @end example
 
-This script will look into a HDU/extension (@option{--hdu}) for a keyword
-(@option{--key}) in the given FITS files and interpret the value as a
-date. The inputs will be separated by "night"s (9:00a.m to next day's
-8:59:59a.m, spanning two calendar days, exact hour can be set with
-@option{--hour}).
+This script will look into an HDU/extension (@option{--hdu}) for a keyword (@option{--key}) in the given FITS files and interpret the value as a date.
+The inputs will be separated by ``night''s (9:00 a.m. to the next day's 8:59:59 a.m., spanning two calendar days; the exact hour can be set with @option{--hour}).
 
-The default output is a list of all the input files along with the
-following two columns: night number and file number in that night (sorted
-by time). With @option{--link} a symbolic link will be made (one for each
-input) that contains the night number, and number of file in that night
-(sorted by time), see the description of @option{--link} for more. When
-@option{--copy} is used instead of a link, a copy of the inputs will be
-made instead of symbolic link.
+The default output is a list of all the input files along with the following 
two columns: night number and file number in that night (sorted by time).
+With @option{--link}, a symbolic link will be made (one for each input) that contains the night number and the file number in that night (sorted by time); see the description of @option{--link} for more.
+When @option{--copy} is used instead, a copy of the inputs will be made rather than a symbolic link.
 
-Below you can see one example where all the @file{target-*.fits} files in
-the @file{data} directory should be separated by observing night according
-to the @code{DATE-OBS} keyword value in their second extension (number
-@code{1}, recall that HDU counting starts from 0). You can see the output
-after the @code{ls} command.
+Below you can see one example where all the @file{target-*.fits} files in the 
@file{data} directory should be separated by observing night according to the 
@code{DATE-OBS} keyword value in their second extension (number @code{1}, 
recall that HDU counting starts from 0).
+You can see the output after the @code{ls} command.
 
 @example
 $ astscript-sort-by-night -pimg- -h1 -kDATE-OBS data/target-*.fits
@@ -10987,21 +8137,16 @@ $ ls
 img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
 @end example
 
-The outputs can be placed in a different (already existing) directory by
-including that directory's name in the @option{--prefix} value, for example
-@option{--prefix=sorted/img-} will put them all under the @file{sorted}
-directory.
+The outputs can be placed in a different (already existing) directory by 
including that directory's name in the @option{--prefix} value, for example 
@option{--prefix=sorted/img-} will put them all under the @file{sorted} 
directory.
 
-This script can be configured like all Gnuastro's programs (through
-command-line options, see @ref{Common options}), with some minor
-differences that are described in @ref{Installed scripts}. The particular
-options to this script are listed below:
+This script can be configured like all Gnuastro's programs (through 
command-line options, see @ref{Common options}), with some minor differences 
that are described in @ref{Installed scripts}.
+The particular options to this script are listed below:
 
 @table @option
 @item -h STR
 @itemx --hdu=STR
-The HDU/extension to use in all the given FITS files. All of the given FITS
-files must have this extension.
+The HDU/extension to use in all the given FITS files.
+All of the given FITS files must have this extension.
 
 @item -k STR
 @itemx --key=STR
@@ -11009,37 +8154,31 @@ The keyword name that contains the FITS date format to 
classify/sort by.
 
 @item -H FLT
 @itemx --hour=FLT
-The hour that defines the next ``night''. By default, all times before
-9:00a.m are considered to belong to the previous calendar night. If a
-sub-hour value is necessary, it should be given in units of hours, for
-example @option{--hour=9.5} corresponds to 9:30a.m.
+The hour that defines the next ``night''.
+By default, all times before 9:00 a.m. are considered to belong to the previous calendar night.
+If a sub-hour value is necessary, it should be given in units of hours, for example @option{--hour=9.5} corresponds to 9:30 a.m.
 
 @cartouche
 @noindent
 @cindex Time zone
 @cindex UTC (Universal time coordinate)
 @cindex Universal time coordinate (UTC)
-@strong{Dealing with time zones:} the time that is recorded in
-@option{--key} may be in UTC (Universal Time Coordinate). However, the
-organization of the images taken during the night depends on the local
-time. It is possible to take this into account by setting the
-@option{--hour} option to the local time in UTC.
-
-For example, consider a set of images taken in Auckland (New Zealand,
-UTC+12) during different nights. If you want to classify these images by
-night, you have to know at which time (in UTC time) the Sun rises (or any
-other separator/definition of a different night). In this particular
-example, you can use @option{--hour=21}. Because in Auckland, a night
-finishes (roughly) at the local time of 9:00, which corresponds to 21:00
-UTC.
+@strong{Dealing with time zones:}
+The time that is recorded in @option{--key} may be in UTC (Coordinated Universal Time).
+However, the organization of the images taken during the night depends on the 
local time.
+It is possible to take this into account by setting the @option{--hour} option 
to the local time in UTC.
+
+For example, consider a set of images taken in Auckland (New Zealand, UTC+12) during different nights.
+If you want to classify these images by night, you have to know at which time (in UTC) the Sun rises (or any other separator/definition of a different night).
+In this particular example, you can use @option{--hour=21}, because in Auckland a night finishes (roughly) at the local time of 9:00, which corresponds to 21:00 UTC.
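+
+A minimal sketch of such a call (with hypothetical file names):
+
+@example
+## Separate Auckland (UTC+12) images by their local night.
+$ astscript-sort-by-night --key=DATE-OBS --hour=21 data/*.fits
+@end example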
 @end cartouche
 
 @item -l
 @itemx --link
-Create a symbolic link for each input FITS file. This option cannot be used
-with @option{--copy}. The link will have a standard name in the following
-format (variable parts are written in @code{CAPITAL} letters and described
-after it):
+Create a symbolic link for each input FITS file.
+This option cannot be used with @option{--copy}.
+The link will have a standard name in the following format (variable parts are 
written in @code{CAPITAL} letters and described after it):
 
 @example
 PnN-I.fits
@@ -11047,38 +8186,31 @@ PnN-I.fits
 
 @table @code
 @item P
-This is the value given to @option{--prefix}. By default, its value is
-@code{./} (to store the links in the directory this script was run in). See
-the description of @code{--prefix} for more.
+This is the value given to @option{--prefix}.
+By default, its value is @code{./} (to store the links in the directory this 
script was run in).
+See the description of @option{--prefix} for more.
 @item N
-This is the night-counter: starting from 1. @code{N} is just incremented by
-1 for the next night, no matter how many nights (without any dataset) there
-are between two subsequent observing nights (its just an identifier for
-each night which you can easily map to different calendar nights).
+This is the night-counter, starting from 1.
+@code{N} is just incremented by 1 for the next night, no matter how many nights (without any dataset) there are between two subsequent observing nights (it is just an identifier for each night which you can easily map to different calendar nights).
 @item I
 File counter in that night, sorted by time.
 @end table
 
 @item -c
 @itemx --copy
-Make a copy of each input FITS file with the standard naming convention
-described in @option{--link}. With this option, instead of making a link, a
-copy is made. This option cannot be used with @option{--link}.
+Make a copy of each input FITS file with the standard naming convention 
described in @option{--link}.
+With this option, instead of making a link, a copy is made.
+This option cannot be used with @option{--link}.
 
 @item -p STR
 @itemx --prefix=STR
-Prefix to append before the night-identifier of each newly created link or
-copy. This option is thus only relevant with the @option{--copy} or
-@option{--link} options. See the description of @option{--link} for how its
-used. For example, with @option{--prefix=img-}, all the created file names
-in the current directory will start with @code{img-}, making outputs like
-@file{img-n1-1.fits} or @file{img-n3-42.fits}.
-
-@option{--prefix} can also be used to store the links/copies in another
-directory relative to the directory this script is being run (it must
-already exist). For example @code{--prefix=/path/to/processing/img-} will
-put all the links/copies in the @file{/path/to/processing} directory, and
-the files (in that directory) will all start with @file{img-}.
+Prefix to prepend to the night-identifier of each newly created link or copy.
+This option is thus only relevant with the @option{--copy} or @option{--link} 
options.
+See the description of @option{--link} for how it is used.
+For example, with @option{--prefix=img-}, all the created file names in the 
current directory will start with @code{img-}, making outputs like 
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
+
+@option{--prefix} can also be used to store the links/copies in another directory relative to the directory this script is being run in (it must already exist).
+For example, @option{--prefix=/path/to/processing/img-} will put all the links/copies in the @file{/path/to/processing} directory, and the files (in that directory) will all start with @file{img-}.
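+
+For example, a sketch combining this with @option{--link} (hypothetical paths):
+
+@example
+## Links go under the (already existing) processing directory.
+$ astscript-sort-by-night --link --key=DATE-OBS \
+                          --prefix=/path/to/processing/img- data/*.fits
+@end example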
 @end table
 
 
@@ -11106,24 +8238,18 @@ the files (in that directory) will all start with 
@file{img-}.
 @cindex Image format conversion
 @cindex Converting image formats
 @pindex @r{ConvertType (}astconvertt@r{)}
-The FITS format used in astronomy was defined mainly for archiving,
-transmission, and processing. In other situations, the data might be useful
-in other formats. For example, when you are writing a paper or report, or
-if you are making slides for a talk, you can't use a FITS image. Other
-image formats should be used. In other cases you might want your pixel
-values in a table format as plain text for input to other programs that
-don't recognize FITS. ConvertType is created for such situations. The
-various types will increase with future updates and based on need.
-
-The conversion is not only one way (from FITS to other formats), but two
-ways (except the EPS and PDF formats@footnote{Because EPS and PDF are
-vector, not raster/pixelated formats}). So you can also convert a JPEG
-image or text file into a FITS image. Basically, other than EPS/PDF, you
-can use any of the recognized formats as different color channel inputs to
-get any of the recognized outputs. So before explaining the options and
-arguments (in @ref{Invoking astconvertt}), we'll start with a short
-description of the recognized files types in @ref{Recognized file formats},
-followed a short introduction to digital color in @ref{Color}.
+The FITS format used in astronomy was defined mainly for archiving, 
transmission, and processing.
+In other situations, the data might be useful in other formats.
+For example, when you are writing a paper or report, or if you are making 
slides for a talk, you can't use a FITS image.
+Other image formats should be used.
+In other cases you might want your pixel values in a table format as plain 
text for input to other programs that don't recognize FITS.
+ConvertType is created for such situations.
+The supported types will increase with future updates, based on need.
+
+The conversion is not only one way (from FITS to other formats), but two ways 
(except the EPS and PDF formats@footnote{Because EPS and PDF are vector, not 
raster/pixelated formats}).
+So you can also convert a JPEG image or text file into a FITS image.
+Basically, other than EPS/PDF, you can use any of the recognized formats as 
different color channel inputs to get any of the recognized outputs.
+So before explaining the options and arguments (in @ref{Invoking astconvertt}), we'll start with a short description of the recognized file types in @ref{Recognized file formats}, followed by a short introduction to digital color in @ref{Color}.
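+
+For example, both directions might look like the sketch below (hypothetical file names; giving only a suffix to @option{--output} triggers automatic output naming, see @ref{Automatic output}):
+
+@example
+$ astconvertt image.fits --output=jpg    ## FITS image to JPEG.
+$ astconvertt photo.jpg --output=fits    ## JPEG back to FITS.
+@end example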
 
 @menu
 * Recognized file formats::     Recognized file formats
@@ -11134,59 +8260,42 @@ followed a short introduction to digital color in 
@ref{Color}.
 @node Recognized file formats, Color, ConvertType, ConvertType
 @subsection Recognized file formats
 
-The various standards and the file name extensions recognized by
-ConvertType are listed below. Currently Gnuastro uses the file name's
-suffix to identify the format.
+The various standards and the file name extensions recognized by ConvertType 
are listed below.
+Currently Gnuastro uses the file name's suffix to identify the format.
 
 @table @asis
 @item FITS or IMH
 @cindex IRAF
 @cindex Astronomical data format
-Astronomical data are commonly stored in the FITS format (or the older data
-IRAF @file{.imh} format), a list of file name suffixes which indicate that
-the file is in this format is given in @ref{Arguments}.
+Astronomical data are commonly stored in the FITS format (or the older IRAF @file{.imh} format); a list of file name suffixes which indicate that the file is in this format is given in @ref{Arguments}.
 
-Each image extension of a FITS file only has one value per
-pixel/element. Therefore, when used as input, each input FITS image
-contributes as one color channel. If you want multiple extensions in one
-FITS file for different color channels, you have to repeat the file name
-multiple times and use the @option{--hdu}, @option{--hdu2}, @option{--hdu3}
-or @option{--hdu4} options to specify the different extensions.
+Each image extension of a FITS file only has one value per pixel/element.
+Therefore, when used as input, each input FITS image contributes as one color 
channel.
+If you want multiple extensions in one FITS file for different color channels, 
you have to repeat the file name multiple times and use the @option{--hdu}, 
@option{--hdu2}, @option{--hdu3} or @option{--hdu4} options to specify the 
different extensions.
 
 @item JPEG
 @cindex JPEG format
 @cindex Raster graphics
 @cindex Pixelated graphics
-The JPEG standard was created by the Joint photographic experts
-group. It is currently one of the most commonly used image
-formats. Its major advantage is the compression algorithm that is
-defined by the standard. Like the FITS standard, this is a raster
-graphics format, which means that it is pixelated.
-
-A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK)
-color channels. If you only want to convert one JPEG image into other
-formats, there is no problem, however, if you want to use it in
-combination with other input files, make sure that the final number of
-color channels does not exceed four. If it does, then ConvertType will
-abort and notify you.
+The JPEG standard was created by the Joint Photographic Experts Group.
+It is currently one of the most commonly used image formats.
+Its major advantage is the compression algorithm that is defined by the 
standard.
+Like the FITS standard, this is a raster graphics format, which means that it 
is pixelated.
+
+A JPEG file can have 1 (for gray-scale), 3 (for RGB) and 4 (for CMYK) color 
channels.
+If you only want to convert one JPEG image into other formats, there is no problem; however, if you want to use it in combination with other input files, make sure that the final number of color channels does not exceed four.
+If it does, then ConvertType will abort and notify you.
 
 @cindex Suffixes, JPEG images
-The file name endings that are recognized as a JPEG file for input
-are: @file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG},
-@file{.jpe}, @file{.jif}, @file{.jfif} and @file{.jfi}.
+The file name endings that are recognized as a JPEG file for input are:
+@file{.jpg}, @file{.JPG}, @file{.jpeg}, @file{.JPEG}, @file{.jpe}, 
@file{.jif}, @file{.jfif} and @file{.jfi}.
 
 @item TIFF
 @cindex TIFF format
-TIFF (or Tagged Image File Format) was originally designed as a common
-format for scanners in the early 90s and since then it has grown to become
-very general. In many aspects, the TIFF standard is similar to the FITS
-image standard: it can allow data of many types (see @ref{Numeric data
-types}), and also allows multiple images to be stored in a single file
-(each image in the file is called a `directory' in the TIFF
-standard). However, unlike FITS, it can only store images, it has no
-constructs for tables. Another (inconvenient) difference with the FITS
-standard is that keyword names are stored as numbers, not human-readable
-text.
+TIFF (or Tagged Image File Format) was originally designed as a common format 
for scanners in the early 90s and since then it has grown to become very 
general.
+In many aspects, the TIFF standard is similar to the FITS image standard: it allows data of many types (see @ref{Numeric data types}), and also allows multiple images to be stored in a single file (each image in the file is called a `directory' in the TIFF standard).
+However, unlike FITS, it can only store images; it has no constructs for tables.
+Another (inconvenient) difference with the FITS standard is that keyword names are stored as numbers, not human-readable text.
 
 However, outside of astronomy, because of its support of different numeric
 data types, many fields use TIFF images for accurate (for example 16-bit
@@ -11200,13 +8309,11 @@ writing TIFF images, please get in touch with us.
 @cindex PostScript
 @cindex Vector graphics
 @cindex Encapsulated PostScript
-The Encapsulated PostScript (EPS) format is essentially a one page
-PostScript file which has a specified size. PostScript also includes
-non-image data, for example lines and texts. It is a fully functional
-programming language to describe a document. Therefore in ConvertType,
-EPS is only an output format and cannot be used as input. Contrary to
-the FITS or JPEG formats, PostScript is not a raster format, but is
-categorized as vector graphics.
+The Encapsulated PostScript (EPS) format is essentially a one-page PostScript file which has a specified size.
+PostScript also includes non-image data, for example lines and text.
+It is a fully functional programming language to describe a document.
+Therefore in ConvertType, EPS is only an output format and cannot be used as 
input.
+Contrary to the FITS or JPEG formats, PostScript is not a raster format, but 
is categorized as vector graphics.
 
 @cindex PDF
 @cindex Adobe systems
@@ -11214,106 +8321,71 @@ categorized as vector graphics.
 @cindex Compiled PostScript
 @cindex Portable Document format
 @cindex Static document description format
-The Portable Document Format (PDF) is currently the most common format
-for documents. Some believe that PDF has replaced PostScript and that
-PostScript is now obsolete. This view is wrong, a PostScript file is
-an actual plain text file that can be edited like any program source
-with any text editor. To be able to display its programmed content or
-print, it needs to pass through a processor or compiler. A PDF file
-can be thought of as the processed output of the compiler on an input
-PostScript file. PostScript, EPS and PDF were created and are
-registered by Adobe Systems.
+The Portable Document Format (PDF) is currently the most common format for 
documents.
+Some believe that PDF has replaced PostScript and that PostScript is now 
obsolete.
+This view is wrong: a PostScript file is an actual plain text file that can be edited like any program source with any text editor.
+To display or print its programmed content, it has to pass through a processor or compiler.
+A PDF file can be thought of as the processed output of the compiler on an 
input PostScript file.
+PostScript, EPS and PDF were created and are registered by Adobe Systems.
 
 @cindex @TeX{}
 @cindex @LaTeX{}
-With these features in mind, you can see that when you are compiling a
-document with @TeX{} or @LaTeX{}, using an EPS file is much more low
-level than a JPEG and thus you have much greater control and therefore
-quality. Since it also includes vector graphic lines we also use such
-lines to make a thin border around the image to make its appearance in
-the document much better. No matter the resolution of the display or
-printer, these lines will always be clear and not pixelated. In the
-future, addition of text might be included (for example labels or
-object IDs) on the EPS output. However, this can be done better with
-tools within @TeX{} or @LaTeX{} such as
-PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
+With these features in mind, you can see that when you are compiling a document with @TeX{} or @LaTeX{}, using an EPS file is much lower level than a JPEG, and thus gives you much greater control and therefore quality.
+Since EPS also includes vector graphic lines, we use such lines to draw a thin border around the image, making its appearance in the document much better.
+No matter the resolution of the display or printer, these lines will always be 
clear and not pixelated.
+In the future, the addition of text (for example labels or object IDs) to the EPS output might be implemented.
+However, this can be done better with tools within @TeX{} or @LaTeX{} such as 
PGF/Tikz@footnote{@url{http://sourceforge.net/projects/pgf/}}.
 
 @cindex Binary image
 @cindex Saving binary image
 @cindex Black and white image
-If the final input image (possibly after all operations on the flux
-explained below) is a binary image or only has two colors of black and
-white (in segmentation maps for example), then PostScript has another
-great advantage compared to other formats. It allows for 1 bit pixels
-(pixels with a value of 0 or 1), this can decrease the output file
-size by 8 times. So if a gray-scale image is binary, ConvertType will
-exploit this property in the EPS and PDF (see below) outputs.
+If the final input image (possibly after all operations on the flux explained 
below) is a binary image or only has two colors of black and white (in 
segmentation maps for example), then PostScript has another great advantage 
compared to other formats.
+It allows for 1-bit pixels (pixels with a value of 0 or 1); this can decrease the output file size by a factor of 8.
+So if a gray-scale image is binary, ConvertType will exploit this property in 
the EPS and PDF (see below) outputs.
 
 @cindex Suffixes, EPS format
-The standard formats for an EPS file are @file{.eps}, @file{.EPS},
-@file{.epsf} and @file{.epsi}. The EPS outputs of ConvertType have the
-@file{.eps} suffix.
+The standard suffixes for an EPS file are @file{.eps}, @file{.EPS}, @file{.epsf} and @file{.epsi}.
+The EPS outputs of ConvertType have the @file{.eps} suffix.
 
 @item PDF
 @cindex Suffixes, PDF format
 @cindex GPL Ghostscript
-As explained above, a PDF document is a static document description
-format, viewing its result is therefore much faster and more efficient
-than PostScript. To create a PDF output, ConvertType will make a
-PostScript page description and convert that to PDF using GPL
-Ghostscript. The suffixes recognized for a PDF file are: @file{.pdf},
-@file{.PDF}. If GPL Ghostscript cannot be run on the PostScript file,
-it will remain and a warning will be printed.
+As explained above, a PDF document is a static document description format; viewing its result is therefore much faster and more efficient than PostScript.
+To create a PDF output, ConvertType will make a PostScript page description 
and convert that to PDF using GPL Ghostscript.
+The suffixes recognized for a PDF file are: @file{.pdf}, @file{.PDF}.
+If GPL Ghostscript cannot be run on the PostScript file, the PostScript file will be kept and a warning will be printed.
 
 @item @option{blank}
 @cindex @file{blank} color channel
-This is not actually a file type! But can be used to fill one color
-channel with a blank value. If this argument is given for any color
-channel, that channel will not be used in the output.
+This is not actually a file type, but it can be used to fill one color channel with a blank value.
+If this argument is given for any color channel, that channel will not be used 
in the output.
 
 @item Plain text
 @cindex Plain text
 @cindex Suffixes, plain text
-Plain text files have the advantage that they can be viewed with any text
-editor or on the command-line. Most programs also support input as plain
-text files. As input, each plain text file is considered to contain one
-color channel.
-
-In ConvertType, the recognized extensions for plain text files are
-@file{.txt} and @file{.dat}. As described in @ref{Invoking astconvertt}, if
-you just give these extensions, (and not a full filename) as output, then
-automatic output will be preformed to determine the final output name (see
-@ref{Automatic output}). Besides these, when the format of a file cannot be
-recognized from its name, ConvertType will fall back to plain text mode. So
-you can use any name (even without an extension) for a plain text input or
-output. Just note that when the suffix is not recognized, automatic output
-will not be preformed.
-
-The basic input/output on plain text images is very similar to how tables
-are read/written as described in @ref{Gnuastro text table format}. Simply
-put, the restrictions are very loose, and there is a convention to define a
-name, units, data type (see @ref{Numeric data types}), and comments for the
-data in a commented line. The only difference is that as a table, a text
-file can contain many datasets (columns), but as a 2D image, it can only
-contain one dataset. As a result, only one information comment line is
-necessary for a 2D image, and instead of the starting `@code{# Column N}'
-(@code{N} is the column number), the information line for a 2D image must
-start with `@code{# Image 1}'. When ConvertType is asked to output to plain
-text file, this information comment line is written before the image pixel
-values.
+Plain text files have the advantage that they can be viewed with any text 
editor or on the command-line.
+Most programs also support input as plain text files.
+As input, each plain text file is considered to contain one color channel.
 
-When converting an image to plain text, consider the fact that if the image
-is large, the number of columns in each line will become very large,
-possibly making it very hard to open in some text editors.
+In ConvertType, the recognized extensions for plain text files are @file{.txt} 
and @file{.dat}.
+As described in @ref{Invoking astconvertt}, if you just give these extensions (and not a full filename) as output, then automatic output will be performed to determine the final output name (see @ref{Automatic output}).
+Besides these, when the format of a file cannot be recognized from its name, 
ConvertType will fall back to plain text mode.
+So you can use any name (even without an extension) for a plain text input or 
output.
+Just note that when the suffix is not recognized, automatic output will not be performed.
+
+The basic input/output on plain text images is very similar to how tables are 
read/written as described in @ref{Gnuastro text table format}.
+Simply put, the restrictions are very loose, and there is a convention to 
define a name, units, data type (see @ref{Numeric data types}), and comments 
for the data in a commented line.
+The only difference is that as a table, a text file can contain many datasets 
(columns), but as a 2D image, it can only contain one dataset.
+As a result, only one information comment line is necessary for a 2D image, 
and instead of the starting `@code{# Column N}' (@code{N} is the column 
number), the information line for a 2D image must start with `@code{# Image 1}'.
+When ConvertType is asked to output to plain text file, this information 
comment line is written before the image pixel values.
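+
+For example, a tiny 2x3 image might be stored like the sketch below (the name, units and type here are hypothetical; see @ref{Gnuastro text table format} for the information-line syntax):
+
+@example
+# Image 1: MYIMAGE [counts,f32] A hypothetical example image.
+1.0   2.0   3.0
+4.0   5.0   6.0
+@end example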
+
+When converting an image to plain text, consider the fact that if the image is 
large, the number of columns in each line will become very large, possibly 
making it very hard to open in some text editors.
 
 @item Standard output (command-line)
-This is very similar to the plain text output, but instead of creating a
-file to keep the printed values, they are printed on the command line. This
-can be very useful when you want to redirect the results directly to
-another program in one command with no intermediate file. The only
-difference is that only the pixel values are printed (with no information
-comment line). To print to the standard output, set the output name to
-`@file{stdout}'.
+This is very similar to the plain text output, but instead of creating a file 
to keep the printed values, they are printed on the command line.
+This can be very useful when you want to redirect the results directly to 
another program in one command with no intermediate file.
+The only difference is that only the pixel values are printed (with no 
information comment line).
+To print to the standard output, set the output name to `@file{stdout}'.
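+
+For example, a minimal sketch (assuming a single-channel @file{image.fits}) that pipes the pixel values into AWK to sum them:
+
+@example
+## Sum all pixel values with no intermediate file.
+$ astconvertt image.fits -h1 --output=stdout \
+      | awk '{for(i=1;i<=NF;++i) s+=$i} END{print s}'
+@end example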
 
 @end table
 
@@ -11327,101 +8399,69 @@ comment line). To print to the standard output, set 
the output name to
 @cindex Pixels
 @cindex Colorspace
 @cindex Primary colors
-Color is defined by mixing various measurements/filters. In digital
-monitors or common digital cameras, colors are displayed/stored by mixing
-the three basic colors of red, green and blue (RGB) with various
-proportions. When printing on paper, standard printers use the cyan,
-magenta, yellow and key (CMYK, key=black) color space. In other words, for
-each displayed/printed pixel of a color image, the dataset/image has three
-or four values.
+Color is defined by mixing various measurements/filters.
+In digital monitors or common digital cameras, colors are displayed/stored by 
mixing the three basic colors of red, green and blue (RGB) with various 
proportions.
+When printing on paper, standard printers use the cyan, magenta, yellow and 
key (CMYK, key=black) color space.
+In other words, for each displayed/printed pixel of a color image, the 
dataset/image has three or four values.
 
 @cindex Color channel
 @cindex Channel, color
-To store/show the three values for each pixel, cameras and monitors
-allocate a certain fraction of each pixel's area to red, green and blue
-filters. These three filters are thus built into the hardware at the pixel
-level. However, because measurement accuracy is very important in
-scientific instruments, and we want to do measurements (take images) with
-various/custom filters (without having to order a new expensive detector!),
-scientific detectors use the full area of the pixel to store one value for
-it in a single/mono channel dataset. To make measurements in different
-filters, we just place a filter in the light path before the
-detector. Therefore, the FITS format that is used to store astronomical
-datasets is inherently a mono-channel format (see @ref{Recognized file
-formats} or @ref{Fits}).
-
-When a subject has been imaged in multiple filters, you can feed each
-different filter into the red, green and blue channels and obtain a colored
-visualization. In ConvertType, you can do this by giving each separate
-single-channel dataset (for example in the FITS image format) as an
-argument (in the proper order), then asking for the output in a format that
-supports multi-channel datasets (for example JPEG or PDF, see the examples
-in @ref{Invoking astconvertt}).
+To store/show the three values for each pixel, cameras and monitors allocate a 
certain fraction of each pixel's area to red, green and blue filters.
+These three filters are thus built into the hardware at the pixel level.
+However, because measurement accuracy is very important in scientific 
instruments, and we want to do measurements (take images) with various/custom 
filters (without having to order a new expensive detector!), scientific 
detectors use the full area of the pixel to store one value for it in a 
single/mono channel dataset.
+To make measurements in different filters, we just place a filter in the light 
path before the detector.
+Therefore, the FITS format that is used to store astronomical datasets is 
inherently a mono-channel format (see @ref{Recognized file formats} or 
@ref{Fits}).
+
+When a subject has been imaged in multiple filters, you can feed each 
different filter into the red, green and blue channels and obtain a colored 
visualization.
+In ConvertType, you can do this by giving each separate single-channel dataset 
(for example in the FITS image format) as an argument (in the proper order), 
then asking for the output in a format that supports multi-channel datasets 
(for example JPEG or PDF, see the examples in @ref{Invoking astconvertt}).
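+
+For example, a sketch with three hypothetical single-channel FITS images (one per filter; each input needs its own HDU call, see @ref{Invoking astconvertt}):
+
+@example
+## Hypothetical filter images as red, green and blue channels.
+$ astconvertt f160w.fits f125w.fits f814w.fits -h1 -h1 -h1 \
+              --output=color.jpg
+@end example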
 
 @cindex Grayscale
 @cindex Visualization
 @cindex Colorspace, HSV
 @cindex Colorspace, gray-scale
 @cindex HSV: Hue Saturation Value
-As discussed above, color is not defined when a dataset/image contains a
-single value for each pixel. However, we interact with scientific datasets
-through monitors or printers (which allow multiple values per pixel and
-produce color with them). As a result, there is a lot of freedom in
-visualizing a single-channel dataset. The most basic is to use shades of
-black (because of its strong contrast with white). This scheme is called
-grayscale. To help in visualization, more complex mappings can be
-defined. For example, the values can be scaled to a range of 0 to 360 and
-used as the ``Hue'' term of the
-@url{https://en.wikipedia.org/wiki/HSL_and_HSV, Hue-Saturation-Value} (HSV)
-color space (while fixing the ``Saturation'' and ``Value'' terms). In
-ConvertType, you can use the @option{--colormap} option to choose between
-different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
-
-Since grayscale is a commonly used mapping of single-valued datasets, we'll
-continue with a closer look at how it is stored. One way to represent a
-gray-scale image in different color spaces is to use the same proportions
-of the primary colors in each pixel. This is the common way most FITS image
-viewers work: for each pixel, they fill all the channels with the single
-value. While this is necessary for displaying a dataset, there are
-downsides when storing/saving this type of grayscale visualization (for
-example in a paper).
+As discussed above, color is not defined when a dataset/image contains a 
single value for each pixel.
+However, we interact with scientific datasets through monitors or printers 
(which allow multiple values per pixel and produce color with them).
+As a result, there is a lot of freedom in visualizing a single-channel dataset.
+The most basic is to use shades of black (because of its strong contrast with 
white).
+This scheme is called grayscale.
+To help in visualization, more complex mappings can be defined.
+For example, the values can be scaled to a range of 0 to 360 and used as the 
``Hue'' term of the @url{https://en.wikipedia.org/wiki/HSL_and_HSV, 
Hue-Saturation-Value} (HSV) color space (while fixing the ``Saturation'' and 
``Value'' terms).
+In ConvertType, you can use the @option{--colormap} option to choose between 
different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
+
+Since grayscale is a commonly used mapping of single-valued datasets, we'll 
continue with a closer look at how it is stored.
+One way to represent a gray-scale image in different color spaces is to use 
the same proportions of the primary colors in each pixel.
+This is the common way most FITS image viewers work: for each pixel, they fill 
all the channels with the single value.
+While this is necessary for displaying a dataset, there are downsides when 
storing/saving this type of grayscale visualization (for example in a paper).
 
 @itemize
 
 @item
-Three (for RGB) or four (for CMYK) values have to be stored for every
-pixel, this makes the output file very heavy (in terms of bytes).
+Three (for RGB) or four (for CMYK) values have to be stored for every pixel; this makes the output file very heavy (in terms of bytes).
 
 @item
-If printing, the printing errors of each color channel can make the
-printed image slightly more blurred than it actually is.
+When printing, the printing errors of each color channel can make the printed image slightly more blurred than it actually is.
 
 @end itemize
 
 @cindex PNG standard
 @cindex Single channel CMYK
-To solve both these problems when storing grayscale visualization, the best
-way is to save a single-channel dataset into the black channel of the CMYK
-color space. The JPEG standard is the only common standard that accepts
-CMYK color space.
-
-The JPEG and EPS standards set two sizes for the number of bits in each
-channel: 8-bit and 12-bit. The former is by far the most common and is what
-is used in ConvertType. Therefore, each channel should have values between
-0 to @math{2^8-1=255}. From this we see how each pixel in a gray-scale
-image is one byte (8 bits) long, in an RGB image, it is 3 bytes long and in
-CMYK it is 4 bytes long. But thanks to the JPEG compression algorithms,
-when all the pixels of one channel have the same value, that channel is
-compressed to one pixel. Therefore a Grayscale image and a CMYK image that
-has only the K-channel filled are approximately the same file size.
+To solve both these problems when storing grayscale visualization, the best 
way is to save a single-channel dataset into the black channel of the CMYK 
color space.
+The JPEG standard is the only common standard that accepts CMYK color space.
+
+The JPEG and EPS standards set two sizes for the number of bits in each 
channel: 8-bit and 12-bit.
+The former is by far the most common and is what is used in ConvertType.
+Therefore, each channel should have values from 0 to @mymath{2^8-1=255}.
+From this we see how each pixel in a gray-scale image is one byte (8 bits) long; in an RGB image, it is 3 bytes long, and in CMYK it is 4 bytes long.
+But thanks to the JPEG compression algorithms, when all the pixels of one 
channel have the same value, that channel is compressed to one pixel.
+Therefore a grayscale image and a CMYK image that has only the K-channel filled are approximately the same file size.
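+
+For example, a sketch of one way to request this in ConvertType (using the @option{blank} channels of @ref{Recognized file formats} so that only the fourth/black channel is filled):
+
+@example
+## 'blank' fills cyan, magenta and yellow; image goes to black.
+$ astconvertt blank blank blank image.fits -h1 --output=gray.jpg
+@end example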
 
 
 @node Invoking astconvertt,  , Color, ConvertType
 @subsection Invoking ConvertType
 
-ConvertType will convert any recognized input file type to any specified
-output type. The executable name is @file{astconvertt} with the following
-general template
+ConvertType will convert any recognized input file type to any specified 
output type.
+The executable name is @file{astconvertt}, with the following general template:
 
 @example
 $ astconvertt [OPTION...] InputFile [InputFile2] ... [InputFile4]
@@ -11453,65 +8493,43 @@ $ cat 2darray.txt | astconvertt -oimg.fits
 @end example
 
 @noindent
-The output's file format will be interpreted from the value given to the
-@option{--output} option. It can either be given on the command-line or in
-any of the configuration files (see @ref{Configuration files}). Note that
-if the output suffix is not recognized, it will default to plain text
-format, see @ref{Recognized file formats}.
+The output's file format will be interpreted from the value given to the 
@option{--output} option.
+It can either be given on the command-line or in any of the configuration 
files (see @ref{Configuration files}).
+Note that if the output suffix is not recognized, it will default to plain 
text format, see @ref{Recognized file formats}.
 
 @cindex Standard input
-At most four input files (one for each color channel for formats that allow
-it) are allowed in ConvertType. The first input dataset can either be a
-file or come from Standard input (see @ref{Standard input}). The order of
-multiple input files is important. After reading the input file(s) the
-number of color channels in all the inputs will be used to define which
-color space to use for the outputs and how each color channel is
-interpreted.
-
-Some formats can allow more than one color channel (for example in the JPEG
-format, see @ref{Recognized file formats}). If there is one input dataset
-(color channel) the output will be gray-scale, if three input datasets
-(color channels) are given, they are respectively considered to be the red,
-green and blue color channels. Finally, if there are four color channels
-they will be be cyan, magenta, yellow and black (CMYK colors).
-
-The value to @option{--output} (or @option{-o}) can be either a full file
-name or just the suffix of the desired output format. In the former case,
-it will used for the output. In the latter case, the name of the output
-file will be set based on the automatic output guidelines, see
-@ref{Automatic output}. Note that the suffix name can optionally start a
-@file{.} (dot), so for example @option{--output=.jpg} and
-@option{--output=jpg} are equivalent. See @ref{Recognized file formats}
-
-Besides the common set of options explained in @ref{Common options},
-the options to ConvertType can be classified into input, output and
-flux related options. The majority of the options are to do with the
-flux range. Astronomical data usually have a very large dynamic range
-(difference between maximum and minimum value) and different subjects
-might be better demonstrated with a limited flux range.
+At most four input files (one for each color channel for formats that allow 
it) are allowed in ConvertType.
+The first input dataset can either be a file or come from Standard input (see 
@ref{Standard input}).
+The order of multiple input files is important.
+After reading the input file(s) the number of color channels in all the inputs 
will be used to define which color space to use for the outputs and how each 
color channel is interpreted.
+
+Some formats can allow more than one color channel (for example in the JPEG 
format, see @ref{Recognized file formats}).
+If there is one input dataset (color channel) the output will be gray-scale; if three input datasets (color channels) are given, they are respectively considered to be the red, green and blue color channels.
+Finally, if there are four color channels they will be cyan, magenta, yellow and black (CMYK colors).
+
+The value to @option{--output} (or @option{-o}) can be either a full file name 
or just the suffix of the desired output format.
+In the former case, it will be used for the output.
+In the latter case, the name of the output file will be set based on the 
automatic output guidelines, see @ref{Automatic output}.
+Note that the suffix name can optionally start with a @file{.} (dot), so for example @option{--output=.jpg} and @option{--output=jpg} are equivalent.
+See @ref{Recognized file formats}.
+
+Besides the common set of options explained in @ref{Common options}, the 
options to ConvertType can be classified into input, output and flux related 
options.
+The majority of the options deal with the flux range.
+Astronomical data usually have a very large dynamic range (difference between 
maximum and minimum value) and different subjects might be better demonstrated 
with a limited flux range.
 
 @noindent
 Input:
 @table @option
 @item -h STR/INT
 @itemx --hdu=STR/INT
-In ConvertType, it is possible to call the HDU option multiple times for
-the different input FITS or TIFF files in the same order that they are
-called on the command-line. Note that in the TIFF standard, one `directory'
-(similar to a FITS HDU) may contain multiple color channels (for example
-when the image is in RGB).
-
-Except for the fact that multiple calls are possible, this option is
-identical to the common @option{--hdu} in @ref{Input output options}. The
-number of calls to this option cannot be less than the number of input FITS
-or TIFF files, but if there are more, the extra HDUs will be ignored, note
-that they will be read in the order described in @ref{Configuration file
-precedence}.
-
-Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes
-numbers (counting from zero, similar to CFITSIO) for `directory'
-identification. Hence the concept of names is not defined for the
-directories and the values to this option for TIFF files must be numbers.
+In ConvertType, it is possible to call the HDU option multiple times for the 
different input FITS or TIFF files in the same order that they are called on 
the command-line.
+Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may 
contain multiple color channels (for example when the image is in RGB).
+
+Except for the fact that multiple calls are possible, this option is identical 
to the common @option{--hdu} in @ref{Input output options}.
+The number of calls to this option cannot be less than the number of input FITS or TIFF files; if there are more, the extra HDUs will be ignored.
+Note that they will be read in the order described in @ref{Configuration file precedence}.
+
+Unlike CFITSIO, libtiff (which is used to read TIFF files) only recognizes 
numbers (counting from zero, similar to CFITSIO) for `directory' identification.
+Hence the concept of names is not defined for the directories and the values 
to this option for TIFF files must be numbers.
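+
+For example, a sketch using three extensions of a single (hypothetical) FITS file as the three color channels:
+
+@example
+## Repeat the file name; one HDU option per input.
+$ astconvertt img.fits img.fits img.fits -h1 -h2 -h3 -ocolor.jpg
+@end example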
 @end table
 
 @noindent
@@ -11520,117 +8538,88 @@ Output:
 
 @item -w FLT
 @itemx --widthincm=FLT
-The width of the output in centimeters. This is only relevant for those
-formats that accept such a width (not plain text for example). For most
-digital purposes, the number of pixels is far more important than the value
-to this parameter because you can adjust the absolute width (in inches or
-centimeters) in your document preparation program.
+The width of the output in centimeters.
+This is only relevant for those formats that accept such a width (not plain 
text for example).
+For most digital purposes, the number of pixels is far more important than the 
value to this parameter because you can adjust the absolute width (in inches or 
centimeters) in your document preparation program.
 
 @item -b INT
 @itemx --borderwidth=INT
 @cindex Border on an image
-The width of the border to be put around the EPS and PDF outputs in units
-of PostScript points. There are 72 or 28.35 PostScript points in an inch or
-centimeter respectively. In other words, there are roughly 3 PostScript
-points in every millimeter. If you are planning on adding a border, its
-significance is highly correlated with the value you give to the
-@option{--widthincm} parameter.
-
-Unfortunately in the document structuring convention of the PostScript
-language, the ``bounding box'' has to be in units of PostScript points
-with no fractions allowed. So the border values only have to be
-specified in integers. To have a final border that is thinner than one
-PostScript point in your document, you can ask for a larger width in
-ConvertType and then scale down the output EPS or PDF file in your
-document preparation program. For example by setting @command{width}
-in your @command{includegraphics} command in @TeX{} or @LaTeX{}. Since
-it is vector graphics, the changes of size have no effect on the
-quality of your output quality (pixels don't get different values).
+The width of the border to be put around the EPS and PDF outputs in units of 
PostScript points.
+There are 72 or 28.35 PostScript points in an inch or centimeter respectively.
+In other words, there are roughly 3 PostScript points in every millimeter.
+If you are planning on adding a border, its significance is highly correlated 
with the value you give to the @option{--widthincm} parameter.
+
+Unfortunately in the document structuring convention of the PostScript 
language, the ``bounding box'' has to be in units of PostScript points with no 
fractions allowed.
+So the border value can only be specified as an integer.
+To have a final border that is thinner than one PostScript point in your document, you can ask for a larger width in ConvertType and then scale down the output EPS or PDF file in your document preparation program, for example by setting @command{width} in your @command{includegraphics} command in @TeX{} or @LaTeX{}.
+Since it is vector graphics, the changes of size have no effect on the quality of your output (pixels don't get different values).
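+
+For example, in @LaTeX{} (standard @command{graphicx} usage, not specific to Gnuastro):
+
+@example
+% Scale the EPS down; the vector border stays sharp.
+\includegraphics[width=0.7\linewidth]{image.eps}
+@end example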
 
 @item -x
 @itemx --hex
 @cindex ASCII85 encoding
 @cindex Hexadecimal encoding
-Use Hexadecimal encoding in creating EPS output. By default the ASCII85
-encoding is used which provides a much better compression ratio. When
-converted to PDF (or included in @TeX{} or @LaTeX{} which is finally saved
-as a PDF file), an efficient binary encoding is used which is far more
-efficient than both of them. The choice of EPS encoding will thus have no
-effect on the final PDF.
-
-So if you want to transfer your EPS files (for example if you want to
-submit your paper to arXiv or journals in PostScript), their storage
-might become important if you have large images or lots of small
-ones. By default ASCII85 encoding is used which offers a much better
-compression ratio (nearly 40 percent) compared to Hexadecimal encoding.
+Use Hexadecimal encoding in creating EPS output.
+By default the ASCII85 encoding is used, which provides a much better compression ratio.
+When converted to PDF (or included in @TeX{} or @LaTeX{} which is finally saved as a PDF file), a binary encoding is used which is far more efficient than both of them.
+The choice of EPS encoding will thus have no effect on the final PDF.
+
+So if you want to transfer your EPS files (for example if you want to submit your paper to arXiv or journals in PostScript), their storage might become important if you have large images or lots of small ones.
+By default ASCII85 encoding is used, which offers a much better compression ratio (nearly 40 percent) compared to Hexadecimal encoding.
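+For example, if you do need Hexadecimal encoding for some reason (the file name here is hypothetical):
+
+@example
+$ astconvertt image.fits --hex --output=image.eps
+@end example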
 
 @item -u INT
 @itemx --quality=INT
 @cindex JPEG compression quality
 @cindex Compression quality in JPEG
 @cindex Quality of compression in JPEG
-The quality (compression) of the output JPEG file with values from 0 to 100
-(inclusive). For other formats the value to this option is ignored. Note
-that only in gray-scale (when one input color channel is given) will this
-actually be the exact quality (each pixel will correspond to one input
-value). If it is in color mode, some degradation will occur. While the JPEG
-standard does support loss-less graphics, it is not commonly supported.
+The quality (compression) of the output JPEG file with values from 0 to 100 (inclusive).
+For other formats the value to this option is ignored.
+Note that only in gray-scale (when one input color channel is given) will this actually be the exact quality (each pixel will correspond to one input value).
+If it is in color mode, some degradation will occur.
+While the JPEG standard does support lossless graphics, this feature is not commonly implemented.
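+For example, the hypothetical command below writes a JPEG with a lower quality (and thus a smaller file size):
+
+@example
+$ astconvertt image.fits --quality=70 --output=image.jpg
+@end example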
 
 @item --colormap=STR[,FLT,...]
-The color map to visualize a single channel. The first value given to this
-option is the name of the color map, which is shown below. Some color maps
-can be configured. In this case, the configuration parameters are
-optionally given as numbers following the name of the color map for example
-see @option{hsv}. The table below contains the usable names of the color
-maps that are currently supported:
+The color map to visualize a single channel.
+The first value given to this option is the name of the color map, which is shown below.
+Some color maps can be configured.
+In this case, the configuration parameters are optionally given as numbers following the name of the color map; for example, see @option{hsv}.
+The table below contains the usable names of the color maps that are currently supported:
 
 @table @option
 @item gray
 @itemx grey
 @cindex Colorspace, gray-scale
-Grayscale color map. This color map doesn't have any parameters. The full
-dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in
-the requested format.
+Grayscale color map.
+This color map doesn't have any parameters.
+The full dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored in the requested format.
 
 @item hsv
 @cindex Colorspace, HSV
 @cindex Hue, saturation, value
 @cindex HSV: Hue Saturation Value
-Hue, Saturation,
-Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color
-map. If no values are given after the name (@option{--colormap=hsv}), the
-dataset will be scaled to 0 and 360 for hue covering the full spectrum of
-colors. However, you can limit the range of hue (to show only a special
-color range) by explicitly requesting them after the name (for example
-@option{--colormap=hsv,20,240}).
-
-The mapping of a single-channel dataset to HSV is done through the Hue and
-Value elements: Lower dataset elements have lower ``value'' @emph{and}
-lower ``hue''. This creates darker colors for fainter parts, while also
-respecting the range of colors.
+Hue, Saturation, Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
+If no values are given after the name (@option{--colormap=hsv}), the dataset will be scaled to 0 and 360 for hue, covering the full spectrum of colors.
+However, you can limit the range of hue (to show only a specific range of colors) by explicitly requesting it after the name (for example @option{--colormap=hsv,20,240}).
+
+The mapping of a single-channel dataset to HSV is done through the Hue and Value elements: lower dataset elements have lower ``value'' @emph{and} lower ``hue''.
+This creates darker colors for fainter parts, while also respecting the range of colors.
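+For example, the hypothetical commands below use the full hue range and the limited one discussed above, respectively:
+
+@example
+$ astconvertt image.fits --colormap=hsv --output=hsv.jpg
+$ astconvertt image.fits --colormap=hsv,20,240 --output=limited.jpg
+@end example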
 
 @item sls
 @cindex DS9
 @cindex SAO DS9
 @cindex SLS Color
 @cindex Colorspace: SLS
-The SLS color range, taken from the commonly used
-@url{http://ds9.si.edu,SAO DS9}. The advantage of this color range is that
-it ranges from black to dark blue, and finishes with red and white. So
-unlike the HSV color range, it includes black and white and brighter colors
-(like yellow, red and white) show the larger values.
+The SLS color range, taken from the commonly used @url{http://ds9.si.edu,SAO DS9}.
+The advantage of this color range is that it ranges from black to dark blue, and finishes with red and white.
+So unlike the HSV color range, it includes black and white, and brighter colors (like yellow, red and white) show the larger values.
 @end table
 
 @item --rgbtohsv
-When there are three input channels and the output is in the FITS format,
-interpret the three input channels as red, green and blue channels (RGB)
-and convert them to the hue, saturation, value (HSV) color space.
+When there are three input channels and the output is in the FITS format, interpret the three input channels as red, green and blue channels (RGB) and convert them to the hue, saturation, value (HSV) color space.
 
-The currently supported output formats of ConvertType don't have native
-support for HSV. Therefore this option is only supported when the output is
-in FITS format and each of the hue, saturation and value arrays can be
-saved as one FITS extension in the output for further analysis (for example
-to select a certain color).
+The currently supported output formats of ConvertType don't have native support for HSV.
+Therefore this option is only supported when the output is in FITS format and each of the hue, saturation and value arrays can be saved as one FITS extension in the output for further analysis (for example to select a certain color).
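+For example, with three hypothetical single-channel inputs:
+
+@example
+## R, G and B channels respectively (hypothetical files).
+$ astconvertt r.fits g.fits b.fits --rgbtohsv --output=hsv.fits
+@end example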
 
 @end table
 
@@ -11642,22 +8631,15 @@ Flux range:
 @item -c STR
 @itemx --change=STR
 @cindex Change converted pixel values
-(@option{=STR}) Change pixel values with the following format
-@option{"from1:to1, from2:to2,..."}. This option is very useful in
-displaying labeled pixels (not actual data images which have noise)
-like segmentation maps. In labeled images, usually a group of pixels
-have a fixed integer value. With this option, you can manipulate the
-labels before the image is displayed to get a better output for print
-or to emphasize on a particular set of labels and ignore the rest. The
-labels in the images will be changed in the same order given. By
-default first the pixel values will be converted then the pixel values
-will be truncated (see @option{--fluxlow} and @option{--fluxhigh}).
-
-You can use any number for the values irrespective of your final
-output, your given values are stored and used in the double precision
-floating point format. So for example if your input image has labels
-from 1 to 20000 and you only want to display those with labels 957 and
-11342 then you can run ConvertType with these options:
+(@option{=STR}) Change pixel values with the following format @option{"from1:to1, from2:to2,..."}.
+This option is very useful in displaying labeled pixels (not actual data images which have noise) like segmentation maps.
+In labeled images, usually a group of pixels have a fixed integer value.
+With this option, you can manipulate the labels before the image is displayed to get a better output for print, or to emphasize a particular set of labels and ignore the rest.
+The labels in the images will be changed in the same order given.
+By default, first the pixel values will be converted and then they will be truncated (see @option{--fluxlow} and @option{--fluxhigh}).
+
+You can use any number for the values irrespective of your final output; your given values are stored and used in the double precision floating point format.
+So for example if your input image has labels from 1 to 20000 and you only want to display those with labels 957 and 11342, then you can run ConvertType with these options:
 
 @example
 $ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
@@ -11665,24 +8647,19 @@ $ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
 @end example
 
 @noindent
-While the output JPEG format is only 8 bit, this operation is done in
-an intermediate step which is stored in double precision floating
-point. The pixel values are converted to 8-bit after all operations on
-the input fluxes have been complete. By placing the value in double
-quotes you can use as many spaces as you like for better readability.
+While the output JPEG format is only 8 bit, this operation is done in an intermediate step which is stored in double precision floating point.
+The pixel values are converted to 8-bit after all operations on the input fluxes have been completed.
+By placing the value in double quotes you can use as many spaces as you like for better readability.
 
 @item -C
 @itemx --changeaftertrunc
-Change pixel values (with @option{--change}) after truncation of the
-flux values, by default it is the opposite.
+Change pixel values (with @option{--change}) after truncation of the flux values; by default it is the opposite.
 
 @item -L FLT
 @itemx --fluxlow=FLT
-The minimum flux (pixel value) to display in the output image, any pixel
-value below this value will be set to this value in the output. If the
-value to this option is the same as @option{--fluxhigh}, then no flux
-truncation will be applied. Note that when multiple channels are given,
-this value is used for all the color channels.
+The minimum flux (pixel value) to display in the output image; any pixel value below this value will be set to this value in the output.
+If the value to this option is the same as @option{--fluxhigh}, then no flux truncation will be applied.
+Note that when multiple channels are given, this value is used for all the color channels.
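+For example, the hypothetical command below truncates all pixel values below 0 and above 100 before conversion:
+
+@example
+$ astconvertt image.fits --fluxlow=0 --fluxhigh=100 \
+              --output=image.jpg
+@end example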
 
 @item -H FLT
 @itemx --fluxhigh=FLT
@@ -11691,33 +8668,24 @@ The maximum flux (pixel value) to display in the output image, see
 
 @item -m INT
 @itemx --maxbyte=INT
-This is only used for the JPEG and EPS output formats which have an 8-bit
-space for each channel of each pixel. The maximum value in each pixel can
-therefore be @mymath{2^8-1=255}. With this option you can change (decrease)
-the maximum value. By doing so you will decrease the dynamic range. It can
-be useful if you plan to use those values for other purposes.
+This is only used for the JPEG and EPS output formats which have an 8-bit space for each channel of each pixel.
+The maximum value in each pixel can therefore be @mymath{2^8-1=255}.
+With this option you can change (decrease) the maximum value.
+By doing so you will decrease the dynamic range.
+It can be useful if you plan to use those values for other purposes.
 
 @item -A INT
 @itemx --forcemin=INT
-Enforce the value of @option{--fluxlow} (when its given), even if its
-smaller than the minimum of the dataset and the output is format supporting
-color. This is particularly useful when you are converting a number of
-images to a common image format like JPEG or PDF with a single command and
-want them all to have the same range of colors, independent of the contents
-of the dataset. Note that if the minimum value is smaller than
-@option{--fluxlow}, then this option is redundant.
+Enforce the value of @option{--fluxlow} (when it is given), even if it is smaller than the minimum of the dataset and the output is a format supporting color.
+This is particularly useful when you are converting a number of images to a common image format like JPEG or PDF with a single command and want them all to have the same range of colors, independent of the contents of the dataset.
+Note that if the minimum value is smaller than @option{--fluxlow}, then this option is redundant.
 
 @cindex PDF
 @cindex EPS
 @cindex PostScript
-By default, when the dataset only has two values, @emph{and} the output
-format is PDF or EPS, ConvertType will use the PostScript optimization that
-allows setting the pixel values per bit, not byte (@ref{Recognized file
-formats}). This can greatly help reduce the file size. However, when
-@option{--fluxlow} or @option{--fluxhigh} are called, this optimization is
-disabled: even though there are only two values (is binary), the
-difference between them does not correspond to the full contrast of black
-and white.
+By default, when the dataset only has two values, @emph{and} the output format is PDF or EPS, ConvertType will use the PostScript optimization that allows setting the pixel values per bit, not byte (@ref{Recognized file formats}).
+This can greatly help reduce the file size.
+However, when @option{--fluxlow} or @option{--fluxhigh} are called, this optimization is disabled: even though there are only two values (i.e., it is binary), the difference between them does not correspond to the full contrast of black and white.
 
 @item -B INT
 @itemx --forcemax=INT
@@ -11726,67 +8694,43 @@ Similar to @option{--forcemin}, but for the maximum.
 
 @item -i
 @itemx --invert
-For 8-bit output types (JPEG, EPS, and PDF for example) the final value
-that is stored is inverted so white becomes black and vice versa. The
-reason for this is that astronomical images usually have a very large area
-of blank sky in them. The result will be that a large are of the image will
-be black. Note that this behavior is ideal for gray-scale images, if you
-want a color image, the colors are going to be mixed up.
+For 8-bit output types (JPEG, EPS, and PDF for example) the final value that is stored is inverted, so white becomes black and vice versa.
+The reason for this is that astronomical images usually have a very large area of blank sky in them.
+The result will be that a large area of the image will be black.
+Note that this behavior is ideal for gray-scale images; if you want a color image, the colors are going to be mixed up.
 
 @end table
 
 @node Table,  , ConvertType, Data containers
 @section Table
 
-Tables are the products of processing astronomical images and spectra. For
-example in Gnuastro, MakeCatalog will process the defined pixels over an
-object and produce a catalog (see @ref{MakeCatalog}). For each identified
-object, MakeCatalog can print its position on the image or sky, its total
-brightness and many other information that is deducible from the given
-image. Each one of these properties is a column in its output catalog (or
-table) and for each input object, we have a row.
-
-When there are only a small number of objects (rows) and not too many
-properties (columns), then a simple plain text file is mainly enough to
-store, transfer, or even use the produced data. However, to be more
-efficient in all these aspects, astronomers have defined the FITS binary
-table standard to store data in a binary (0 and 1) format, not plain
-text. This can offer major advantages in all those aspects: the file size
-will be greatly reduced and the reading and writing will be faster (because
-the RAM and CPU also work in binary).
-
-The FITS standard also defines a standard for ASCII tables, where the data
-are stored in the human readable ASCII format, but within the FITS file
-structure. These are mainly useful for keeping ASCII data along with images
-and possibly binary data as multiple (conceptually related) extensions
-within a FITS file. The acceptable table formats are fully described in
-@ref{Tables}.
+Tables are the products of processing astronomical images and spectra.
+For example in Gnuastro, MakeCatalog will process the defined pixels over an object and produce a catalog (see @ref{MakeCatalog}).
+For each identified object, MakeCatalog can print its position on the image or sky, its total brightness and much other information that is deducible from the given image.
+Each one of these properties is a column in its output catalog (or table) and for each input object, we have a row.
+
+When there are only a small number of objects (rows) and not too many properties (columns), then a simple plain text file is usually enough to store, transfer, or even use the produced data.
+However, to be more efficient in all these aspects, astronomers have defined the FITS binary table standard to store data in a binary (0 and 1) format, not plain text.
+This can offer major advantages in all those aspects: the file size will be greatly reduced and the reading and writing will be faster (because the RAM and CPU also work in binary).
+
+The FITS standard also defines a standard for ASCII tables, where the data are stored in the human readable ASCII format, but within the FITS file structure.
+These are mainly useful for keeping ASCII data along with images and possibly binary data as multiple (conceptually related) extensions within a FITS file.
+The acceptable table formats are fully described in @ref{Tables}.
 
 @cindex AWK
 @cindex GNU AWK
-Binary tables are not easily readable by human eyes. There is no
-fixed/unified standard on how the zero and ones should be interpreted. The
-Unix-like operating systems have flourished because of a simple fact:
-communication between the various tools is based on human readable
-characters@footnote{In ``The art of Unix programming'', Eric Raymond makes
-this suggestion to programmers: ``When you feel the urge to design a
-complex binary file format, or a complex binary application protocol, it is
-generally wise to lie down until the feeling passes.''. This is a great
-book and strongly recommended, give it a look if you want to truly enjoy
-your work/life in this environment.}. So while the FITS table standards are
-very beneficial for the tools that recognize them, they are hard to use in
-the vast majority of available software. This creates limitations for their
-generic use.
-
-`Table' is Gnuastro's solution to this problem. With Table, FITS tables
-(ASCII or binary) are directly accessible to the Unix-like operating
-systems power-users (those working the command-line or shell, see
-@ref{Command-line interface}). With Table, a FITS table (in binary or ASCII
-formats) is only one command away from AWK (or any other tool you want to
-use). Just like a plain text file that you read with the @command{cat}
-command. You can pipe the output of Table into any other tool for
-higher-level processing, see the examples in @ref{Invoking asttable} for
-some simple examples.
+Binary tables are not easily readable by human eyes.
+There is no fixed/unified standard on how the zeros and ones should be interpreted.
+The Unix-like operating systems have flourished because of a simple fact: communication between the various tools is based on human readable characters@footnote{In ``The art of Unix programming'', Eric Raymond makes this suggestion to programmers: ``When you feel the urge to design a complex binary file format, or a complex binary application protocol, it is generally wise to lie down until the feeling passes.''.
+This is a great book and strongly recommended; give it a look if you want to truly enjoy your work/life in this environment.}.
+So while the FITS table standards are very beneficial for the tools that recognize them, they are hard to use in the vast majority of available software.
+This creates limitations for their generic use.
+
+`Table' is Gnuastro's solution to this problem.
+With Table, FITS tables (ASCII or binary) are directly accessible to power-users of Unix-like operating systems (those working on the command-line or shell, see @ref{Command-line interface}).
+With Table, a FITS table (in binary or ASCII formats) is only one command away from AWK (or any other tool you want to use).
+Just like a plain text file that you read with the @command{cat} command.
+You can pipe the output of Table into any other tool for higher-level processing; see @ref{Invoking asttable} for some simple examples.
 
 @menu
 * Column arithmetic::           How to do operations on table columns.
@@ -11796,86 +8740,57 @@ some simple examples.
 @node Column arithmetic, Invoking asttable, Table, Table
 @subsection Column arithmetic
 
-After reading the requested columns from the input table, you can also do
-operations/arithmetic on the columns and save the resulting values as new
-column(s) in the output table (possibly in between other requested
-columns). To enable column arithmetic, the first 6 characters of the value
-to @option{--column} (@code{-c}) should be the arithmetic activation word
-`@option{arith }' (note the space character in the end, after
-`@code{arith}').
-
-After the activation word, you can use the reverse polish notation to
-identify the operators and their operands, see @ref{Reverse polish
-notation}. Just note that white-space characters are used between the
-tokens of the arithmetic expression and that they are meaningful to the
-command-line environment. Therefore the whole expression (including the
-activation word) has to be quoted on the command-line or in a shell script
-(see the examples below).
-
-To identify a column you can directly use its name, or specify its number
-(counting from one, see @ref{Selecting table columns}). When you are giving
-a column number, it is necessary to prefix it with a @code{c} (otherwise it
-is not distinguishable from a constant number to use in the arithmetic
-operation).
-
-For example with the command below, the first two columns of
-@file{table.fits} will be printed along with a third column that is the
-result of multiplying the first column with @mymath{10^{10}} (for example
-to convert wavelength from Meters to Angstroms). Note how without the
-`@key{c}', it is not possible to distinguish between @key{1} as a
-column-counter, or as a constant number to use in the arithmetic operation.
+After reading the requested columns from the input table, you can also do operations/arithmetic on the columns and save the resulting values as new column(s) in the output table (possibly in between other requested columns).
+To enable column arithmetic, the first 6 characters of the value to @option{--column} (@code{-c}) should be the arithmetic activation word `@option{arith }' (note the space character at the end, after `@code{arith}').
+
+After the activation word, you can use the reverse polish notation to identify the operators and their operands, see @ref{Reverse polish notation}.
+Just note that white-space characters are used between the tokens of the arithmetic expression and that they are meaningful to the command-line environment.
+Therefore the whole expression (including the activation word) has to be quoted on the command-line or in a shell script (see the examples below).
+
+To identify a column you can directly use its name, or specify its number (counting from one, see @ref{Selecting table columns}).
+When you are giving a column number, it is necessary to prefix it with a @code{c} (otherwise it is not distinguishable from a constant number to use in the arithmetic operation).
+
+For example with the command below, the first two columns of @file{table.fits} will be printed along with a third column that is the result of multiplying the first column by @mymath{10^{10}} (for example to convert wavelength from meters to Angstroms).
+Note how without the `@key{c}', it is not possible to distinguish between @key{1} as a column-counter, or as a constant number to use in the arithmetic operation.
 
 @example
 $ asttable table.fits -c1,2 -c"arith c1 1e10 x"
 @end example
 
-Alternatively, if the columns have meta-data and the first two are
-respectively called @code{AWAV} and @code{SPECTRUM}, the command above is
-equivalent to the command below. Note that the @key{c} is no longer
-necessary in this scenario.
+Alternatively, if the columns have meta-data and the first two are respectively called @code{AWAV} and @code{SPECTRUM}, the command above is equivalent to the command below.
+Note that the @key{c} is no longer necessary in this scenario.
 
 @example
 $ asttable table.fits -cAWAV,SPECTRUM -c"arith AWAV 1e10 x"
 @end example
 
-Comparison of the two commands above clearly shows why it is recommended to
-use column names instead of numbers. When the columns have clear names, the
-command/script actually becomes much more readable, describing the intent
-of the operation. It is also independent of the low-level table structure:
-for the second command, the position of the @code{AWAV} and @code{SPECTRUM}
-columns in @file{table.fits} is irrelevant.
+Comparison of the two commands above clearly shows why it is recommended to use column names instead of numbers.
+When the columns have clear names, the command/script actually becomes much more readable, describing the intent of the operation.
+It is also independent of the low-level table structure: for the second command, the position of the @code{AWAV} and @code{SPECTRUM} columns in @file{table.fits} is irrelevant.
 
-Finally, since the arithmetic expressions are a value to @option{--column},
-it doesn't necessarily have to be a separate option, so the commands above
-are also identical to the command below (note that this only has one
-@option{-c} option). Just be very careful with the quoting!
+Finally, since the arithmetic expression is a value to @option{--column}, it doesn't necessarily have to be in a separate option, so the commands above are also identical to the command below (note that this only has one @option{-c} option).
+Just be very careful with the quoting!
 
 @example
 $ asttable table.fits -cAWAV,SPECTRUM,"arith AWAV 1e10 x"
 @end example
 
-Almost all the arithmetic operators of @ref{Arithmetic operators} are also
-supported for column arithmetic in Table. In particular, the few that are
-not present in the Gnuastro library aren't yet supported. For a list of the
-Gnuastro library arithmetic operators, please see the macros starting with
-@code{GAL_ARITHMETIC_OP} and ending with the operator name in
-@ref{Arithmetic on datasets}. Besides the operators in @ref{Arithmetic
-operators}, several operators are only available in Table to use on table
-columns.
+Almost all the arithmetic operators of @ref{Arithmetic operators} are also supported for column arithmetic in Table.
+In particular, the few that are not present in the Gnuastro library aren't yet supported.
+For a list of the Gnuastro library arithmetic operators, please see the macros starting with @code{GAL_ARITHMETIC_OP} and ending with the operator name in @ref{Arithmetic on datasets}.
+Besides the operators in @ref{Arithmetic operators}, several operators are only available in Table to use on table columns.
 
 @cindex WCS: World Coordinate System
 @cindex World Coordinate System (WCS)
 @table @code
 @item wcstoimg
-Convert the given WCS positions to image/dataset coordinates based on the
-number of dimensions in the WCS structure of @option{--wcshdu}
-extension/HDU in @option{--wcsfile}. It will output the same number of
-columns. The first popped operand is the last FITS dimension.
+Convert the given WCS positions to image/dataset coordinates based on the number of dimensions in the WCS structure of @option{--wcshdu} extension/HDU in @option{--wcsfile}.
+It will output the same number of columns.
+The first popped operand is the last FITS dimension.
 
-For example the two commands below (which have the same output) will
-produce 5 columns. The first three columns are the input table's ID, RA and
-Dec columns. The fourth and fifth columns will be the pixel positions in
-@file{image.fits} that correspond to each RA and Dec.
+For example the two commands below (which have the same output) will produce 5 columns.
+The first three columns are the input table's ID, RA and Dec columns.
+The fourth and fifth columns will be the pixel positions in @file{image.fits} that correspond to each RA and Dec.
 
 @example
 $ asttable table.fits -cID,RA,DEC,"arith RA DEC wcstoimg" \
@@ -11885,19 +8800,16 @@ $ asttable table.fits -cID,RA -cDEC \
 @end example
 
 @item imgtowcs
-Similar to @code{wcstoimg}, except that image/dataset coordinates are
-converted to WCS coordinates.
+Similar to @code{wcstoimg}, except that image/dataset coordinates are converted to WCS coordinates.
 @end table
 
 
 @node Invoking asttable,  , Column arithmetic, Table
 @subsection Invoking Table
 
-Table will read/write, select, convert, or show the information of the
-columns in FITS ASCII table, FITS binary table and plain text table files,
-see @ref{Tables}. Output columns can also be determined by number or
-regular expression matching of column names, units, or comments. The
-executable name is @file{asttable} with the following general template
+Table will read/write, select, convert, or show the information of the columns in FITS ASCII table, FITS binary table and plain text table files, see @ref{Tables}.
+Output columns can also be determined by number or regular expression matching of column names, units, or comments.
+The executable name is @file{asttable} with the following general template
 
 @example
 $ asttable [OPTION...] InputFile
@@ -11939,124 +8851,89 @@ $ asttable bintab.fits --sort=3 -ooutput.txt
 $ asttable cat.txt -c"arith c2 c1 -",3,4 -ocat.fits
 @end example
 
-Table's input dataset can be given either as a file or from Standard input
-(see @ref{Standard input}). In the absence of selected columns, all the
-input's columns and rows will be written to the output. If any output file
-is explicitly requested (with @option{--output}) the output table will be
-written in it. When no output file is explicitly requested the output table
-will be written to the standard output.
-
-If the specified output is a FITS file, the type of FITS table (binary or
-ASCII) will be determined from the @option{--tabletype} option. If the
-output is not a FITS file, it will be printed as a plain text table (with
-space characters between the columns). When the columns are accompanied by
-meta-data (like column name, units, or comments), this information will
-also printed in the plain text file before the table, as described in
-@ref{Gnuastro text table format}.
-
-For the full list of options common to all Gnuastro programs please see
-@ref{Common options}. Options can also be stored in directory, user or
-system-wide configuration files to avoid repeating on the command-line, see
-@ref{Configuration files}. Table does not follow Automatic output that is
-common in most Gnuastro programs, see @ref{Automatic output}. Thus, in the
-absence of an output file, the selected columns will be printed on the
-command-line with no column information, ready for redirecting to other
-tools like AWK or sort, similar to the examples above.
+Table's input dataset can be given either as a file or from Standard input (see @ref{Standard input}).
+In the absence of selected columns, all the input's columns and rows will be written to the output.
+If any output file is explicitly requested (with @option{--output}) the output table will be written to it.
+When no output file is explicitly requested, the output table will be written to the standard output.
+
+If the specified output is a FITS file, the type of FITS table (binary or ASCII) will be determined from the @option{--tabletype} option.
+If the output is not a FITS file, it will be printed as a plain text table (with space characters between the columns).
+When the columns are accompanied by meta-data (like column name, units, or comments), this information will also be printed in the plain text file before the table, as described in @ref{Gnuastro text table format}.
+
+For the full list of options common to all Gnuastro programs please see @ref{Common options}.
+Options can also be stored in directory, user or system-wide configuration files to avoid repeating on the command-line, see @ref{Configuration files}.
+Table does not follow Automatic output that is common in most Gnuastro programs, see @ref{Automatic output}.
+Thus, in the absence of an output file, the selected columns will be printed on the command-line with no column information, ready for redirecting to other tools like AWK or sort, similar to the examples above.
 
 @table @option
 
 @item -i
 @itemx --information
-Only print the column information in the specified table on the
-command-line and exit. Each column's information (number, name, units, data
-type, and comments) will be printed as a row on the command-line. Note that
-the FITS standard only requires the data type (see @ref{Numeric data
-types}), and in plain text tables, no meta-data/information is
-mandatory. Gnuastro has its own convention in the comments of a plain text
-table to store and transfer this information as described in @ref{Gnuastro
-text table format}.
-
-This option will take precedence over the @option{--column} option, so when
-it is called along with requested columns, the latter will be ignored. This
-can be useful if you forget the identifier of a column after you have
-already typed some on the command-line. You can simply add a @option{-i}
-and run Table to see the whole list and remember. Then you can use the
-shell history (with the up arrow key on the keyboard), and retrieve the
-last command with all the previously typed columns present, delete
-@option{-i} and add the identifier you had forgot.
+Only print the column information in the specified table on the command-line and exit.
+Each column's information (number, name, units, data type, and comments) will be printed as a row on the command-line.
+Note that the FITS standard only requires the data type (see @ref{Numeric data types}), and in plain text tables, no meta-data/information is mandatory.
+Gnuastro has its own convention in the comments of a plain text table to store and transfer this information as described in @ref{Gnuastro text table format}.
+
+This option will take precedence over the @option{--column} option, so when it is called along with requested columns, the latter will be ignored.
+This can be useful if you forget the identifier of a column after you have already typed some on the command-line.
+You can simply add a @option{-i} and run Table to see the whole list and remember.
+Then you can use the shell history (with the up arrow key on the keyboard), retrieve the last command with all the previously typed columns present, delete @option{-i} and add the identifier you had forgotten.
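+For example, to inspect the columns of a hypothetical @file{cat.fits}:
+
+@example
+$ asttable cat.fits -i
+@end example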
 
 @cindex AWK
 @cindex GNU AWK
 @item -c STR/INT
 @itemx --column=STR/INT
-Set the output columns either by specifying the column number, or name. For
-more on selecting columns, see @ref{Selecting table columns}. If a value of
-this option starts with `@code{arith }', this option will do the requested
-operations/arithmetic on the specified columns and output the result in
-that place (among other requested columns). For more on column arithmetic
-see @ref{Column arithmetic}.
-
-To ask for multiple columns this option can be used in two way: 1) multiple
-calls to this option, 2) using a comma between each column specifier in one
-call to this option. These different solutions may be mixed in one call to
-Table: for example, @option{-cRA,DEC -cMAG}, or @option{-cRA -cDEC -cMAG}
-are both equivalent to @option{-cRA -cDEC -cMAG}. The order of the output
-columns will be the same order given to the option or in the configuration
-files (see @ref{Configuration file precedence}).
-
-This option is not mandatory, if no specific columns are requested, all the
-input table columns are output. When this option is called multiple times,
-it is possible to output one column more than once.
+Set the output columns either by specifying the column number, or name.
+For more on selecting columns, see @ref{Selecting table columns}.
+If a value of this option starts with `@code{arith }', this option will do the requested operations/arithmetic on the specified columns and output the result in that place (among other requested columns).
+For more on column arithmetic see @ref{Column arithmetic}.
+
+To ask for multiple columns this option can be used in two ways: 1) multiple calls to this option, 2) using a comma between each column specifier in one call to this option.
+These different solutions may be mixed in one call to Table: for example, @option{-cRA,DEC -cMAG} and @option{-cRA -cDEC -cMAG} are both equivalent to @option{-cRA,DEC,MAG}.
+The order of the output columns will be the same order given to the option or in the configuration files (see @ref{Configuration file precedence}).
+
+This option is not mandatory; if no specific columns are requested, all the input table columns are output.
+When this option is called multiple times, it is possible to output one column more than once.
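+For example, with a hypothetical @file{cat.fits} containing these columns, the two commands below produce identical output:
+
+@example
+$ asttable cat.fits -cRA,DEC -cMAG
+$ asttable cat.fits -cRA -cDEC -cMAG
+@end example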
 
 @item -w STR
 @itemx --wcsfile=STR
-FITS file that contains the WCS to be used in the @code{wcstoimg} and
-@code{imgtowcs} operators of @option{--column} (see above). The extension
-name/number within the FITS file can be specified with @option{--wcshdu}.
+FITS file that contains the WCS to be used in the @code{wcstoimg} and @code{imgtowcs} operators of @option{--column} (see above).
+The extension name/number within the FITS file can be specified with @option{--wcshdu}.
 
 @item -W STR
 @itemx --wcshdu=STR
-FITS extension/HDU that contains the WCS to be used in the @code{wcstoimg}
-and @code{imgtowcs} operators of @option{--column} (see above). The FITS
-file name can be specified with @option{--wcsfile}.
+FITS extension/HDU that contains the WCS to be used in the @code{wcstoimg} and @code{imgtowcs} operators of @option{--column} (see above).
+The FITS file name can be specified with @option{--wcsfile}.
 
 @item -O
 @itemx --colinfoinstdout
 @cindex Standard output
-Add column metadata when the output is printed in the standard
-output. Usually the standard output is used for a fast visual check or to
-pipe into other program for further processing. So by default meta-data
-aren't included.
+Add column metadata when the output is printed in the standard output.
+Usually the standard output is used for a fast visual check or to pipe into another program for further processing.
+So by default meta-data aren't included.
 
 @item -r STR,FLT:FLT
 @itemx --range=STR,FLT:FLT
-Only print the output rows that have a value within the given range in the
-@code{STR} column (can be a name or counter). Note that the range is only
-inclusive in the lower-limit. For example with @code{--range=sn,5:20} the
-output's columns will only contain rows that have a value in the @code{sn}
-column (not case-sensitive) that is greater or equal to 5, and less than
-20.
+Only print the output rows that have a value within the given range in the @code{STR} column (can be a name or counter).
+Note that the range is only inclusive in the lower-limit.
+For example with @code{--range=sn,5:20} the output's columns will only contain rows that have a value in the @code{sn} column (not case-sensitive) that is greater than or equal to 5, and less than 20.
 
-This option can be called multiple times (different ranges for different
-columns) in one run of the Table program. This is very useful for selecting
-the final rows from multiple criteria/columns.
+This option can be called multiple times (different ranges for different columns) in one run of the Table program.
+This is very useful for selecting the final rows from multiple criteria/columns.
 
-The chosen column doesn't have to be in the output columns. This is good
-when you just want to select using one column's values, but don't need that
-column anymore afterwards.
+The chosen column doesn't have to be in the output columns.
+This is good when you just want to select using one column's values, but don't need that column anymore afterwards.
 
 For one example of using this option, see the example under
 @option{--sigclip-median} in @ref{Invoking aststatistics}.
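+As another minimal sketch (the file and the @code{MAG} column here are hypothetical), the command below combines two ranges:
+
+@example
+$ asttable cat.fits --range=sn,5:20 --range=MAG,18:22
+@end example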
 
 @item -s STR
 @itemx --sort=STR
-Sort the output rows based on the values in the @code{STR} column (can be a
-column name or number). By default the sort is done in ascending/increasing
-order, to sort in a descending order, use @option{--descending}.
+Sort the output rows based on the values in the @code{STR} column (can be a column name or number).
+By default the sort is done in ascending/increasing order; to sort in descending order, use @option{--descending}.
 
-The chosen column doesn't have to be in the output columns. This is good
-when you just want to sort using one column's values, but don't need that
-column anymore afterwards.
+The chosen column doesn't have to be in the output columns.
+This is good when you just want to sort using one column's values, but don't need that column anymore afterwards.
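+For example, to sort the rows of a hypothetical @file{cat.fits} by its @code{MAG} column:
+
+@example
+$ asttable cat.fits --sort=MAG
+@end example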
 
 @item -d
 @itemx --descending
@@ -12064,22 +8941,18 @@ When called with @option{--sort}, rows will be sorted in descending order.
 
 @item -H INT
 @itemx --head=INT
-Only print the given number of rows from the @emph{top} of the final
-table. Note that this option only affects the @emph{output} table. For
-example if you use @option{--sort}, or @option{--range}, the printed rows
-are the first @emph{after} applying the sort sorting, or selecting a range
-of the full input.
+Only print the given number of rows from the @emph{top} of the final table.
+Note that this option only affects the @emph{output} table.
+For example if you use @option{--sort}, or @option{--range}, the printed rows are the first @emph{after} applying the sort, or selecting a range of the full input.
 
 @cindex GNU Coreutils
-If the given value to @option{--head} is 0, the output columns won't have
-any rows and if its larger than the number of rows in the input table, all
-the rows are printed (this option is effectively ignored). This behavior is
-taken from the @command{head} program in GNU Coreutils.
+If the given value to @option{--head} is 0, the output columns won't have any rows and if it is larger than the number of rows in the input table, all the rows are printed (this option is effectively ignored).
+This behavior is taken from the @command{head} program in GNU Coreutils.
 
 @item -t INT
 @itemx --tail=INT
-Only print the given number of rows from the @emph{bottom} of the final
-table. See @option{--head} for more.
+Only print the given number of rows from the @emph{bottom} of the final table.
+See @option{--head} for more.
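+For example, the hypothetical commands below respectively print the ten rows with the smallest and largest values in the @code{MAG} column:
+
+@example
+$ asttable cat.fits --sort=MAG --head=10
+$ asttable cat.fits --sort=MAG --tail=10
+@end example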
 
 @end table
 
@@ -12102,11 +8975,9 @@ table. See @option{--head} for more.
 @node Data manipulation, Data analysis, Data containers, Top
 @chapter Data manipulation
 
-Images are one of the major formats of data that is used in astronomy. The
-functions in this chapter explain the GNU Astronomy Utilities which are
-provided for their manipulation. For example cropping out a part of a
-larger image or convolving the image with a given kernel or applying a
-transformation to it.
+Images are one of the major formats of data used in astronomy.
+The functions in this chapter explain the GNU Astronomy Utilities which are provided for their manipulation.
+For example cropping out a part of a larger image, convolving the image with a given kernel, or applying a transformation to it.
 
 @menu
 * Crop::                        Crop region(s) from a dataset.
@@ -12123,12 +8994,10 @@ transformation to it.
 @cindex Postage stamp images
 @cindex Large astronomical images
 @pindex @r{Crop (}astcrop@r{)}
-Astronomical images are often very large, filled with thousands of
-galaxies. It often happens that you only want a section of the image, or
-you have a catalog of sources and you want to visually analyze them in
-small postage stamps. Crop is made to do all these things. When more than
-one crop is required, Crop will divide the crops between multiple threads
-to significantly reduce the run time.
+Astronomical images are often very large, filled with thousands of galaxies.
+It often happens that you only want a section of the image, or you have a catalog of sources and you want to visually analyze them in small postage stamps.
+Crop is made to do all these things.
+When more than one crop is required, Crop will divide the crops between multiple threads to significantly reduce the run time.
 
 @cindex Mosaicing
 @cindex Image tiles
@@ -12136,29 +9005,19 @@ to significantly reduce the run time.
 @cindex COSMOS survey
 @cindex Imaging surveys
 @cindex Hubble Space Telescope (HST)
-Astronomical surveys are usually extremely large. So large in fact, that
-the whole survey will not fit into a reasonably sized file. Because of
-this, surveys usually cut the final image into separate tiles and store
-each tile in a file. For example the COSMOS survey's Hubble space
-telescope, ACS F814W image consists of 81 separate FITS images, with each
-one having a volume of 1.7 Giga bytes.
+Astronomical surveys are usually extremely large.
+So large in fact, that the whole survey will not fit into a reasonably sized file.
+Because of this, surveys usually cut the final image into separate tiles and store each tile in a file.
+For example the COSMOS survey's Hubble Space Telescope ACS F814W image consists of 81 separate FITS images, with each one having a volume of 1.7 Gigabytes.
 
 @cindex Stitch multiple images
-Even though the tile sizes are chosen to be large enough that too many
-galaxies/targets don't fall on the edges of the tiles, inevitably some
-do. So when you simply crop the image of such targets from one tile, you
-will miss a large area of the surrounding sky (which is essential in
-estimating the noise). Therefore in its WCS mode, Crop will stitch parts of
-the tiles that are relevant for a target (with the given width) from all
-the input images that cover that region into the output. Of course, the
-tiles have to be present in the list of input files.
-
-Besides cropping postage stamps around certain coordinates, Crop can also
-crop arbitrary polygons from an image (or a set of tiles by stitching the
-relevant parts of different tiles within the polygon), see
-@option{--polygon} in @ref{Invoking astcrop}. Alternatively, it can crop
-out rectangular regions through the @option{--section} option from one
-image, see @ref{Crop section syntax}.
+Even though the tile sizes are chosen to be large enough that too many galaxies/targets don't fall on the edges of the tiles, inevitably some do.
+So when you simply crop the image of such targets from one tile, you will miss a large area of the surrounding sky (which is essential in estimating the noise).
+Therefore in its WCS mode, Crop will stitch parts of the tiles that are relevant for a target (with the given width) from all the input images that cover that region into the output.
+Of course, the tiles have to be present in the list of input files.
+
+Besides cropping postage stamps around certain coordinates, Crop can also crop arbitrary polygons from an image (or a set of tiles by stitching the relevant parts of different tiles within the polygon), see @option{--polygon} in @ref{Invoking astcrop}.
+Alternatively, it can crop out rectangular regions through the @option{--section} option from one image, see @ref{Crop section syntax}.
 
 @menu
 * Crop modes::                  Basic modes to define crop region.
@@ -12169,14 +9028,12 @@ image, see @ref{Crop section syntax}.
 
 @node Crop modes, Crop section syntax, Crop, Crop
 @subsection Crop modes
-In order to be comprehensive, intuitive, and easy to use, there are two
-ways to define the crop:
+In order to be comprehensive, intuitive, and easy to use, there are two ways to define the crop:
 
 @enumerate
 @item
-From its center and side length. For example if you already know the
-coordinates of an object and want to inspect it in an image or to generate
-postage stamps of a catalog containing many such coordinates.
+From its center and side length.
+For example if you already know the coordinates of an object and want to inspect it in an image or to generate postage stamps of a catalog containing many such coordinates.
 
 @item
 The vertices of the crop region, this can be useful for larger crops over
@@ -12184,227 +9041,156 @@ many targets, for example to crop out a uniformly deep, or contiguous,
 region of a large survey.
 @end enumerate
 
-Irrespective of how the crop region is defined, the coordinates to define
-the crop can be in Image (pixel) or World Coordinate System (WCS)
-standards. All coordinates are read as floating point numbers (not
-integers, except for the @option{--section} option, see below). By setting
-the @emph{mode} in Crop, you define the standard that the given coordinates
-must be interpreted. Here, the different ways to specify the crop region
-are discussed within each standard. For the full list options, please see
-@ref{Invoking astcrop}.
-
-When the crop is defined by its center, the respective (integer) central
-pixel position will be found internally according to the FITS standard. To
-have this pixel positioned in the center of the cropped region, the final
-cropped region will have an add number of pixels (even if you give an even
-number to @option{--width} in image mode).
-
-Furthermore, when the crop is defined as by its center, Crop allows you to
-only keep crops what don't have any blank pixels in the vicinity of their
-center (your primary target). This can be very convenient when your input
-catalog/coordinates originated from another survey/filter which is not
-fully covered by your input image, to learn more about this feature, please
-see the description of the @option{--checkcenter} option in @ref{Invoking
-astcrop}.
+Irrespective of how the crop region is defined, the coordinates to define the crop can be in Image (pixel) or World Coordinate System (WCS) standards.
+All coordinates are read as floating point numbers (not integers, except for the @option{--section} option, see below).
+By setting the @emph{mode} in Crop, you define the standard in which the given coordinates must be interpreted.
+Here, the different ways to specify the crop region are discussed within each standard.
+For the full list of options, please see @ref{Invoking astcrop}.
+
+When the crop is defined by its center, the respective (integer) central pixel position will be found internally according to the FITS standard.
+To have this pixel positioned in the center of the cropped region, the final cropped region will have an odd number of pixels (even if you give an even number to @option{--width} in image mode).
+
+Furthermore, when the crop is defined by its center, Crop allows you to only keep crops that don't have any blank pixels in the vicinity of their center (your primary target).
+This can be very convenient when your input catalog/coordinates originated from another survey/filter which is not fully covered by your input image.
+To learn more about this feature, please see the description of the @option{--checkcenter} option in @ref{Invoking astcrop}.
 
 @table @asis
 @item Image coordinates
-In image mode (@option{--mode=img}), Crop interprets the pixel coordinates
-and widths in units of the input data-elements (for example pixels in an
-image, not world coordinates). In image mode, only one image may be
-input. The output crop(s) can be defined in multiple ways as listed below.
+In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and widths in units of the input data-elements (for example pixels in an image, not world coordinates).
+In image mode, only one image may be input.
+The output crop(s) can be defined in multiple ways as listed below.
 
 @table @asis
 @item Center of multiple crops (in a catalog)
-The center of (possibly multiple) crops are read from a text file. In this
-mode, the columns identified with the @option{--coordcol} option are
-interpreted as the center of a crop with a width of @option{--width} pixels
-along each dimension. The columns can contain any floating point value. The
-value to @option{--output} option is seen as a directory which will host
-(the possibly multiple) separate crop files, see @ref{Crop output} for
-more. For a tutorial using this feature, please see @ref{Finding reddest
-clumps and visual inspection}.
+The center of (possibly multiple) crops is read from a text file.
+In this mode, the columns identified with the @option{--coordcol} option are interpreted as the center of a crop with a width of @option{--width} pixels along each dimension.
+The columns can contain any floating point value.
+The value given to the @option{--output} option is seen as a directory which will host (the possibly multiple) separate crop files, see @ref{Crop output} for more.
+For a tutorial using this feature, please see @ref{Finding reddest clumps and visual inspection}.
 
 @item Center of a single crop (on the command-line)
-The center of the crop is given on the command-line with the
-@option{--center} option. The crop width is specified by the
-@option{--width} option along each dimension. The given coordinates and
-width can be any floating point number.
+The center of the crop is given on the command-line with the @option{--center} option.
+The crop width is specified by the @option{--width} option along each dimension.
+The given coordinates and width can be any floating point number.
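+For example, the hypothetical command below crops a 201-pixel wide box centered on pixel (568, 2091):
+
+@example
+$ astcrop --mode=img --center=568,2091 --width=201 image.fits
+@end example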
 
 @item Vertices of a single crop
-In Image mode there are two options to define the vertices of a region to
-crop: @option{--section} and @option{--polygon}. The former is lower-level
-(doesn't accept floating point vertices, and only a rectangular region can
-be defined), it is also only available in Image mode. Please see @ref{Crop
-section syntax} for a full description of this method.
-
-The latter option (@option{--polygon}) is a higher-level method to define
-any convex polygon (with any number of vertices) with floating point
-values. Please see the description of this option in @ref{Invoking astcrop}
-for its syntax.
+In Image mode there are two options to define the vertices of a region to crop: @option{--section} and @option{--polygon}.
+The former is lower-level (it doesn't accept floating point vertices, and only a rectangular region can be defined); it is also only available in Image mode.
+Please see @ref{Crop section syntax} for a full description of this method.
+
+The latter option (@option{--polygon}) is a higher-level method to define any convex polygon (with any number of vertices) with floating point values.
+Please see the description of this option in @ref{Invoking astcrop} for its syntax.
 @end table
 
 @item WCS coordinates
-In WCS mode (@option{--mode=wcs}), the coordinates and widths are
-interpreted using the World Coordinate System (WCS, that must accompany
-the dataset), not pixel coordinates. In WCS mode, Crop accepts multiple
-datasets as input. When the cropped region (defined by its center or
-vertices) overlaps with multiple of the input images/tiles, the overlapping
-regions will be taken from the respective input (they will be stitched when
-necessary for each output crop).
-
-In this mode, the input images do not necessarily have to be the same size,
-they just need to have the same orientation and pixel resolution. Currently
-only orientation along the celestial coordinates is accepted, if your input
-has a different orientation you can use Warp's @option{--align} option to
-align the image before cropping it (see @ref{Warp}).
-
-Each individual input image/tile can even be smaller than the final
-crop. In any case, any part of any of the input images which overlaps with
-the desired region will be used in the crop. Note that if there is an
-overlap in the input images/tiles, the pixels from the last input image
-read are going to be used for the overlap. Crop will not change pixel
-values, so it assumes your overlapping tiles were cutout from the same
-original image. There are multiple ways to define your cropped region as
-listed below.
+In WCS mode (@option{--mode=wcs}), the coordinates and widths are interpreted 
using the World Coordinate System (WCS, that must accompany the dataset), not 
pixel coordinates.
+In WCS mode, Crop accepts multiple datasets as input.
+When the cropped region (defined by its center or vertices) overlaps with 
multiple of the input images/tiles, the overlapping regions will be taken from 
the respective input (they will be stitched when necessary for each output 
crop).
+
+In this mode, the input images do not necessarily have to be the same size, 
they just need to have the same orientation and pixel resolution.
+Currently only orientation along the celestial coordinates is accepted, if 
your input has a different orientation you can use Warp's @option{--align} 
option to align the image before cropping it (see @ref{Warp}).
+
+Each individual input image/tile can even be smaller than the final crop.
+In any case, any part of any of the input images which overlaps with the 
desired region will be used in the crop.
+Note that if there is an overlap in the input images/tiles, the pixels from 
the last input image read are going to be used for the overlap.
+Crop will not change pixel values, so it assumes your overlapping tiles were cut out from the same original image.
+There are multiple ways to define your cropped region as listed below.
 
 @table @asis
 
 @item Center of multiple crops (in a catalog)
-Similar to catalog inputs in Image mode (above), except that the values
-along each dimension are assumed to have the same units as the dataset's
-WCS information. For example, the central RA and Dec value for each crop
-will be read from the first and second calls to the @option{--coordcol}
-option. The width of the cropped box (in units of the WCS, or degrees in RA
-and Dec mode) must be specified with the @option{--width} option.
+Similar to catalog inputs in Image mode (above), except that the values along 
each dimension are assumed to have the same units as the dataset's WCS 
information.
+For example, the central RA and Dec value for each crop will be read from the 
first and second calls to the @option{--coordcol} option.
+The width of the cropped box (in units of the WCS, or degrees in RA and Dec 
mode) must be specified with the @option{--width} option.
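+
+For example, assuming a hypothetical catalog @file{targets.fits} (with columns named @code{RA} and @code{DEC}) and a hypothetical @file{image.fits}, a call like the following could be used:
+
+@example
+## File and column names here are only illustrative.
+$ astcrop --mode=wcs --catalog=targets.fits --coordcol=RA \
+          --coordcol=DEC --width=20/3600 image.fits
+@end example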
 
 @item Center of a single crop (on the command-line)
-You can specify the center of only one crop box with the @option{--center}
-option. If it exists in the input images, it will be cropped similar to the
-catalog mode, see above also for @code{--width}.
+You can specify the center of only one crop box with the @option{--center} 
option.
+If it exists in the input images, it will be cropped similarly to the catalog mode; see above also for @option{--width}.
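+
+For example, a minimal sketch with hypothetical coordinates and file name:
+
+@example
+$ astcrop --mode=wcs --center=53.16,-27.78 --width=10/3600 image.fits
+@end example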
 
 @item Vertices of a single crop
-The @option{--polygon} option is a high-level method to define any convex
-polygon (with any number of vertices). Please see the description of this
-option in @ref{Invoking astcrop} for its syntax.
+The @option{--polygon} option is a high-level method to define any convex 
polygon (with any number of vertices).
+Please see the description of this option in @ref{Invoking astcrop} for its 
syntax.
 @end table
 
 @cartouche
 @noindent
-@strong{CAUTION:} In WCS mode, the image has to be aligned with the
-celestial coordinates, such that the first FITS axis is parallel (opposite
-direction) to the Right Ascension (RA) and the second FITS axis is parallel
-to the declination. If these conditions aren't met for an image, Crop will
-warn you and abort. You can use Warp's @option{--align} option to align the
-input image with these coordinates, see @ref{Warp}.
+@strong{CAUTION:} In WCS mode, the image has to be aligned with the celestial 
coordinates, such that the first FITS axis is parallel (opposite direction) to 
the Right Ascension (RA) and the second FITS axis is parallel to the 
declination.
+If these conditions aren't met for an image, Crop will warn you and abort.
+You can use Warp's @option{--align} option to align the input image with these 
coordinates, see @ref{Warp}.
 @end cartouche
 
 @end table
 
-As a summary, if you don't specify a catalog, you have to define the
-cropped region manually on the command-line.  In any case the mode is
-mandatory for Crop to be able to interpret the values given as coordinates
-or widths.
+In summary, if you don't specify a catalog, you have to define the cropped region manually on the command-line.
+In any case, the mode is mandatory for Crop to be able to interpret the values given as coordinates or widths.
 
 
 @node Crop section syntax, Blank pixels, Crop modes, Crop
 @subsection Crop section syntax
 
 @cindex Crop a given section of image
-When in image mode, one of the methods to crop only one rectangular section
-from the input image is to use the @option{--section} option. Crop has a
-powerful syntax to read the box parameters from a string of characters. If
-you leave certain parts of the string to be empty, Crop can fill them for
-you based on the input image sizes.
+When in Image mode, one of the methods to crop a single rectangular section from the input image is to use the @option{--section} option.
+Crop has a powerful syntax to read the box parameters from a string of characters.
+If you leave certain parts of the string empty, Crop can fill them for you based on the input image size.
 
 @cindex Define section to crop
-To define a box, you need the coordinates of two points: the first
-(@code{X1}, @code{Y1}) and the last pixel (@code{X2}, @code{Y2}) pixel
-positions in the image, or four integer numbers in total. The four
-coordinates can be specified with one string in this format:
-`@command{X1:X2,Y1:Y2}'. This string is given to the @option{--section}
-option. Therefore, the pixels along the first axis that are
-@mymath{\geq}@command{X1} and @mymath{\leq}@command{X2} will be included in
-the cropped image. The same goes for the second axis. Note that each
-different term will be read as an integer, not a float. This is a low-level
-option, for a higher-level way to specify region (any polygon, not just a
-box), please see the @option{--polygon} option in @ref{Crop options}. Also
-note that in the FITS standard, pixel indexes along each axis start from
-unity(1) not zero(0).
+To define a box, you need the coordinates of two points: the positions of the first (@code{X1}, @code{Y1}) and last (@code{X2}, @code{Y2}) pixels in the image, or four integer numbers in total.
+The four coordinates can be specified with one string in this format: 
`@command{X1:X2,Y1:Y2}'.
+This string is given to the @option{--section} option.
+Therefore, the pixels along the first axis that are @mymath{\geq}@command{X1} 
and @mymath{\leq}@command{X2} will be included in the cropped image.
+The same goes for the second axis.
+Note that each term will be read as an integer, not a float.
+This is a low-level option; for a higher-level way to specify a region (any polygon, not just a box), please see the @option{--polygon} option in @ref{Crop options}.
+Also note that in the FITS standard, pixel indexes along each axis start from unity (1), not zero (0).
 
 @cindex Crop section format
 You can omit any of the values and they will be filled automatically.
-The left hand side of the colon (@command{:}) will be filled with
-@command{1}, and the right side with the image size. So, @command{2:,:}
-will include the full range of pixels along the second axis and only
-those with a first axis index larger than @command{2} in the first
-axis. If the colon is omitted for a dimension, then the full range is
-automatically used. So the same string is also equal to @command{2:,}
-or @command{2:} or even @command{2}. If you want such a case for the
-second axis, you should set it to: @command{,2}.
-
-If you specify a negative value, it will be seen as before the indexes of
-the image which are outside the image along the bottom or left sides when
-viewed in SAO ds9. In case you want to count from the top or right sides of
-the image, you can use an asterisk (@option{*}). When confronted with a
-@option{*}, Crop will replace it with the maximum length of the image in
-that dimension. So @command{*-10:*+10,*-20:*+20} will mean that the crop
-box will be @math{20\times40} pixels in size and only include the top
-corner of the input image with 3/4 of the image being covered by blank
-pixels, see @ref{Blank pixels}.
-
-If you feel more comfortable with space characters between the values, you
-can use as many space characters as you wish, just be careful to put your
-value in double quotes, for example @command{--section="5:200,
-123:854"}. If you forget the quotes, anything after the first space will
-not be seen by @option{--section} and you will most probably get an error
-because the rest of your string will be read as a filename (which most
-probably doesn't exist). See @ref{Command-line} for a description of how
-the command-line works.
+The left-hand side of the colon (@command{:}) will be filled with @command{1}, and the right side with the image size.
+So, @command{2:,:} will include the full range of pixels along the second axis, but only those with an index greater than or equal to @command{2} along the first axis.
+If the colon is omitted for a dimension, then the full range is automatically 
used.
+So the same string is also equal to @command{2:,} or @command{2:} or even 
@command{2}.
+If you want such a case for the second axis, you should set it to: 
@command{,2}.
+
+If you specify a negative value, it will be interpreted as an index before the start of the image, i.e., outside the image along the bottom or left sides when viewed in SAO ds9.
+In case you want to count from the top or right sides of the image, you can 
use an asterisk (@option{*}).
+When confronted with a @option{*}, Crop will replace it with the maximum 
length of the image in that dimension.
+So @command{*-10:*+10,*-20:*+20} means that the crop box will be @math{20\times40} pixels in size and will only include the top corner of the input image, with 3/4 of the cropped image being covered by blank pixels, see @ref{Blank pixels}.
+
+If you feel more comfortable with space characters between the values, you can 
use as many space characters as you wish, just be careful to put your value in 
double quotes, for example @command{--section="5:200, 123:854"}.
+If you forget the quotes, anything after the first space will not be seen by 
@option{--section} and you will most probably get an error because the rest of 
your string will be read as a filename (which most probably doesn't exist).
+See @ref{Command-line} for a description of how the command-line works.
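+
+As a rough sketch of this syntax (the file name and ranges here are only illustrative), the following calls crop a fixed box, the same box with spaces in the string, and everything from the 10th pixel onwards along the first axis:
+
+@example
+$ astcrop --mode=img --section=10:200,50:300 image.fits
+$ astcrop --mode=img --section="10:200, 50:300" image.fits
+$ astcrop --mode=img --section=10: image.fits
+@end example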
 
 
 @node Blank pixels, Invoking astcrop, Crop section syntax, Crop
 @subsection Blank pixels
 
 @cindex Blank pixel
-The cropped box can potentially include pixels that are beyond the image
-range. For example when a target in the input catalog was very near the
-edge of the input image. The parts of the cropped image that were not in
-the input image will be filled with the following two values depending on
-the data type of the image. In both cases, SAO ds9 will not color code
-those pixels.
+The cropped box can potentially include pixels that are beyond the image range.
+This happens, for example, when a target in the input catalog is very near the edge of the input image.
+The parts of the cropped image that were not in the input image will be filled with one of the following two values, depending on the data type of the image.
+In both cases, SAO ds9 will not color code those pixels.
 @itemize
 @item
-If the data type of the image is a floating point type (float or
-double), IEEE NaN (Not a number) will be used.
+If the data type of the image is a floating point type (float or double), IEEE 
NaN (Not a number) will be used.
 @item
-For integer types, pixels out of the image will be filled with the
-value of the @command{BLANK} keyword in the cropped image header. The
-value assigned to it is the lowest value possible for that type, so
-you will probably never need it any way. Only for the unsigned
-character type (@command{BITPIX=8} in the FITS header), the maximum
-value is used because it is unsigned, the smallest value is zero which
-is often meaningful.
+For integer types, pixels out of the image will be filled with the value of 
the @command{BLANK} keyword in the cropped image header.
+The value assigned to it is the lowest value possible for that type, so you will probably never need it anyway.
+Only for the unsigned character type (@command{BITPIX=8} in the FITS header) is the maximum value used instead: since the type is unsigned, its smallest value is zero, which is often meaningful.
 @end itemize
-You can ask for such blank regions to not be included in the output
-crop image using the @option{--noblank} option. In such cases, there
-is no guarantee that the image size of your outputs are what you asked
-for.
+You can ask for such blank regions to not be included in the output crop image 
using the @option{--noblank} option.
+In such cases, there is no guarantee that the image sizes of your outputs are what you asked for.
 
-In some survey images, unfortunately they do not use the
-@command{BLANK} FITS keyword. Instead they just give all pixels
-outside of the survey area a value of zero. So by default, when
-dealing with float or double image types, any values that are 0.0 are
-also regarded as blank regions. This can be turned off with the
-@option{--zeroisnotblank} option.
+Unfortunately, some survey images do not use the @command{BLANK} FITS keyword.
+Instead, they just give all pixels outside of the survey area a value of zero.
+So by default, when dealing with float or double image types, any values that 
are 0.0 are also regarded as blank regions.
+This can be turned off with the @option{--zeroisnotblank} option.
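+
+For example, to keep zero-valued pixels as data in a hypothetical floating point image:
+
+@example
+## The file name and coordinates are only illustrative.
+$ astcrop --mode=img --center=500,500 --width=101 \
+          --zeroisnotblank survey.fits
+@end example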
 
 
 @node Invoking astcrop,  , Blank pixels, Crop
 @subsection Invoking Crop
 
-Crop will crop a region from an image. If in WCS mode, it will also
-stitch parts from separate images in the input files. The executable name
-is @file{astcrop} with the following general template
+Crop will crop a region from an image.
+If in WCS mode, it will also stitch parts from separate images in the input 
files.
+The executable name is @file{astcrop} with the following general template
 
 @example
 $ astcrop [OPTION...] [ASCIIcatalog] ASTRdata ...
@@ -12433,43 +9219,30 @@ $ astcrop --mode=img --center=568.342,2091.719 
--width=201 image.fits
 @end example
 
 @noindent
-Crop has one mandatory argument which is the input image name(s), shown
-above with @file{ASTRdata ...}. You can use shell expansions, for example
-@command{*} for this if you have lots of images in WCS mode. If the crop
-box centers are in a catalog, you can use the @option{--catalog} option. In
-other cases, you have to provide the single cropped output parameters must
-be given with command-line options. See @ref{Crop output} for how the
-output file name(s) can be specified. For the full list of general options
-to all Gnuastro programs (including Crop), please see @ref{Common options}.
-
-Floating point numbers can be used to specify the crop region (except the
-@option{--section} option, see @ref{Crop section syntax}). In such cases,
-the floating point values will be used to find the desired integer pixel
-indices based on the FITS standard. Hence, Crop ultimately doesn't do any
-sub-pixel cropping (in other words, it doesn't change pixel values). If you
-need such crops, you can use @ref{Warp} to first warp the image to the a
-new pixel grid, then crop from that. For example, let's assume you want a
-crop from pixels 12.982 to 80.982 along the first dimension. You should
-first translate the image by @mymath{-0.482} (note that the edge of a pixel
-is at integer multiples of @mymath{0.5}). So you should run Warp with
-@option{--translate=-0.482,0} and then crop the warped image with
-@option{--section=13:81}.
-
-There are two ways to define the cropped region: with its center or its
-vertices. See @ref{Crop modes} for a full description. In the former case,
-Crop can check if the central region of the cropped image is indeed filled
-with data or is blank (see @ref{Blank pixels}), and not produce any output
-when the center is blank, see the description under @option{--checkcenter}
-for more.
+Crop has one mandatory argument which is the input image name(s), shown above 
with @file{ASTRdata ...}.
+You can use shell expansions (for example @command{*}) for this if you have lots of images in WCS mode.
+If the crop box centers are in a catalog, you can use the @option{--catalog} 
option.
+In other cases, the parameters of the single cropped output must be given with command-line options.
+See @ref{Crop output} for how the output file name(s) can be specified.
+For the full list of general options to all Gnuastro programs (including 
Crop), please see @ref{Common options}.
+
+Floating point numbers can be used to specify the crop region (except the 
@option{--section} option, see @ref{Crop section syntax}).
+In such cases, the floating point values will be used to find the desired 
integer pixel indices based on the FITS standard.
+Hence, Crop ultimately doesn't do any sub-pixel cropping (in other words, it 
doesn't change pixel values).
+If you need such crops, you can use @ref{Warp} to first warp the image to a new pixel grid, then crop from that.
+For example, let's assume you want a crop from pixels 12.982 to 80.982 along 
the first dimension.
+You should first translate the image by @mymath{-0.482} (note that the edge of 
a pixel is at integer multiples of @mymath{0.5}).
+So you should run Warp with @option{--translate=-0.482,0} and then crop the 
warped image with @option{--section=13:81}.
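+
+As a sketch of the two commands for this example (the file names are only illustrative, and the output name is set explicitly for clarity):
+
+@example
+$ astwarp --translate=-0.482,0 image.fits --output=warped.fits
+$ astcrop --mode=img --section=13:81 warped.fits
+@end example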
+
+There are two ways to define the cropped region: with its center or its 
vertices.
+See @ref{Crop modes} for a full description.
+In the former case, Crop can check if the central region of the cropped image is indeed filled with data or is blank (see @ref{Blank pixels}), and not produce any output when the center is blank; see the description under @option{--checkcenter} for more.
 
 @cindex Asynchronous thread allocation
-When in catalog mode, Crop will run in parallel unless you set
-@option{--numthreads=1}, see @ref{Multi-threaded operations}. Note that
-when multiple outputs are created with threads, the outputs will not be
-created in the same order. This is because the threads are asynchronous and
-thus not started in order. This has no effect on each output, see
-@ref{Finding reddest clumps and visual inspection} for a tutorial on
-effectively using this feature.
+When in catalog mode, Crop will run in parallel unless you set 
@option{--numthreads=1}, see @ref{Multi-threaded operations}.
+Note that when multiple outputs are created with threads, the outputs will not 
be created in the same order.
+This is because the threads are asynchronous and thus not started in order.
+This has no effect on the content of each output; see @ref{Finding reddest clumps and visual inspection} for a tutorial on effectively using this feature.
 
 @menu
 * Crop options::                A list of all the options with explanation.
@@ -12479,19 +9252,13 @@ effectively using this feature.
 @node Crop options, Crop output, Invoking astcrop, Invoking astcrop
 @subsubsection Crop options
 
-The options can be classified into the following contexts: Input,
-Output and operating mode options. Options that are common to all
-Gnuastro program are listed in @ref{Common options} and will not be
-repeated here.
+The options can be classified into the following contexts: Input, Output and 
operating mode options.
+Options that are common to all Gnuastro programs are listed in @ref{Common options} and will not be repeated here.
 
-When you are specifying the crop vertices your self (through
-@option{--section}, or @option{--polygon}) on relatively small regions
-(depending on the resolution of your images) the outputs from image and WCS
-mode can be approximately equivalent. However, as the crop sizes get large,
-the curved nature of the WCS coordinates have to be considered. For
-example, when using @option{--section}, the right ascension of the bottom
-left and top left corners will not be equal. If you only want regions
-within a given right ascension, use @option{--polygon} in WCS mode.
+When you are specifying the crop vertices yourself (through @option{--section} or @option{--polygon}) on relatively small regions (depending on the resolution of your images), the outputs from Image and WCS mode can be approximately equivalent.
+However, as the crop sizes get large, the curved nature of the WCS coordinates has to be considered.
+For example, when using @option{--section}, the right ascension of the bottom 
left and top left corners will not be equal.
+If you only want regions within a given right ascension, use 
@option{--polygon} in WCS mode.
 
 @noindent
 Input image parameters:
@@ -12499,29 +9266,20 @@ Input image parameters:
 
 @item --hstartwcs=INT
 @cindex CANDELS survey
-Specify the first keyword card (line number) to start finding the input
-image world coordinate system information. Distortions were only recently
-included in WCSLIB (from version 5). Therefore until now, different
-telescope would apply their own specific set of WCS keywords and put them
-into the image header along with those that WCSLIB does recognize. So now
-that WCSLIB recognizes most of the standard distortion parameters, they
-will get confused with the old ones and give completely wrong results. For
-example in the CANDELS-GOODS South
-images@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
-
-The two @option{--hstartwcs} and @option{--hendwcs} are thus provided
-so when using older datasets, you can specify what region in the FITS
-headers you want to use to read the WCS keywords. Note that this is
-only relevant for reading the WCS information, basic data information
-like the image size are read separately. These two options will only
-be considered when the value to @option{--hendwcs} is larger than that
-of @option{--hstartwcs}. So if they are equal or @option{--hstartwcs}
-is larger than @option{--hendwcs}, then all the input keywords will be
-parsed to get the WCS information of the image.
+Specify the first keyword card (line number) to start finding the input image 
world coordinate system information.
+Distortions were only recently included in WCSLIB (from version 5).
+Therefore, until recently, different telescopes would apply their own specific sets of WCS keywords and put them into the image header along with those that WCSLIB does recognize.
+So now that WCSLIB recognizes most of the standard distortion parameters, the old keywords can confuse it and give completely wrong results.
+This happens, for example, in the CANDELS-GOODS South images@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
+
+The two options @option{--hstartwcs} and @option{--hendwcs} are thus provided so that, when using older datasets, you can specify what region in the FITS headers you want to use to read the WCS keywords.
+Note that this is only relevant for reading the WCS information; basic data information like the image size is read separately.
+These two options will only be considered when the value to @option{--hendwcs} 
is larger than that of @option{--hstartwcs}.
+So if they are equal or @option{--hstartwcs} is larger than 
@option{--hendwcs}, then all the input keywords will be parsed to get the WCS 
information of the image.
 
 @item --hendwcs=INT
-Specify the last keyword card to read for specifying the image world
-coordinate system on the input images. See @option{--hstartwcs}
+Specify the last keyword card to read for specifying the image world 
coordinate system on the input images.
+See @option{--hstartwcs}.
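+
+For example, if the WCS keywords you trust lie between header cards 10 and 100 (the card numbers and file name here are only illustrative):
+
+@example
+$ astcrop --hstartwcs=10 --hendwcs=100 --mode=wcs \
+          --center=53.16,-27.78 --width=10/3600 old.fits
+@end example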
 
 @end table
 
@@ -12531,71 +9289,51 @@ Crop box parameters:
 
 @item -c FLT[,FLT[,...]]
 @itemx --center=FLT[,FLT[,...]]
-The central position of the crop in the input image. The positions along
-each dimension must be separated by a comma (@key{,}) and fractions are
-also acceptable. The number of values given to this option must be the same
-as the dimensions of the input dataset. The width of the crop should be set
-with @code{--width}. The units of the coordinates are read based on the
-value to the @option{--mode} option, see below.
+The central position of the crop in the input image.
+The positions along each dimension must be separated by a comma (@key{,}) and 
fractions are also acceptable.
+The number of values given to this option must be the same as the dimensions 
of the input dataset.
+The width of the crop should be set with @code{--width}.
+The units of the coordinates are read based on the value to the 
@option{--mode} option, see below.
 
 @item -w FLT[,FLT[,...]]
 @itemx --width=FLT[,FLT[,...]]
-Width of the cropped region about its center. @option{--width} may take
-either a single value (to be used for all dimensions) or multiple values (a
-specific value for each dimension). If in WCS mode, value(s) given to this
-option will be read in the same units as the dataset's WCS information
-along this dimension. The final output will have an odd number of pixels to
-allow easy identification of the pixel which keeps your requested
-coordinate (from @option{--center} or @option{--catalog}).
-
-The @code{--width} option also accepts fractions. For example if you want
-the width of your crop to be 3 by 5 arcseconds along RA and Dec
-respectively, you can call it with: @option{--width=3/3600,5/3600}.
-
-If you want an even sided crop, you can run Crop afterwards with
-@option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on
-which side you don't need), see @ref{Crop section syntax}.
+Width of the cropped region about its center.
+@option{--width} may take either a single value (to be used for all 
dimensions) or multiple values (a specific value for each dimension).
+If in WCS mode, value(s) given to this option will be read in the same units 
as the dataset's WCS information along this dimension.
+The final output will have an odd number of pixels to allow easy 
identification of the pixel which keeps your requested coordinate (from 
@option{--center} or @option{--catalog}).
+
+The @option{--width} option also accepts fractions.
+For example, if you want the width of your crop to be 3 by 5 arcseconds along RA and Dec respectively, you can call it with @option{--width=3/3600,5/3600}.
+
+If you want an even-sided crop, you can run Crop afterwards with @option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which side you don't need), see @ref{Crop section syntax}.
 
 @item -l STR
 @itemx --polygon=STR
-String of crop polygon vertices. Note that currently only convex polygons
-should be used. In the future we will make it work for all kinds of
-polygons. Convex polygons are polygons that do not have an internal angle
-more than 180 degrees. This option can be used both in the image and WCS
-modes, see @ref{Crop modes}. The cropped image will be the size of the
-rectangular region that completely encompasses the polygon. By default all
-the pixels that are outside of the polygon will be set as blank values (see
-@ref{Blank pixels}). However, if @option{--outpolygon} is called all pixels
-internal to the vertices will be set to blank.
-
-The syntax for the polygon vertices is similar to, and simpler than, that
-for @option{--section}. In short, the dimensions of each coordinate are
-separated by a comma (@key{,}) and each vertex is separated by a colon
-(@key{:}). You can define as many vertices as you like. If you would like
-to use space characters between the dimensions and vertices to make them
-more human-readable, then you have to put the value to this option in
-double quotation marks.
-
-For example, let's assume you want to work on the deepest part of the
-WFC3/IR images of Hubble Space Telescope eXtreme Deep Field
-(HST-XDF). @url{https://archive.stsci.edu/prepds/xdf/, According to the
-webpage}@footnote{@url{https://archive.stsci.edu/prepds/xdf/}} the deepest
-part is contained within the coordinates:
+String of crop polygon vertices.
+Note that currently only convex polygons should be used.
+In the future we will make it work for all kinds of polygons.
+Convex polygons are polygons that do not have an internal angle of more than 180 degrees.
+This option can be used both in the image and WCS modes, see @ref{Crop modes}.
+The cropped image will be the size of the rectangular region that completely 
encompasses the polygon.
+By default all the pixels that are outside of the polygon will be set as blank 
values (see @ref{Blank pixels}).
+However, if @option{--outpolygon} is called, all pixels internal to the vertices will be set to blank.
+
+The syntax for the polygon vertices is similar to, and simpler than, that for 
@option{--section}.
+In short, the dimensions of each coordinate are separated by a comma (@key{,}) 
and each vertex is separated by a colon (@key{:}).
+You can define as many vertices as you like.
+If you would like to use space characters between the dimensions and vertices 
to make them more human-readable, then you have to put the value to this option 
in double quotation marks.
+
+For example, let's assume you want to work on the deepest part of the WFC3/IR 
images of Hubble Space Telescope eXtreme Deep Field (HST-XDF).
+@url{https://archive.stsci.edu/prepds/xdf/, According to the 
webpage}@footnote{@url{https://archive.stsci.edu/prepds/xdf/}} the deepest part 
is contained within the coordinates:
 
 @example
 [ (53.187414,-27.779152), (53.159507,-27.759633),
   (53.134517,-27.787144), (53.161906,-27.807208) ]
 @end example
 
-They have provided mask images with only these pixels in the WFC3/IR
-images, but what if you also need to work on the same region in the full
-resolution ACS images? Also what if you want to use the CANDELS data for
-the shallow region? Running Crop with @option{--polygon} will easily pull
-out this region of the image for you irrespective of the resolution. If you
-have set the operating mode to WCS mode in your nearest configuration file
-(see @ref{Configuration files}), there is no need to call
-@option{--mode=wcs} on the command line. You may also provide many FITS
-images/tiles and Crop will stitch them to produce this cropped region:
+They have provided mask images with only these pixels in the WFC3/IR images, but what if you also need to work on the same region in the full-resolution ACS images?
+Also, what if you want to use the CANDELS data for the shallow region?
+Running Crop with @option{--polygon} will easily pull out this region of the image for you, irrespective of the resolution.
+If you have set the operating mode to WCS mode in your nearest configuration 
file (see @ref{Configuration files}), there is no need to call 
@option{--mode=wcs} on the command line.
+You may also provide many FITS images/tiles and Crop will stitch them to 
produce this cropped region:
 
 @example
 $ astcrop --mode=wcs desired-filter-image(s).fits           \
@@ -12603,30 +9341,19 @@ $ astcrop --mode=wcs desired-filter-image(s).fits      
     \
               53.134517,-27.787144 : 53.161906,-27.807208"
 @end example
 @cindex SAO DS9
-In other cases, you have an image and want to define the polygon yourself
-(it isn't already published like the example above). As the number of
-vertices increases, checking the vertex coordinates on a FITS viewer (for
-example SAO ds9) and typing them in one by one can be very tedious and
-prone to typo errors.
-
-You can take the following steps to avoid the frustration and possible
-typos: Open the image with ds9 and activate its ``region'' mode with
-@clicksequence{Edit@click{}Region}. Then define the region as a polygon
-with @clicksequence{Region@click{}Shape@click{}Polygon}. Click on the
-approximate center of the region you want and a small square will
-appear. By clicking on the vertices of the square you can shrink or expand
-it, clicking and dragging anywhere on the edges will enable you to define a
-new vertex. After the region has been nicely defined, save it as a file
-with @clicksequence{Region@click{}Save Regions}. You can then select the
-name and address of the output file, keep the format as @command{REG} and
-press ``OK''. In the next window, keep format as ``ds9'' and ``Coordinate
-System'' as ``fk5''. A plain text file (let's call it @file{ds9.reg}) is
-now created.
-
-You can now convert this plain text file to Crop's polygon format with this
-command (when typing on the command-line, ignore the ``@key{\}'' at the end
-of the first and second lines along with the extra spaces, these are only
-for nice printing):
+In other cases, you have an image and want to define the polygon yourself (it 
isn't already published like the example above).
+As the number of vertices increases, checking the vertex coordinates on a FITS 
viewer (for example SAO ds9) and typing them in one by one can be very tedious 
and prone to typo errors.
+
+You can take the following steps to avoid the frustration and possible typos: 
Open the image with ds9 and activate its ``region'' mode with 
@clicksequence{Edit@click{}Region}.
+Then define the region as a polygon with 
@clicksequence{Region@click{}Shape@click{}Polygon}.
+Click on the approximate center of the region you want and a small square will 
appear.
+By clicking on the vertices of the square you can shrink or expand it; clicking and dragging anywhere on the edges will enable you to define a new vertex.
+After the region has been nicely defined, save it as a file with 
@clicksequence{Region@click{}Save Regions}.
+You can then select the name and location of the output file, keep the format as @command{REG} and press ``OK''.
+In the next window, keep the format as ``ds9'' and the ``Coordinate System'' as ``fk5''.
+A plain text file (let's call it @file{ds9.reg}) is now created.
+
+You can now convert this plain text file to Crop's polygon format with this command (when typing on the command-line, ignore the ``@key{\}'' at the end of the first and second lines along with the extra spaces; these are only for nice printing):
 
 @example
 $ v=$(awk 'NR==4' ds9.reg | sed -e's/polygon(//'        \
@@ -12635,43 +9362,34 @@ $ astcrop --mode=wcs image.fits --polygon=$v
 @end example
 
 @item --outpolygon
-Keep all the regions outside the polygon and mask the inner ones with blank
-pixels (see @ref{Blank pixels}). This is practically the inverse of the
-default mode of treating polygons. Note that this option only works when
-you have only provided one input image. If multiple images are given (in
-WCS mode), then the full area covered by all the images has to be shown and
-the polygon excluded. This can lead to a very large area if large surveys
-like COSMOS are used. So Crop will abort and notify you. In such cases, it
-is best to crop out the larger region you want, then mask the smaller
-region with this option.
+Keep all the regions outside the polygon and mask the inner ones with blank 
pixels (see @ref{Blank pixels}).
+This is practically the inverse of the default mode of treating polygons.
+Note that this option only works when you have provided only one input image.
+If multiple images are given (in WCS mode), then the full area covered by all the images would have to be included in the output, with only the polygon excluded.
+This can lead to a very large area if large surveys like COSMOS are used.
+So Crop will abort and notify you.
+In such cases, it is best to crop out the larger region you want, then mask 
the smaller region with this option.
 
 @item -s STR
 @itemx --section=STR
-Section of the input image which you want to be cropped. See @ref{Crop
-section syntax} for a complete explanation on the syntax required for this
-input.
+Section of the input image which you want to be cropped.
+See @ref{Crop section syntax} for a complete explanation on the syntax 
required for this input.
 
 @item -x STR/INT
 @itemx --coordcol=STR/INT
-The column in a catalog to read as a coordinate. The value can be either
-the column number (starting from 1), or a match/search in the table
-meta-data, see @ref{Selecting table columns}. This option must be called
-multiple times, depending on the number of dimensions in the input
-dataset. If it is called more than necessary, the extra columns (later
-calls to this option on the command-line or configuration files) will be
-ignored, see @ref{Configuration file precedence}.
+The column in a catalog to read as a coordinate.
+The value can be either the column number (starting from 1), or a match/search 
in the table meta-data, see @ref{Selecting table columns}.
+This option must be called multiple times, depending on the number of 
dimensions in the input dataset.
+If it is called more than necessary, the extra columns (later calls to this 
option on the command-line or configuration files) will be ignored, see 
@ref{Configuration file precedence}.
 
 @item -n STR/INT
 @itemx --namecol=STR/INT
-Column selection of crop file name. The value can be either the column
-number (starting from 1), or a match/search in the table meta-data, see
-@ref{Selecting table columns}. This option can be used both in Image and
-WCS modes, and not a mandatory. When a column is given to this option, the
-final crop base file name will be taken from the contents of this
-column. The directory will be determined by the @option{--output} option
-(current directory if not given) and the value to @option{--suffix} will be
-appended. When this column isn't given, the row number will be used
-instead.
+Column selection of crop file name.
+The value can be either the column number (starting from 1), or a match/search 
in the table meta-data, see @ref{Selecting table columns}.
+This option can be used in both Image and WCS modes, and is not mandatory.
+When a column is given to this option, the final crop base file name will be 
taken from the contents of this column.
+The directory will be determined by the @option{--output} option (current 
directory if not given) and the value to @option{--suffix} will be appended.
+When this column isn't given, the row number will be used instead.
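+
+For example, assuming a hypothetical catalog with @code{RA}, @code{DEC} and @code{NAME} columns, the crops can be named after the targets with a call like:
+
+@example
+## File and column names here are only illustrative.
+$ astcrop --mode=wcs --catalog=targets.fits --coordcol=RA \
+          --coordcol=DEC --namecol=NAME --width=1/60 image.fits
+@end example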
 
 @end table
 
@@ -12682,57 +9400,40 @@ Output options:
 @item -c FLT/INT
 @itemx --checkcenter=FLT/INT
 @cindex Check center of crop
-Square box width of region in the center of the image to check for blank
-values. If any of the pixels in this central region of a crop (defined by
-its center) are blank, then it will not be stored in an output file. If the
-value to this option is zero, no checking is done. This check is only
-applied when the cropped region(s) are defined by their center (not by the
-vertices, see @ref{Crop modes}).
-
-The units of the value are interpreted based on the @option{--mode} value
-(in WCS or pixel units). The ultimate checked region size (in pixels) will
-be an odd integer around the center (converted from WCS, or when an even
-number of pixels are given to this option). In WCS mode, the value can be
-given as fractions, for example if the WCS units are in degrees,
-@code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
-
-Because survey regions don't often have a clean square or rectangle shape,
-some of the pixels on the sides of the survey FITS image don't commonly
-have any data and are blank (see @ref{Blank pixels}). So when the catalog
-was not generated from the input image, it often happens that the image
-does not have data over some of the points.
-
-When the given center of a crop falls in such regions or outside the
-dataset, and this option has a non-zero value, no crop will be
-created. Therefore with this option, you can specify a width of a small box
-(3 pixels is often good enough) around the central pixel of the cropped
-image. You can check which crops were created and which weren't from the
-command-line (if @option{--quiet} was not called, see @ref{Operating mode
-options}), or in Crop's log file (see @ref{Crop output}).
+Width of the square box region in the center of the image to check for blank values.
+If any of the pixels in this central region of a crop (defined by its center) 
are blank, then it will not be stored in an output file.
+If the value to this option is zero, no checking is done.
+This check is only applied when the cropped region(s) are defined by their 
center (not by the vertices, see @ref{Crop modes}).
+
+The units of the value are interpreted based on the @option{--mode} value (in 
WCS or pixel units).
+The ultimate checked region size (in pixels) will be an odd integer around the 
center (converted from WCS, or when an even number of pixels are given to this 
option).
+In WCS mode, the value can be given as a fraction; for example, if the WCS units are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
+
+Because survey regions often don't have a clean square or rectangular shape, some of the pixels on the sides of the survey FITS image commonly don't have any data and are blank (see @ref{Blank pixels}).
+So when the catalog was not generated from the input image, it often happens 
that the image does not have data over some of the points.
+
+When the given center of a crop falls in such regions or outside the dataset, 
and this option has a non-zero value, no crop will be created.
+Therefore with this option, you can specify a width of a small box (3 pixels 
is often good enough) around the central pixel of the cropped image.
+You can check which crops were created and which weren't from the command-line 
(if @option{--quiet} was not called, see @ref{Operating mode options}), or in 
Crop's log file (see @ref{Crop output}).
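+
+For example, to skip any crop whose central 2 arcseconds contain blank pixels (the file and column names here are only illustrative):
+
+@example
+$ astcrop --mode=wcs --catalog=targets.fits --coordcol=RA \
+          --coordcol=DEC --width=10/3600 --checkcenter=2/3600 \
+          image.fits
+@end example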
 
 @item -p STR
 @itemx --suffix=STR
-The suffix (or post-fix) of the output files for when you want all the
-cropped images to have a special ending. One case where this might be
-helpful is when besides the science images, you want the weight images (or
-exposure maps, which are also distributed with survey images) of the
-cropped regions too. So in one run, you can set the input images to the
-science images and @option{--suffix=_s.fits}. In the next run you can set
-the weight images as input and @option{--suffix=_w.fits}.
+The suffix (or post-fix) of the output files for when you want all the cropped 
images to have a special ending.
+One case where this might be helpful is when, besides the science images, you also want the weight images (or exposure maps, which are also distributed with survey images) of the cropped regions.
+So in one run, you can set the input images to the science images and 
@option{--suffix=_s.fits}.
+In the next run you can set the weight images as input and 
@option{--suffix=_w.fits}.
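+
+A sketch of the two runs described above (with hypothetical file names) could thus be:
+
+@example
+$ astcrop --mode=wcs --catalog=cat.fits --suffix=_s.fits sci-*.fits
+$ astcrop --mode=wcs --catalog=cat.fits --suffix=_w.fits wht-*.fits
+@end example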
 
 @item -b
 @itemx --noblank
-Pixels outside of the input image that are in the crop box will not be
-used. By default they are filled with blank values (depending on type), see
-@ref{Blank pixels}. This option only applies only in Image mode, see
-@ref{Crop modes}.
+Pixels outside of the input image that are in the crop box will not be used.
+By default they are filled with blank values (depending on type), see 
@ref{Blank pixels}.
+This option only applies in Image mode, see @ref{Crop modes}.
 
 @item -z
 @itemx --zeroisnotblank
-In float or double images, it is common to give the value of zero to
-blank pixels. If the input image type is one of these two types, such
-pixels will also be considered as blank. You can disable this behavior
-with this option, see @ref{Blank pixels}.
+In float or double images, it is common to give the value of zero to blank 
pixels.
+If the input image type is one of these two types, such pixels will also be 
considered as blank.
+You can disable this behavior with this option, see @ref{Blank pixels}.
 
 @end table
 
@@ -12742,9 +9443,8 @@ Operating mode options:
 
 @item -O STR
 @itemx --mode=STR
-Operate in Image mode or WCS mode when the input coordinates can be both
-image or WCS. The value must either be @option{img} or @option{wcs}, see
-@ref{Crop modes} for a full description.
+Operate in Image mode or WCS mode when the input coordinates could be either image or WCS.
+The value must either be @option{img} or @option{wcs}, see @ref{Crop modes} 
for a full description.
 
 @end table
 
@@ -12760,42 +9460,28 @@ on how many crops were requested, see @ref{Crop modes}:
 
 @itemize
 @item
-When a catalog is given, the value of the @option{--output} (see
-@ref{Common options}) will be read as the directory to store the output
-cropped images. Hence if it doesn't already exist, Crop will abort with an
-error of a ``No such file or directory'' error.
+When a catalog is given, the value of @option{--output} (see @ref{Common options}) will be read as the directory to store the output cropped images.
+Hence if it doesn't already exist, Crop will abort with a ``No such file or directory'' error.
 
-The crop file names will consist of two parts: a variable part (the row
-number of each target starting from 1) along with a fixed string which you
-can set with the @option{--suffix} option. Optionally, you may also use the
-@option{--namecol} option to define a column in the input catalog to use as
-the file name instead of numbers.
+The crop file names will consist of two parts: a variable part (the row number 
of each target starting from 1) along with a fixed string which you can set 
with the @option{--suffix} option.
+Optionally, you may also use the @option{--namecol} option to define a column 
in the input catalog to use as the file name instead of numbers.
 
 @item
-When only one crop is desired, the value to @option{--output} will be read
-as a file name. If no output is specified or if it is a directory, the
-output file name will follow the automatic output names of Gnuastro, see
-@ref{Automatic output}: The string given to @option{--suffix} will be
-replaced with the @file{.fits} suffix of the input.
+When only one crop is desired, the value to @option{--output} will be read as 
a file name.
+If no output is specified or if it is a directory, the output file name will follow the automatic output names of Gnuastro, see @ref{Automatic output}: the @file{.fits} suffix of the input will be replaced with the string given to @option{--suffix}.
 @end itemize
 
-The header of each output cropped image will contain the names of the input
-image(s) it was cut from. If a name is longer than the 70 character space
-that the FITS standard allows for header keyword values, the name will be
-cut into several keywords from the nearest slash (@key{/}). The keywords
-have the following format: @command{ICFn_m} (for Crop File). Where
-@command{n} is the number of the image used in this crop and @command{m} is
-the part of the name (it can be broken into multiple keywords). Following
-the name is another keyword named @command{ICFnPIX} which shows the pixel
-range from that input image in the same syntax as @ref{Crop section
-syntax}. So this string can be directly given to the @option{--section}
-option later.
-
-Once done, a log file can be created in the current directory with the
-@code{--log} option. This file will have three columns and the same number
-of rows as the number of cropped images. There are also comments on the top
-of the log file explaining basic information about the run and descriptions
-for the columns. A short description of the columns is also given below:
+The header of each output cropped image will contain the names of the input 
image(s) it was cut from.
+If a name is longer than the 70 character space that the FITS standard allows 
for header keyword values, the name will be cut into several keywords from the 
nearest slash (@key{/}).
+The keywords have the following format: @command{ICFn_m} (for Crop File), where @command{n} is the number of the image used in this crop and @command{m} is the part of the name (which can be broken into multiple keywords).
+Following the name is another keyword named @command{ICFnPIX} which shows the 
pixel range from that input image in the same syntax as @ref{Crop section 
syntax}.
+So this string can be directly given to the @option{--section} option later.
+
+Once done, a log file can be created in the current directory with the 
@code{--log} option.
+This file will have three columns and the same number of rows as the number of 
cropped images.
+There are also comments on the top of the log file explaining basic 
information about the run and descriptions for the columns.
+A short description of the columns is also given below:
 
 @enumerate
 @item
@@ -12803,11 +9489,8 @@ The cropped image file name for that row.
 @item
 The number of input images that were used to create that image.
 @item
-A @code{0} if the central few pixels (value to the @option{--checkcenter}
-option) are blank and @code{1} if they aren't. When the crop was not
-defined by its center (see @ref{Crop modes}), or @option{--checkcenter} was
-given a value of 0 (see @ref{Invoking astcrop}), the center will not be
-checked and this column will be given a value of @code{-1}.
+A @code{0} if the central few pixels (value to the @option{--checkcenter} 
option) are blank and @code{1} if they aren't.
+When the crop was not defined by its center (see @ref{Crop modes}), or 
@option{--checkcenter} was given a value of 0 (see @ref{Invoking astcrop}), the 
center will not be checked and this column will be given a value of @code{-1}.
 @end enumerate
 
 
@@ -12832,18 +9515,12 @@ checked and this column will be given a value of 
@code{-1}.
 @node Arithmetic, Convolve, Crop, Data manipulation
 @section Arithmetic
 
-It is commonly necessary to do operations on some or all of the elements of
-a dataset independently (pixels in an image). For example, in the reduction
-of raw data it is necessary to subtract the Sky value (@ref{Sky value})
-from each image image. Later (once the images as warped into a single grid
-using Warp for example, see @ref{Warp}), the images are co-added (the
-output pixel grid is the average of the pixels of the individual input
-images). Arithmetic is Gnuastro's program for such operations on your
-datasets directly from the command-line. It currently uses the reverse
-polish or post-fix notation, see @ref{Reverse polish notation} and will work
-on the native data types of the input images/data to reduce CPU and RAM
-resources, see @ref{Numeric data types}. For more information on how to run
-Arithmetic, please see @ref{Invoking astarithmetic}.
+It is commonly necessary to do operations on some or all of the elements of a 
dataset independently (pixels in an image).
+For example, in the reduction of raw data it is necessary to subtract the Sky value (@ref{Sky value}) from each image.
+Later (once the images are warped into a single grid using Warp for example, see @ref{Warp}), the images are co-added (the output pixel grid is the average of the pixels of the individual input images).
+Arithmetic is Gnuastro's program for such operations on your datasets directly 
from the command-line.
+It currently uses the reverse polish or post-fix notation (see @ref{Reverse polish notation}) and will work on the native data types of the input images/data to reduce CPU and RAM resources, see @ref{Numeric data types}.
+For more information on how to run Arithmetic, please see @ref{Invoking 
astarithmetic}.
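+
+For example, subtracting a Sky image from a science image could look like the following sketch (the file names here are hypothetical, and depending on your files you may also need to specify the HDUs with @option{--hdu}):
+
+@example
+$ astarithmetic image.fits sky.fits - --output=sky-subtracted.fits
+@end example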
 
 
 @menu
@@ -12857,63 +9534,39 @@ Arithmetic, please see @ref{Invoking astarithmetic}.
 
 @cindex Post-fix notation
 @cindex Reverse Polish Notation
-The most common notation for arithmetic operations is the
-@url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where
-the operator goes between the two operands, for example @mymath{4+5}. While
-the infix notation is the preferred way in most programming languages,
-currently the Gnuastro's program (in particular Arithmetic and Table, when
-doing column arithmetic) do not use it. This is because it will require
-parenthesis which can complicate the implementation of the code. In the
-near future we do plan to also allow this
-notation@footnote{@url{https://savannah.gnu.org/task/index.php?13867}}, but
-for the time being (due to time constraints on the developers), arithmetic
-operations can only be done in the post-fix notation (also known as
-@url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish
-notation}). The Wikipedia article provides some excellent explanation on
-this notation but here we will give a short summary here for
-self-sufficiency.
-
-In the post-fix notation, the operator is placed after the operands, as we
-will see below this removes the need to define parenthesis for most
-ordinary operators. For example, instead of writing @command{5+6}, we write
-@command{5 6 +}. To easily understand how this notation works, you can
-think of each operand as a node in a ``last-in-first-out'' stack. Every
-time an operator is confronted, the operator pops the number of operands it
-needs from the top of the stack (so they don't exist in the stack any
-more), does its operation and pushes the result back on top of the
-stack. So if you want the average of 5 and 6, you would write: @command{5 6
-+ 2 /}. The operations that are done are:
+The most common notation for arithmetic operations is the 
@url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the 
operator goes between the two operands, for example @mymath{4+5}.
+While the infix notation is the preferred way in most programming languages, Gnuastro's programs (in particular Arithmetic, and Table when doing column arithmetic) currently do not use it.
+This is because it would require parentheses, which can complicate the implementation of the code.
+In the near future we do plan to also allow this 
notation@footnote{@url{https://savannah.gnu.org/task/index.php?13867}}, but for 
the time being (due to time constraints on the developers), arithmetic 
operations can only be done in the post-fix notation (also known as 
@url{https://en.wikipedia.org/wiki/Reverse_Polish_notation, reverse polish 
notation}).
+The Wikipedia article provides some excellent explanation of this notation, but we will give a short summary here for self-sufficiency.
+
+In the post-fix notation, the operator is placed after the operands; as we will see below, this removes the need for parentheses with most ordinary operators.
+For example, instead of writing @command{5+6}, we write @command{5 6 +}.
+To easily understand how this notation works, you can think of each operand as 
a node in a ``last-in-first-out'' stack.
+Every time an operator is confronted, the operator pops the number of operands 
it needs from the top of the stack (so they don't exist in the stack any more), 
does its operation and pushes the result back on top of the stack.
+So if you want the average of 5 and 6, you would write: @command{5 6 + 2 /}.
+The operations that are done are:
 
 @enumerate
 @item
-@command{5} is an operand, so it is pushed to the top of the stack (which
-is initially empty).
+@command{5} is an operand, so it is pushed to the top of the stack (which is 
initially empty).
 @item
 @command{6} is an operand, so it is pushed to the top of the stack.
 @item
-@command{+} is a @emph{binary} operator, so it will pop the top two
-elements of the stack out of it, and perform addition on them (the order is
-@mymath{5+6} in the example above). The result is @command{11} which is
-pushed to the top of the stack.
+@command{+} is a @emph{binary} operator, so it will pop the top two elements 
of the stack out of it, and perform addition on them (the order is @mymath{5+6} 
in the example above).
+The result is @command{11} which is pushed to the top of the stack.
 @item
+@command{2} is an operand, so it is pushed onto the top of the stack.
 @item
-@command{/} is a binary operator, so pull out the top two elements of
-the stack (top-most is @command{2}, then @command{11}) and divide the
-second one by the first.
+@command{/} is a binary operator, so it pops the top two elements of the stack (top-most is @command{2}, then @command{11}) and divides the second one by the first.
 @end enumerate
 
-In the Arithmetic program, the operands can be FITS images or numbers (see
-@ref{Invoking astarithmetic}). In Table's column arithmetic, they can be
-any column or a number (see @ref{Column arithmetic}).
+In the Arithmetic program, the operands can be FITS images or numbers (see 
@ref{Invoking astarithmetic}).
+In Table's column arithmetic, they can be any column or a number (see 
@ref{Column arithmetic}).
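+
+For example, you can try the average computed above directly on the command-line; with only plain numbers as operands, the result should simply be printed:
+
+@example
+$ astarithmetic 5 6 + 2 /
+@end example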
 
-With this notation, very complicated procedures can be created without the
-need for parenthesis or worrying about precedence. Even functions which
-take an arbitrary number of arguments can be defined in this notation. This
-is a very powerful notation and is used in languages like Postscript
-@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a
-little more on the Postscript language.} which produces PDF files when
-compiled.
+With this notation, very complicated procedures can be created without the need for parentheses or worrying about precedence.
+Even functions which take an arbitrary number of arguments can be defined in 
this notation.
+This is a very powerful notation and is used in languages like Postscript@footnote{See the EPS and PDF part of @ref{Recognized file formats} for a little more on the Postscript language.}, which produces PDF files when compiled.
 
 
 
@@ -12922,14 +9575,11 @@ compiled.
 @node Arithmetic operators, Invoking astarithmetic, Reverse polish notation, 
Arithmetic
 @subsection Arithmetic operators
 
-The recognized operators in Arithmetic are listed below. See @ref{Reverse
-polish notation} for more on how the operators and operands should be
-ordered on the command-line. The operands to all operators can be a data
-array (for example a FITS image) or a number, the output will be an array
-or number according to the inputs. For example a number multiplied by an
-array will produce an array. The conditional operators will return pixel,
-or numerical values of 0 (false) or 1 (true) and stored in an
-@code{unsigned char} data type (see @ref{Numeric data types}).
+The recognized operators in Arithmetic are listed below.
+See @ref{Reverse polish notation} for more on how the operators and operands 
should be ordered on the command-line.
+The operands to all operators can be a data array (for example a FITS image) or a number; the output will be an array or number according to the inputs.
+For example, a number multiplied by an array will produce an array.
+The conditional operators will return pixel or numerical values of 0 (false) or 1 (true), stored in an @code{unsigned char} data type (see @ref{Numeric data types}).
 
 @table @command
 
@@ -12940,98 +9590,72 @@ Addition, so ``@command{4 5 +}'' is equivalent to 
@mymath{4+5}.
 Subtraction, so ``@command{4 5 -}'' is equivalent to @mymath{4-5}.
 
 @item x
-Multiplication, so ``@command{4 5 x}'' is equivalent to
-@mymath{4\times5}.
+Multiplication, so ``@command{4 5 x}'' is equivalent to @mymath{4\times5}.
 
 @item /
 Division, so ``@command{4 5 /}'' is equivalent to @mymath{4/5}.
 
 @item %
-Modulo (remainder), so ``@command{3 2 %}'' is equivalent to
-@mymath{1}. Note that the modulo operator only works on integer types.
+Modulo (remainder), so ``@command{3 2 %}'' is equivalent to @mymath{1}.
+Note that the modulo operator only works on integer types.
 
 @item abs
-Absolute value of first operand, so ``@command{4 abs}'' is
-equivalent to @mymath{|4|}.
+Absolute value of first operand, so ``@command{4 abs}'' is equivalent to 
@mymath{|4|}.
 
 @item pow
-First operand to the power of the second, so ``@command{4.3 5f pow}'' is
-equivalent to @mymath{4.3^{5}}. Currently @code{pow} will only work on
-single or double precision floating point numbers or images. To be sure
-that a number is read as a floating point (even if it doesn't have any
-non-zero decimals) put an @code{f} after it.
+First operand to the power of the second, so ``@command{4.3 5f pow}'' is 
equivalent to @mymath{4.3^{5}}.
+Currently @code{pow} will only work on single or double precision floating 
point numbers or images.
+To be sure that a number is read as a floating point (even if it doesn't have any non-zero decimals), put an @code{f} after it.
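+
+For instance, taking the operands straight from the description above, this sketch should compute @mymath{4.3^{5}}:
+
+@example
+$ astarithmetic 4.3 5f pow
+@end example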
 
 @item sqrt
-The square root of the first operand, so ``@command{5 sqrt}'' is equivalent
-to @mymath{\sqrt{5}}. The output will have a floating point type, but its
-precision is determined from the input: if the input is a 64-bit floating
-point, the output will also be 64-bit. Otherwise, the output will be 32-bit
-floating point (see @ref{Numeric data types} for the respective
-precision). Therefore if you require 64-bit precision in estimating the
-square root, convert the input to 64-bit floating point first, for example
-with @code{5 float64 sqrt}.
+The square root of the first operand, so ``@command{5 sqrt}'' is equivalent to 
@mymath{\sqrt{5}}.
+The output will have a floating point type, but its precision is determined 
from the input: if the input is a 64-bit floating point, the output will also 
be 64-bit.
+Otherwise, the output will be 32-bit floating point (see @ref{Numeric data 
types} for the respective precision).
+Therefore if you require 64-bit precision in estimating the square root, 
convert the input to 64-bit floating point first, for example with @code{5 
float64 sqrt}.
 
 @item log
-Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to
-@mymath{ln(4)}. The output type is determined from the input, see the
-explanation under @command{sqrt} for more.
+Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to 
@mymath{ln(4)}.
+The output type is determined from the input; see the explanation under @command{sqrt} for more.
 
 @item log10
-Base-10 logarithm of first operand, so ``@command{4 log10}'' is equivalent
-to @mymath{\log(4)}. The output type is determined from the input, see the
-explanation under @command{sqrt} for more.
+Base-10 logarithm of first operand, so ``@command{4 log10}'' is equivalent to 
@mymath{\log(4)}.
+The output type is determined from the input; see the explanation under @command{sqrt} for more.
 
 @item minvalue
-Minimum (non-blank) value in the top operand on the stack, so
-``@command{a.fits minvalue}'' will push the minimum pixel value in this
-image onto the stack. Therefore this operator is mainly intended for data
-(for example images), if the top operand is a number, this operator just
-returns it without any change. So note that when this operator acts on a
-single image, the output will no longer be an image, but a number. The
-output of this operand is in the same type as the input.
+Minimum (non-blank) value in the top operand on the stack, so 
``@command{a.fits minvalue}'' will push the minimum pixel value in this image 
onto the stack.
+Therefore this operator is mainly intended for data (for example images); if the top operand is a number, this operator just returns it without any change.
+So note that when this operator acts on a single image, the output will no 
longer be an image, but a number.
+The output of this operator has the same type as the input.
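+
+As an illustrative sketch (with a hypothetical @file{a.fits}), this behavior lets you use an image's own statistics on it, for example subtracting its minimum from every pixel:
+
+@example
+$ astarithmetic a.fits a.fits minvalue -
+@end example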
 
 @item maxvalue
-Maximum (non-blank) value of first operand in the same type, similar to
-@command{minvalue}.
+Maximum (non-blank) value of first operand in the same type, similar to 
@command{minvalue}.
 
 @item numbervalue
-Number of non-blank elements in first operand in the @code{uint64} type,
-similar to @command{minvalue}.
+Number of non-blank elements in first operand in the @code{uint64} type, 
similar to @command{minvalue}.
 
 @item sumvalue
-Sum of non-blank elements in first operand in the @code{float32} type,
-similar to @command{minvalue}.
+Sum of non-blank elements in first operand in the @code{float32} type, similar 
to @command{minvalue}.
 
 @item meanvalue
-Mean value of non-blank elements in first operand in the @code{float32}
-type, similar to @command{minvalue}.
+Mean value of non-blank elements in first operand in the @code{float32} type, 
similar to @command{minvalue}.
 
 @item stdvalue
-Standard deviation of non-blank elements in first operand in the
-@code{float32} type, similar to @command{minvalue}.
+Standard deviation of non-blank elements in first operand in the 
@code{float32} type, similar to @command{minvalue}.
 
 @item medianvalue
-Median of non-blank elements in first operand with the same type, similar
-to @command{minvalue}.
+Median of non-blank elements in first operand with the same type, similar to 
@command{minvalue}.
 
 @cindex NaN
 @item min
-For each pixel, find the minimum value in all given datasets. The output
-will have the same type as the input.
+For each pixel, find the minimum value in all given datasets.
+The output will have the same type as the input.
 
-The first popped operand to this operator must be a positive integer number
-which specifies how many further operands should be popped from the
-stack. All the subsequently popped operands must have the same type and
-size. This operator (and all the variable-operand operators similar to it
-that are discussed below) will work in multi-threaded mode unless
-Arithmetic is called with the @option{--numthreads=1} option, see
-@ref{Multi-threaded operations}.
+The first popped operand to this operator must be a positive integer number 
which specifies how many further operands should be popped from the stack.
+All the subsequently popped operands must have the same type and size.
+This operator (and all the variable-operand operators similar to it that are discussed below) will work in multi-threaded mode unless Arithmetic is called with the @option{--numthreads=1} option; see @ref{Multi-threaded operations}.
 
-Each pixel of the output of the @code{min} operator will be given the
-minimum value of the same pixel from all the popped operands/images. For
-example the following command will produce an image with the same size and
-type as the three inputs, but each output pixel value will be the minimum
-of the same pixel's values in all three input images.
+Each pixel of the output of the @code{min} operator will be given the minimum 
value of the same pixel from all the popped operands/images.
+For example the following command will produce an image with the same size and 
type as the three inputs, but each output pixel value will be the minimum of 
the same pixel's values in all three input images.
 
 @example
 $ astarithmetic a.fits b.fits c.fits 3 min
@@ -13044,200 +9668,146 @@ Important notes:
 NaN/blank pixels will be ignored, see @ref{Blank pixels}.
 
 @item
-The output will have the same type as the inputs. This is natural for the
-@command{min} and @command{max} operators, but for other similar operators
-(for example @command{sum}, or @command{average}) the per-pixel operations
-will be done in double precision floating point and then stored back in the
-input type. Therefore, if the input was an integer, C's internal type
-conversion will be used.
+The output will have the same type as the inputs.
+This is natural for the @command{min} and @command{max} operators, but for 
other similar operators (for example @command{sum}, or @command{average}) the 
per-pixel operations will be done in double precision floating point and then 
stored back in the input type.
+Therefore, if the input was an integer, C's internal type conversion will be 
used.
 
 @end itemize
 
 @item max
-For each pixel, find the maximum value in all given datasets. The output
-will have the same type as the input. This operator is called similar to
-the @command{min} operator, please see there for more.
+For each pixel, find the maximum value in all given datasets.
+The output will have the same type as the input.
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item number
-For each pixel count the number of non-blank pixels in all given
-datasets. The output will be an unsigned 32-bit integer datatype (see
-@ref{Numeric data types}). This operator is called similar to the
-@command{min} operator, please see there for more.
+For each pixel count the number of non-blank pixels in all given datasets.
+The output will be an unsigned 32-bit integer datatype (see @ref{Numeric data 
types}).
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item sum
-For each pixel, calculate the sum in all given datasets. The output will
-have the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, calculate the sum in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item mean
-For each pixel, calculate the mean in all given datasets. The output will
-have the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, calculate the mean in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item std
-For each pixel, find the standard deviation in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{min} operator, please see there
-for more.
+For each pixel, find the standard deviation in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item median
-For each pixel, find the median in all given datasets. The output will have
-the a single-precision (32-bit) floating point type. This operator is
-called similar to the @command{min} operator, please see there for more.
+For each pixel, find the median in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{min} operator, please see 
there for more.
 
 @item sigclip-number
-For each pixel, find the sigma-clipped number (after removing outliers) in
-all given datasets. The output will have the an unsigned 32-bit integer
-type (see @ref{Numeric data types}).
-
-This operator will combine the specified number of inputs into a single
-output that contains the number of remaining elements after
-@mymath{\sigma}-clipping on each element/pixel (for more on
-@mymath{\sigma}-clipping, see @ref{Sigma clipping}). This operator is very
-similar to @command{min}, with the exception that it expects two operands
-(parameters for sigma-clipping) before the total number of inputs. The
-first popped operand is the termination criteria and the second is the
-multiple of @mymath{\sigma}.
-
-For example in the command below, the first popped operand (@command{0.2})
-is the sigma clipping termination criteria. If the termination criteria is
-larger than 1 it is interpreted as the number of clips to do. But if it is
-between 0 and 1, then it is the tolerance level on the standard deviation
-(see @ref{Sigma clipping}). The second popped operand (@command{5}) is the
-multiple of sigma to use in sigma-clipping. The third popped operand
-(@command{10}) is number of datasets that will be used (similar to the
-first popped operand to @command{min}).
+For each pixel, find the sigma-clipped number (after removing outliers) in all 
given datasets.
+The output will have an unsigned 32-bit integer type (see @ref{Numeric data types}).
+
+This operator will combine the specified number of inputs into a single output 
that contains the number of remaining elements after @mymath{\sigma}-clipping 
on each element/pixel (for more on @mymath{\sigma}-clipping, see @ref{Sigma 
clipping}).
+This operator is very similar to @command{min}, with the exception that it 
expects two operands (parameters for sigma-clipping) before the total number of 
inputs.
+The first popped operand is the termination criteria and the second is the 
multiple of @mymath{\sigma}.
+
+For example, in the command below, the first popped operand (@command{0.2}) is the sigma clipping termination criterion.
+If the termination criterion is larger than 1, it is interpreted as the number of clips to do.
+But if it is between 0 and 1, then it is the tolerance level on the standard deviation (see @ref{Sigma clipping}).
+The second popped operand (@command{5}) is the multiple of sigma to use in sigma-clipping.
+The third popped operand (@command{3}) is the number of datasets that will be used (similar to the first popped operand to @command{min}).
 
 @example
 astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-number
 @end example
 
 @item sigclip-median
-For each pixel, find the sigma-clipped median in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{sigclip-number} operator, please
-see there for more.
+For each pixel, find the sigma-clipped median in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{sigclip-number} operator, 
please see there for more.
 
 @item sigclip-mean
-For each pixel, find the sigma-clipped mean in all given datasets. The
-output will have the a single-precision (32-bit) floating point type. This
-operator is called similar to the @command{sigclip-number} operator, please
-see there for more.
+For each pixel, find the sigma-clipped mean in all given datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{sigclip-number} operator, 
please see there for more.
 
 @item sigclip-std
-For each pixel, find the sigma-clipped standard deviation in all given
-datasets. The output will have the a single-precision (32-bit) floating
-point type. This operator is called similar to the @command{sigclip-number}
-operator, please see there for more.
+For each pixel, find the sigma-clipped standard deviation in all given 
datasets.
+The output will have a single-precision (32-bit) floating point type.
+This operator is called similar to the @command{sigclip-number} operator, 
please see there for more.
 
 @item filter-mean
-Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average,
-moving average}) on the input dataset. During mean filtering, each pixel
-(data element) is replaced by the mean value of all its surrounding pixels
-(excluding blank values). The number of surrounding pixels in each
-dimension (to calculate the mean) is determined through the earlier
-operands that have been pushed onto the stack prior to the input
-dataset. The number of necessary operands is determined by the dimensions
-of the input dataset (first popped operand). The order of the dimensions on
-the command-line is the order in FITS format. Here is one example:
+Apply mean filtering (or @url{https://en.wikipedia.org/wiki/Moving_average, 
moving average}) on the input dataset.
+During mean filtering, each pixel (data element) is replaced by the mean value 
of all its surrounding pixels (excluding blank values).
+The number of surrounding pixels in each dimension (to calculate the mean) is 
determined through the earlier operands that have been pushed onto the stack 
prior to the input dataset.
+The number of necessary operands is determined by the dimensions of the input 
dataset (first popped operand).
+The order of the dimensions on the command-line is the order in FITS format.
+Here is one example:
 
 @example
 $ astarithmetic 5 4 image.fits filter-mean
 @end example
 
 @noindent
-In this example, each pixel is replaced by the mean of a 5 by 4 box around
-it. The box is 5 pixels along the first FITS dimension (horizontal when
-viewed in ds9) and 4 pixels along the second FITS dimension (vertical).
+In this example, each pixel is replaced by the mean of a 5 by 4 box around it.
+The box is 5 pixels along the first FITS dimension (horizontal when viewed in 
ds9) and 4 pixels along the second FITS dimension (vertical).
 
-Each pixel will be placed in the center of the box that the mean is
-calculated on. If the given width along a dimension is even, then the
-center is assumed to be between the pixels (not in the center of a
-pixel). When the pixel is close to the edge, the pixels of the box that
-fall outside the image are ignored. Therefore, on the edge, less points
-will be used in calculating the mean.
+Each pixel will be placed in the center of the box that the mean is calculated 
on.
+If the given width along a dimension is even, then the center is assumed to be 
between the pixels (not in the center of a pixel).
+When the pixel is close to the edge, the pixels of the box that fall outside 
the image are ignored.
+Therefore, on the edge, fewer points will be used in calculating the mean.
 
-The final effect of mean filtering is to smooth the input image, it is
-essentially a convolution with a kernel that has identical values for all
-its pixels (is flat), see @ref{Convolution process}.
+The final effect of mean filtering is to smooth the input image; it is essentially a convolution with a kernel that has identical values for all its pixels (is flat), see @ref{Convolution process}.
 
-Note that blank pixels will also be affected by this operator: if there are
-any non-blank elements in the box surrounding a blank pixel, in the
-filtered image, it will have the mean of the non-blank elements, therefore
-it won't be blank any more. If blank elements are important for your
-analysis, you can use the @code{isblank} with the @code{where} operator to
-set them back to blank after filtering.
+Note that blank pixels will also be affected by this operator: if there are any non-blank elements in the box surrounding a blank pixel, then in the filtered image it will have the mean of those non-blank elements and will therefore no longer be blank.
+If blank elements are important for your analysis, you can use @code{isblank} with the @code{where} operator to set them back to blank after filtering.
 
 @item filter-median
-Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering}
-on the input dataset. This is very similar to @command{filter-mean}, except
-that instead of the mean value of the box pixels, the median value is used
-to replace a pixel value. For more on how to use this operator, please see
-@command{filter-mean}.
+Apply @url{https://en.wikipedia.org/wiki/Median_filter, median filtering} on 
the input dataset.
+This is very similar to @command{filter-mean}, except that instead of the mean 
value of the box pixels, the median value is used to replace a pixel value.
+For more on how to use this operator, please see @command{filter-mean}.
 
-The median is less susceptible to outliers compared to the mean. As a
-result, after median filtering, the pixel values will be more discontinuous
-than mean filtering.
+The median is less susceptible to outliers than the mean.
+As a result, after median filtering, the pixel values will be more discontinuous than after mean filtering.
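+
+For instance, mirroring the @command{filter-mean} example above, this sketch should replace each pixel of a hypothetical @file{image.fits} with the median of a 5 by 4 box around it:
+
+@example
+$ astarithmetic 5 4 image.fits filter-median
+@end example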
 
 @item filter-sigclip-mean
-Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset. This
-is very similar to @code{filter-mean}, except that all outliers (identified
-by the @mymath{\sigma}-clipping algorithm) have been removed, see
-@ref{Sigma clipping} for more on the basics of this algorithm. As described
-there, two extra input parameters are necessary for
-@mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the
-termination criteria. @code{filter-sigclip-mean} therefore needs to pop two
-other operands from the stack after the dimensions of the box.
-
-For example the line below uses the same box size as the example of
-@code{filter-mean}. However, all elements in the box that are iteratively
-beyond @mymath{3\sigma} of the distribution's median are removed from the
-final calculation of the mean until the change in @mymath{\sigma} is less
-than @mymath{0.2}.
+Apply a @mymath{\sigma}-clipped mean filtering onto the input dataset.
+This is very similar to @code{filter-mean}, except that all outliers 
(identified by the @mymath{\sigma}-clipping algorithm) have been removed, see 
@ref{Sigma clipping} for more on the basics of this algorithm.
+As described there, two extra input parameters are necessary for 
@mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the termination 
criteria.
+@code{filter-sigclip-mean} therefore needs to pop two other operands from the 
stack after the dimensions of the box.
+
+For example the line below uses the same box size as the example of 
@code{filter-mean}.
+However, all elements in the box that are iteratively beyond @mymath{3\sigma} 
of the distribution's median are removed from the final calculation of the mean 
until the change in @mymath{\sigma} is less than @mymath{0.2}.
 
 @example
 $ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-mean
 @end example
 
-The median (which needs a sorted dataset) is necessary for
-@mymath{\sigma}-clipping, therefore @code{filter-sigclip-mean} can be
-significantly slower than @code{filter-mean}. However, if there are strong
-outliers in the dataset that you want to ignore (for example emission lines
-on a spectrum when finding the continuum), this is a much better solution.
+The median (which needs a sorted dataset) is necessary for @mymath{\sigma}-clipping; therefore @code{filter-sigclip-mean} can be significantly slower than @code{filter-mean}.
+However, if there are strong outliers in the dataset that you want to ignore 
(for example emission lines on a spectrum when finding the continuum), this is 
a much better solution.
 
 @item filter-sigclip-median
-Apply a @mymath{\sigma}-clipped median filtering onto the input
-dataset. This operator and its necessary operands are almost identical to
-@code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the
-median value (which is less affected by outliers than the mean) is added
-back to the stack.
+Apply a @mymath{\sigma}-clipped median filtering onto the input dataset.
+This operator and its necessary operands are almost identical to 
@code{filter-sigclip-mean}, except that after @mymath{\sigma}-clipping, the 
median value (which is less affected by outliers than the mean) is added back 
to the stack.
 
 @item interpolate-medianngb
-Interpolate all the blank elements of the second popped operand with the
-median of its nearest non-blank neighbors. The number of the nearest
-non-blank neighbors used to calculate the median is given by the first
-popped operand. Note that the distance of the nearest non-blank neighbors
-is irrelevant in this interpolation.
+Interpolate all the blank elements of the second popped operand with the 
median of its nearest non-blank neighbors.
+The number of the nearest non-blank neighbors used to calculate the median is 
given by the first popped operand.
+Note that the distance of the nearest non-blank neighbors is irrelevant in 
this interpolation.
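+
+As a minimal sketch (with a hypothetical @file{image.fits}), the command below should fill each blank pixel with the median of its 9 nearest non-blank neighbors:
+
+@example
+$ astarithmetic image.fits 9 interpolate-medianngb
+@end example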
 
 @item collapse-sum
-Collapse the given dataset (second popped operand), by summing all elements
-along the first popped operand (a dimension in FITS standard: counting from
-one, from fastest dimension). The returned dataset has one dimension less
-compared to the input.
-
-The output will have a double-precision floating point type irrespective of
-the input dataset's type. Doing the operation in double-precision (64-bit)
-floating point will help the collapse (summation) be affected less by
-floating point errors. But afterwards, single-precision floating points are
-usually enough in real (noisy) datasets. So depending on the type of the
-input and its nature, it is recommended to use one of the type conversion
-operators on the returned dataset.
+Collapse the given dataset (second popped operand) by summing all elements along the first popped operand (a dimension in the FITS standard: counting from one, from the fastest dimension).
+The returned dataset has one dimension less compared to the input.
+
+The output will have a double-precision floating point type irrespective of 
the input dataset's type.
+Doing the operation in double-precision (64-bit) floating point will help the 
collapse (summation) be affected less by floating point errors.
+But afterwards, single-precision floating points are usually enough in real 
(noisy) datasets.
+So depending on the type of the input and its nature, it is recommended to use 
one of the type conversion operators on the returned dataset.
 
 @cindex World Coordinate System (WCS)
-If any WCS is present, the returned dataset will also lack the respective
-dimension in its WCS matrix. Therefore, when the WCS is important for later
-processing, be sure that the input is aligned with the respective axes:
-all non-diagonal elements in the WCS matrix are zero.
+If any WCS is present, the returned dataset will also lack the respective 
dimension in its WCS matrix.
+Therefore, when the WCS is important for later processing, be sure that the 
input is aligned with the respective axes: all non-diagonal elements in the WCS 
matrix are zero.
 
 @cindex IFU
 @cindex Data cubes
@@ -13245,438 +9815,322 @@ all non-diagonal elements in the WCS matrix are zero.
 @cindex Cubes (3D data)
 @cindex Narrow-band image
 @cindex Integral field unit (IFU)
-One common application of this operator is the creation of pseudo
-broad-band or narrow-band 2D images from 3D data cubes. For example
-integral field unit (IFU) data products that have two spatial dimensions
-(first two FITS dimensions) and one spectral dimension (third FITS
-dimension). The command below will collapse the whole third dimension into
-a 2D array the size of the first two dimensions, and then convert the
-output to single-precision floating point (as discussed above).
+One common application of this operator is the creation of pseudo broad-band 
or narrow-band 2D images from 3D data cubes.
+For example, integral field unit (IFU) data products have two spatial dimensions (the first two FITS dimensions) and one spectral dimension (the third FITS dimension).
+The command below will collapse the whole third dimension into a 2D array the 
size of the first two dimensions, and then convert the output to 
single-precision floating point (as discussed above).
 
 @example
 $ astarithmetic cube.fits 3 collapse-sum float32
 @end example
 
 @item collapse-mean
-Similar to @option{collapse-sum}, but the returned dataset will be the mean
-value along the collapsed dimension, not the sum.
+Similar to @option{collapse-sum}, but the returned dataset will be the mean 
value along the collapsed dimension, not the sum.
 
 @item collapse-number
-Similar to @option{collapse-sum}, but the returned dataset will be the
-number of non-blank values along the collapsed dimension. The output will
-have a 32-bit signed integer type. If the input dataset doesn't have blank
-values, all the elements in the returned dataset will have a single value
-(the length of the collapsed dimension). Therefore this is mostly relevant
-when there are blank values in the dataset.
+Similar to @option{collapse-sum}, but the returned dataset will be the number 
of non-blank values along the collapsed dimension.
+The output will have a 32-bit signed integer type.
+If the input dataset doesn't have blank values, all the elements in the 
returned dataset will have a single value (the length of the collapsed 
dimension).
+Therefore this is mostly relevant when there are blank values in the dataset.
 
 @item collapse-min
-Similar to @option{collapse-sum}, but the returned dataset will have the
-same numeric type as the input and will contain the minimum value for each
-pixel along the collapsed dimension.
+Similar to @option{collapse-sum}, but the returned dataset will have the same 
numeric type as the input and will contain the minimum value for each pixel 
along the collapsed dimension.
 
 @item collapse-max
-Similar to @option{collapse-sum}, but the returned dataset will have the
-same numeric type as the input and will contain the maximum value for each
-pixel along the collapsed dimension.
+Similar to @option{collapse-sum}, but the returned dataset will have the same 
numeric type as the input and will contain the maximum value for each pixel 
along the collapsed dimension.
 
 @item unique
-Remove all duplicate (and blank) elements from the first popped
-operand. The unique elements of the dataset will be stored in a
-single-dimensional dataset.
+Remove all duplicate (and blank) elements from the first popped operand.
+The unique elements of the dataset will be stored in a single-dimensional 
dataset.
 
-Recall that by default, single-dimensional datasets are stored as a table
-column in the output. But you can use @option{--onedasimage} or
-@option{--onedonstdout} to respectively store them as a single-dimensional
-FITS array/image, or to print them on the standard output.
+Recall that by default, single-dimensional datasets are stored as a table 
column in the output.
+But you can use @option{--onedasimage} or @option{--onedonstdout} to 
respectively store them as a single-dimensional FITS array/image, or to print 
them on the standard output.
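+
+For instance (with a hypothetical @file{image.fits}), the unique values can be printed directly on the command-line with:
+
+@example
+$ astarithmetic image.fits unique --onedonstdout
+@end example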
 
 @item erode
 @cindex Erosion
-Erode the foreground pixels (with value @code{1}) of the input dataset
-(second popped operand). The first popped operand is the connectivity (see
-description in @command{connected-components}). Erosion is simply a
-flipping of all foreground pixels (to background; with value @code{0}) that
-are ``touching'' background pixels. ``Touching'' is defined by the
-connectivity. In effect, this carves off the outer borders of the
-foreground, making them thinner. This operator assumes a binary dataset
-(all pixels are @code{0} and @code{1}).
+Erode the foreground pixels (with value @code{1}) of the input dataset (second 
popped operand).
+The first popped operand is the connectivity (see description in 
@command{connected-components}).
+Erosion is simply a flipping of all foreground pixels (to background; with 
value @code{0}) that are ``touching'' background pixels.
+``Touching'' is defined by the connectivity.
+In effect, this carves off the outer borders of the foreground, making them 
thinner.
+This operator assumes a binary dataset (all pixels are @code{0} and @code{1}).
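+
+As a minimal sketch (with a hypothetical binary @file{binary.fits} and the weakest connectivity of 1):
+
+@example
+$ astarithmetic binary.fits 1 erode
+@end example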
 
 @item dilate
 @cindex Dilation
-Dilate the foreground pixels (with value @code{1}) of the input dataset
-(second popped operand). The first popped operand is the connectivity (see
-description in @command{connected-components}). Erosion is simply a
-flipping of all background pixels (with value @code{0}) to foreground that
-are ``touching'' foreground pixels. ``Touching'' is defined by the
-connectivity. In effect, this expands the outer borders of the
-foreground. This operator assumes a binary dataset (all pixels are @code{0}
-and @code{1}).
+Dilate the foreground pixels (with value @code{1}) of the input dataset 
(second popped operand).
+The first popped operand is the connectivity (see description in 
@command{connected-components}).
+Dilation is simply a flipping of all background pixels (with value @code{0}) to foreground that are ``touching'' foreground pixels.
+``Touching'' is defined by the connectivity.
+In effect, this expands the outer borders of the foreground.
+This operator assumes a binary dataset (all pixels are @code{0} and @code{1}).
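+
+Similarly, a sketch of dilation on a hypothetical binary @file{binary.fits} (here with the strongest 2D connectivity of 2):
+
+@example
+$ astarithmetic binary.fits 2 dilate
+@end example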
 
 @item connected-components
 @cindex Connected components
-Find the connected components in the input dataset (second popped
-operand). The first popped is the connectivity used in the connected
-components algorithm. The second popped operand is the dataset where
-connected components are to be found. It is assumed to be a binary image
-(with values of 0 or 1). It must have an 8-bit unsigned integer type which
-is the format produced by conditional operators. This operator will return
-a labeled dataset where the non-zero pixels in the input will be labeled
-with a counter (starting from 1).
-
-The connectivity is a number between 1 and the number of dimensions in the
-dataset (inclusive). 1 corresponds to the weakest (symmetric) connectivity
-between elements and the number of dimensions the strongest. For example on
-a 2D image, a connectivity of 1 corresponds to 4-connected neighbors and 2
-corresponds to 8-connected neighbors.
-
-One example usage of this operator can be the identification of regions
-above a certain threshold, as in the command below. With this command,
-Arithmetic will first separate all pixels greater than 100 into a binary
-image (where pixels with a value of 1 are above that value). Afterwards, it
-will label all those that are connected.
+Find the connected components in the input dataset (second popped operand).
+The first popped operand is the connectivity used in the connected components algorithm.
+The second popped operand is the dataset where connected components are to be 
found.
+It is assumed to be a binary image (with values of 0 or 1).
+It must have an 8-bit unsigned integer type which is the format produced by 
conditional operators.
+This operator will return a labeled dataset where the non-zero pixels in the 
input will be labeled with a counter (starting from 1).
+
+The connectivity is a number between 1 and the number of dimensions in the 
dataset (inclusive).
+A connectivity of 1 corresponds to the weakest (symmetric) connectivity between elements, and a connectivity equal to the number of dimensions corresponds to the strongest.
+For example on a 2D image, a connectivity of 1 corresponds to 4-connected 
neighbors and 2 corresponds to 8-connected neighbors.
+
+One example usage of this operator can be the identification of regions above 
a certain threshold, as in the command below.
+With this command, Arithmetic will first separate all pixels greater than 100 
into a binary image (where pixels with a value of 1 are above that value).
+Afterwards, it will label all those that are connected.
 
 @example
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-If your input dataset doesn't have a binary type, but you know all its
-values are 0 or 1, you can use the @code{uint8} operator (below) to convert
-it to binary.
+If your input dataset doesn't have a binary type, but you know all its values 
are 0 or 1, you can use the @code{uint8} operator (below) to convert it to 
binary.
 
 @item fill-holes
-Flip background (0) pixels surrounded by foreground (1) in a binary
-dataset. This operator takes two operands (similar to
-@code{connected-components}): the first popped operand is the connectivity
-(to define a hole) and the second is the binary (0 or 1 valued) dataset to
-fill holes in.
+Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
+This operator takes two operands (similar to @code{connected-components}): the 
first popped operand is the connectivity (to define a hole) and the second is 
the binary (0 or 1 valued) dataset to fill holes in.
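+
+As a minimal sketch (with a hypothetical binary @file{binary.fits}, defining holes with a connectivity of 1):
+
+@example
+$ astarithmetic binary.fits 1 fill-holes
+@end example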
 
 @item invert
-Invert an unsigned integer dataset. This is the only operator that ignores
-blank values (which are set to be the maximum values in the unsigned
-integer types).
-
-This is useful in cases where the target(s) has(have) been imaged in
-absorption as raw formats (which are unsigned integer types). With this
-option, the maximum value for the given type will be subtracted from each
-pixel value, thus ``inverting'' the image, so the target(s) can be treated
-as emission. This can be useful when the higher-level analysis
-methods/tools only work on emission (positive skew in the noise, not
-negative).
+Invert an unsigned integer dataset.
+This is the only operator that ignores blank values (which are set to be the 
maximum values in the unsigned integer types).
+
+This is useful in cases where the target(s) have been imaged in absorption in raw formats (which are unsigned integer types).
+With this option, the maximum value for the given type will be subtracted from 
each pixel value, thus ``inverting'' the image, so the target(s) can be treated 
as emission.
+This can be useful when the higher-level analysis methods/tools only work on 
emission (positive skew in the noise, not negative).
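+
+As a minimal sketch (with a hypothetical unsigned-integer @file{absorption.fits}):
+
+@example
+$ astarithmetic absorption.fits invert
+@end example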
 
 @item lt
-Less than: If the second popped (or left operand in infix notation, see
-@ref{Reverse polish notation}) value is smaller than the first popped
-operand, then this function will return a value of 1, otherwise it will
-return a value of 0. If both operands are images, then all the pixels will
-be compared with their counterparts in the other image. If only one operand
-is an image, then all the pixels will be compared with the single value
-(number) of the other operand. Finally if both are numbers, then the output
-is also just one number (0 or 1). When the output is not a single number,
-it will be stored as an @code{unsigned char} type.
+Less than: If the second popped (or left operand in infix notation, see 
@ref{Reverse polish notation}) value is smaller than the first popped operand, 
then this function will return a value of 1, otherwise it will return a value 
of 0.
+If both operands are images, then all the pixels will be compared with their 
counterparts in the other image.
+If only one operand is an image, then all the pixels will be compared with the 
single value (number) of the other operand.
+Finally if both are numbers, then the output is also just one number (0 or 1).
+When the output is not a single number, it will be stored as an @code{unsigned 
char} type.
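+
+For instance, this sketch (with a hypothetical @file{image.fits}) should produce a binary image with a value of 1 for every pixel below 100:
+
+@example
+$ astarithmetic image.fits 100 lt
+@end example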
 
 @item le
-Less or equal: similar to @code{lt} (`less than' operator), but returning 1
-when the second popped operand is smaller or equal to the first.
+Less or equal: similar to @code{lt} (`less than' operator), but returning 1 
when the second popped operand is smaller or equal to the first.
 
 @item gt
-Greater than: similar to @code{lt} (`less than' operator), but
-returning 1 when the second popped operand is greater than the first.
+Greater than: similar to @code{lt} (`less than' operator), but returning 1 
when the second popped operand is greater than the first.
 
 @item ge
-Greater or equal: similar to @code{lt} (`less than' operator), but
-returning 1 when the second popped operand is larger or equal to the first.
+Greater or equal: similar to @code{lt} (`less than' operator), but returning 1 
when the second popped operand is larger or equal to the first.
 
 @item eq
-Equality: similar to @code{lt} (`less than' operator), but returning 1 when
-the two popped operands are equal (to double precision floating point
+Equality: similar to @code{lt} (`less than' operator), but returning 1 when 
the two popped operands are equal (to double precision floating point
 accuracy).
 
 @item ne
-Non-Equality: similar to @code{lt} (`less than' operator), but returning 1
-when the two popped operands are @emph{not} equal (to double precision
-floating point accuracy).
+Non-Equality: similar to @code{lt} (`less than' operator), but returning 1 
when the two popped operands are @emph{not} equal (to double precision floating 
point accuracy).
 
 @item and
-Logical AND: returns 1 if both operands have a non-zero value and 0 if both
-are zero. Both operands have to be the same kind: either both images or
-both numbers.
+Logical AND: returns 1 if both operands have a non-zero value and 0 if both 
are zero.
+Both operands have to be the same kind: either both images or both numbers.
 
 @item or
-Logical OR: returns 1 if either one of the operands is non-zero and 0 only
-when both operators are zero. Both operands have to be the same kind:
-either both images or both numbers.
+Logical OR: returns 1 if either one of the operands is non-zero and 0 only when both operands are zero.
+Both operands have to be the same kind: either both images or both numbers.
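+
+As an illustrative sketch, the conditional and logical operators can be chained; with a hypothetical @file{in.fits}, the command below should flag (with 1) the pixels that are between 50 and 100:
+
+@example
+$ astarithmetic in.fits 50 gt in.fits 100 lt and
+@end example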
 
 @item not
-Logical NOT: returns 1 when the operand is zero and 0 when the operand is
-non-zero. The operand can be an image or number, for an image, it is
-applied to each pixel separately.
+Logical NOT: returns 1 when the operand is zero and 0 when the operand is 
non-zero.
+The operand can be an image or number; for an image, it is applied to each pixel separately.
 
 @cindex Blank pixel
 @item isblank
-Test for a blank value (see @ref{Blank pixels}). In essence, this is very
-similar to the conditional operators: the output is either 1 or 0 (see the
-`less than' operator above). The difference is that it only needs one
-operand. Because of the definition of a blank pixel, a blank value is not
-even equal to itself, so you cannot use the equal operator above to select
-blank pixels. See the ``Blank pixels'' box below for more on Blank pixels
-in Arithmetic.
+Test for a blank value (see @ref{Blank pixels}).
+In essence, this is very similar to the conditional operators: the output is 
either 1 or 0 (see the `less than' operator above).
+The difference is that it only needs one operand.
+Because of the definition of a blank pixel, a blank value is not even equal to 
itself, so you cannot use the equal operator above to select blank pixels.
+See the ``Blank pixels'' box below for more on Blank pixels in Arithmetic.
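+
+For instance, this sketch should produce a binary map of the blank pixels in a hypothetical @file{image.fits}:
+
+@example
+$ astarithmetic image.fits isblank
+@end example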
 
 @item where
-Change the input (pixel) value @emph{where}/if a certain condition
-holds. The conditional operators above can be used to define the
-condition. Three operands are required for @command{where}. The input
-format is demonstrated in this simplified example:
+Change the input (pixel) value @emph{where}/if a certain condition holds.
+The conditional operators above can be used to define the condition.
+Three operands are required for @command{where}.
+The input format is demonstrated in this simplified example:
 
 @example
 $ astarithmetic modify.fits binary.fits if-true.fits where
 @end example
 
-The value of any pixel in @file{modify.fits} that corresponds to a non-zero
-@emph{and} non-blank pixel of @file{binary.fits} will be changed to the
-value of the same pixel in @file{if-true.fits} (this may also be a
-number). The 3rd and 2nd popped operands (@file{modify.fits} and
-@file{binary.fits} respectively, see @ref{Reverse polish notation}) have to
-have the same dimensions/size. @file{if-true.fits} can be either a number,
-or have the same dimension/size as the other two.
+The value of any pixel in @file{modify.fits} that corresponds to a non-zero 
@emph{and} non-blank pixel of @file{binary.fits} will be changed to the value 
of the same pixel in @file{if-true.fits} (this may also be a number).
+The 3rd and 2nd popped operands (@file{modify.fits} and @file{binary.fits} 
respectively, see @ref{Reverse polish notation}) have to have the same 
dimensions/size.
+@file{if-true.fits} can be either a number, or have the same dimension/size as 
the other two.
 
-The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or
-@code{unsigned char} in standard C) type (see @ref{Numeric data types}). It
-is treated as a binary dataset (with only two values: zero and non-zero,
-hence the name @code{binary.fits} in this example).  However, commonly you
-won't be dealing with an actual FITS file of a condition/binary image. You
-will probably define the condition in the same run based on some other
-reference image and use the conditional and logical operators above to make
-a true/false (or one/zero) image for you internally. For example the case
-below:
+The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or 
@code{unsigned char} in standard C) type (see @ref{Numeric data types}).
+It is treated as a binary dataset (with only two values: zero and non-zero, 
hence the name @code{binary.fits} in this example).
+However, commonly you won't be dealing with an actual FITS file of a 
condition/binary image.
+You will probably define the condition in the same run based on some other 
reference image and use the conditional and logical operators above to make a 
true/false (or one/zero) image for you internally.
+For example the case below:
 
 @example
 $ astarithmetic in.fits reference.fits 100 gt new.fits where
 @end example
 
-In the example above, any of the @file{in.fits} pixels that has a value in
-@file{reference.fits} greater than @command{100}, will be replaced with the
-corresponding pixel in @file{new.fits}. Effectively the
-@code{reference.fits 100 gt} part created the condition/binary image which
-was added to the stack (in memory) and later used by @code{where}. The
-command above is thus equivalent to these two commands:
+In the example above, any pixel of @file{in.fits} whose value in @file{reference.fits} is greater than @command{100} will be replaced with the corresponding pixel in @file{new.fits}.
+Effectively the @code{reference.fits 100 gt} part created the condition/binary 
image which was added to the stack (in memory) and later used by @code{where}.
+The command above is thus equivalent to these two commands:
 
 @example
 $ astarithmetic reference.fits 100 gt --output=binary.fits
 $ astarithmetic in.fits binary.fits new.fits where
 @end example
 
-Finally, the input operands are read and used independently, so you can use
-the same file more than once as any of the operands.
+Finally, the input operands are read and used independently, so you can use 
the same file more than once as any of the operands.
 
-When the 1st popped operand to @code{where} (@file{if-true.fits}) is a
-single number, it may be a NaN value (or any blank value, depending on its
-type) like the example below (see @ref{Blank pixels}). When the number is
-blank, it will be converted to the blank value of the type of the 3rd
-popped operand (@code{in.fits}). Hence, in the example below, all the
-pixels in @file{reference.fits} that have a value greater than 100, will
-become blank in the natural data type of @file{in.fits} (even though NaN
-values are only defined for floating point types).
+When the 1st popped operand to @code{where} (@file{if-true.fits}) is a single 
number, it may be a NaN value (or any blank value, depending on its type) like 
the example below (see @ref{Blank pixels}).
+When the number is blank, it will be converted to the blank value of the type 
of the 3rd popped operand (@code{in.fits}).
+Hence, in the example below, all the pixels in @file{reference.fits} that have 
a value greater than 100, will become blank in the natural data type of 
@file{in.fits} (even though NaN values are only defined for floating point 
types).
 
 @example
 $ astarithmetic in.fits reference.fits 100 gt nan where
 @end example
 
 @item bitand
-Bitwise AND operator: only bits with values of 1 in both popped operands
-will get the value of 1, the rest will be set to 0. For example (assuming
-numbers can be written as bit strings on the command-line): @code{00101000
-00100010 bitand} will give @code{00100000}. Note that the bitwise operators
-only work on integer type datasets.
+Bitwise AND operator: only bits with values of 1 in both popped operands will 
get the value of 1, the rest will be set to 0.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item bitor
-Bitwise inclusive OR operator: The bits where at least one of the two popped
-operands has a 1 value get a value of 1, the others 0. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 00100010 bitand} will give @code{00101010}. Note that the
-bitwise operators only work on integer type datasets.
+Bitwise inclusive OR operator: The bits where at least one of the two popped 
operands has a 1 value get a value of 1, the others 0.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitor} will give @code{00101010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item bitxor
-Bitwise exclusive OR operator: A bit will be 1 if it differs between the
-two popped operands. For example (assuming numbers can be written as bit
-strings on the command-line): @code{00101000 00100010 bitand} will give
-@code{00001010}. Note that the bitwise operators only work on integer type
-datasets.
+Bitwise exclusive OR operator: A bit will be 1 if it differs between the two 
popped operands.
+For example (assuming numbers can be written as bit strings on the command-line): @code{00101000 00100010 bitxor} will give @code{00001010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item lshift
-Bitwise left shift operator: shift all the bits of the first operand to the
-left by a number of times given by the second operand. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 2 lshift} will give @code{10100000}. This is equivalent to
-multiplication by 4.  Note that the bitwise operators only work on integer
-type datasets.
+Bitwise left shift operator: shift all the bits of the first operand to the 
left by a number of times given by the second operand.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 lshift} will give @code{10100000}.
+This is equivalent to multiplication by 4.
+Note that the bitwise operators only work on integer type datasets.
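+
+For instance, since shifting left by two bits is multiplication by 4, this sketch should evaluate to 40:
+
+@example
+$ astarithmetic 10 2 lshift
+@end example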
 
 @item rshift
-Bitwise right shift operator: shift all the bits of the first operand to
-the right by a number of times given by the second operand. For example
-(assuming numbers can be written as bit strings on the command-line):
-@code{00101000 2 rshift} will give @code{00001010}. Note that the bitwise
-operators only work on integer type datasets.
+Bitwise right shift operator: shift all the bits of the first operand to the 
right by a number of times given by the second operand.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 rshift} will give @code{00001010}.
+Note that the bitwise operators only work on integer type datasets.
 
 @item bitnot
-Bitwise not (more formally known as one's complement) operator: flip all
-the bits of the popped operand (note that this is the only unary, or single
-operand, bitwise operator). In other words, any bit with a value of
-@code{0} is changed to @code{1} and vice-versa. For example (assuming
-numbers can be written as bit strings on the command-line): @code{00101000
-bitnot} will give @code{11010111}. Note that the bitwise operators only
-work on integer type datasets/numbers.
+Bitwise not (more formally known as one's complement) operator: flip all the 
bits of the popped operand (note that this is the only unary, or single 
operand, bitwise operator).
+In other words, any bit with a value of @code{0} is changed to @code{1} and 
vice-versa.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 bitnot} will give @code{11010111}.
+Note that the bitwise operators only work on integer type datasets/numbers.
 
 @item uint8
-Convert the type of the popped operand to 8-bit unsigned integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 8-bit unsigned integer type (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int8
-Convert the type of the popped operand to 8-bit signed integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 8-bit signed integer type (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint16
-Convert the type of the popped operand to 16-bit unsigned integer type
-(see @ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 16-bit unsigned integer type (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int16
-Convert the type of the popped operand to 16-bit signed integer (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 16-bit signed integer (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint32
-Convert the type of the popped operand to 32-bit unsigned integer type
-(see @ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 32-bit unsigned integer type (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item int32
-Convert the type of the popped operand to 32-bit signed integer type (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 32-bit signed integer type (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item uint64
-Convert the type of the popped operand to 64-bit unsigned integer (see
-@ref{Numeric data types}). The internal conversion of C will be used.
+Convert the type of the popped operand to 64-bit unsigned integer (see 
@ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item float32
-Convert the type of the popped operand to 32-bit (single precision)
-floating point (see @ref{Numeric data types}). The internal conversion of C
-will be used.
+Convert the type of the popped operand to 32-bit (single precision) floating 
point (see @ref{Numeric data types}).
+The internal conversion of C will be used.
 
 @item float64
-Convert the type of the popped operand to 64-bit (double precision)
-floating point (see @ref{Numeric data types}). The internal conversion of C
-will be used.
+Convert the type of the popped operand to 64-bit (double precision) floating 
point (see @ref{Numeric data types}).
+The internal conversion of C will be used.
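+
+As a minimal sketch of these conversion operators (with a hypothetical @file{image.fits}):
+
+@example
+$ astarithmetic image.fits float32 --output=f32.fits
+@end example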
 
 @item set-AAA
-Set the characters after the dash (@code{AAA} in the case shown here) as a
-name for the first popped operand on the stack. The named dataset will be
-freed from memory as soon as it is no longer needed, or if the name is
-reset to refer to another dataset later in the command. This operator thus
-enables re-usability of a dataset without having to re-read it from a file
-every time it is necessary during a process. When a dataset is necessary
-more than once, this operator can thus help simplify reading/writing on the
-command-line (thus avoiding potential bugs), while also speeding up the
-processing.
+Set the characters after the dash (@code{AAA} in the case shown here) as a 
name for the first popped operand on the stack.
+The named dataset will be freed from memory as soon as it is no longer needed, 
or if the name is reset to refer to another dataset later in the command.
+This operator thus enables re-usability of a dataset without having to re-read 
it from a file every time it is necessary during a process.
+When a dataset is necessary more than once, this operator can thus help 
simplify reading/writing on the command-line (thus avoiding potential bugs), 
while also speeding up the processing.
 
-Like all operators, this operator pops the top operand off of the main
-processing stack, but unlike other operands, it won't add anything back to
-the stack immediately. It will keep the popped dataset in memory through a
-separate list of named datasets (not on the main stack). That list will be
-used to add/copy any requested dataset to the main processing stack when
-the name is called.
+Like all operators, this operator pops the top operand off of the main processing stack, but unlike other operators, it won't add anything back to the stack immediately.
+It will keep the popped dataset in memory through a separate list of named 
datasets (not on the main stack).
+That list will be used to add/copy any requested dataset to the main 
processing stack when the name is called.
 
-The name to give the popped dataset is part of the operator's name. For
-example the @code{set-a} operator of the command below, gives the name
-``@code{a}'' to the contents of @file{image.fits}. This name is then used
-instead of the actual filename to multiply the dataset by two.
+The name to give the popped dataset is part of the operator's name.
+For example, the @code{set-a} operator in the command below gives the name ``@code{a}'' to the contents of @file{image.fits}.
+This name is then used instead of the actual filename to multiply the dataset by two.
 
 @example
 $ astarithmetic image.fits set-a a 2 x
 @end example
 
-The name can be any string, but avoid strings ending with standard filename
-suffixes (for example @file{.fits})@footnote{A dataset name like
-@file{a.fits} (which can be set with @command{set-a.fits}) will cause
-confusion in the initial parser of Arithmetic. It will assume this name
-is a FITS file, and if it is used multiple times, Arithmetic will abort,
-complaining that you haven't provided enough HDUs.}.
+The name can be any string, but avoid strings ending with standard filename 
suffixes (for example @file{.fits})@footnote{A dataset name like @file{a.fits} 
(which can be set with @command{set-a.fits}) will cause confusion in the 
initial parser of Arithmetic.
+It will assume this name is a FITS file, and if it is used multiple times, 
Arithmetic will abort, complaining that you haven't provided enough HDUs.}.
 
-One example of the usefulness of this operator is in the @code{where}
-operator. For example, let's assume you want to mask all pixels larger than
-@code{5} in @file{image.fits} (extension number 1) with a NaN
-value. Without setting a name for the dataset, you have to read the file
-two times from memory in a command like this:
+One example of the usefulness of this operator is in the @code{where} operator.
+For example, let's assume you want to mask all pixels larger than @code{5} in 
@file{image.fits} (extension number 1) with a NaN value.
+Without setting a name for the dataset, you have to read the file into memory two times, in a command like this:
 
 @example
 $ astarithmetic image.fits image.fits 5 gt nan where -g1
 @end example
 
-But with this operator you can simply give @file{image.fits} the name
-@code{i} and simplify the command above to the more readable one below
-(which greatly helps when the filename is long):
+But with this operator you can simply give @file{image.fits} the name @code{i} 
and simplify the command above to the more readable one below (which greatly 
helps when the filename is long):
 
 @example
 $ astarithmetic image.fits set-i   i i 5 gt nan where
 @end example
 
 @item tofile-AAA
-Write the top operand on the operands stack into a file called @code{AAA}
-(can be any FITS file name) without changing the operands stack. If you
-don't need the dataset any more and would like to free it, see the
-@code{tofilefree} operator below.
-
-By default, any file that is given to this operator is deleted before
-Arithmetic actually starts working on the input datasets. The deletion can
-be deactivated with the @option{--dontdelete} option (as in all Gnuastro
-programs, see @ref{Input output options}). If the same FITS file is given
-to this operator multiple times, it will contain multiple extensions (in
-the same order that it was called.
-
-For example the operator @command{tofile-check.fits} will write the top
-operand to @file{check.fits}. Since it doesn't modify the operands stack,
-this operator is very convenient when you want to debug, or understanding,
-a string of operators and operands given to Arithmetic: simply put
-@command{tofile-AAA} anywhere in the process to see what is happening
-behind the scenes without modifying the overall process.
+Write the top operand on the operands stack into a file called @code{AAA} (can 
be any FITS file name) without changing the operands stack.
+If you don't need the dataset any more and would like to free it, see the 
@code{tofilefree} operator below.
+
+By default, any file that is given to this operator is deleted before 
Arithmetic actually starts working on the input datasets.
+The deletion can be deactivated with the @option{--dontdelete} option (as in 
all Gnuastro programs, see @ref{Input output options}).
+If the same FITS file is given to this operator multiple times, it will contain multiple extensions (in the same order that it was called).
+
+For example, the operator @command{tofile-check.fits} will write the top operand to @file{check.fits}.
+Since it doesn't modify the operands stack, this operator is very convenient when you want to debug, or understand, a string of operators and operands given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to see what is happening behind the scenes without modifying the overall process.
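+
+As a minimal sketch (re-using the hypothetical @file{image.fits} from above), the command below writes the intermediate thresholded mask to @file{check.fits} for inspection, while the final masking continues unchanged:
+
+@example
+$ astarithmetic image.fits set-i   i i 5 gt tofile-check.fits  \
+                nan where
+@end example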
 
 @item tofilefree-AAA
-Similar to the @code{tofile} operator, with the only difference that the
-dataset that is written to a file is popped from the operand stack and
-freed from memory (cannot be used any more).
+Similar to the @code{tofile} operator, with the only difference that the 
dataset that is written to a file is popped from the operand stack and freed 
from memory (cannot be used any more).
 
 @end table
 
 @cartouche
 @noindent
-@strong{Blank pixels in Arithmetic:} Blank pixels in the image (see
-@ref{Blank pixels}) will be stored based on the data type. When the input
-is floating point type, blank values are NaN. One aspect of NaN values is
-that by definition they will fail on @emph{any} comparison. Hence both
-equal and not-equal operators will fail when both their operands are NaN!
-Therefore, the only way to guarantee selection of blank pixels is through
-the @command{isblank} operator explained above.
+@strong{Blank pixels in Arithmetic:} Blank pixels in the image (see @ref{Blank 
pixels}) will be stored based on the data type.
+When the input is floating point type, blank values are NaN.
+One aspect of NaN values is that by definition they will fail on @emph{any} 
comparison.
+Hence both equal and not-equal operators will fail when both their operands 
are NaN!
+Therefore, the only way to guarantee selection of blank pixels is through the 
@command{isblank} operator explained above.
 
-One way you can exploit this property of the NaN value to your advantage is
-when you want a fully zero-valued image (even over the blank pixels) based
-on an already existing image (with same size and world coordinate system
-settings). The following command will produce this for you:
+One way you can exploit this property of the NaN value to your advantage is when you want a fully zero-valued image (even over the blank pixels) based on an already existing image (with the same size and world coordinate system settings).
+The following command will produce this for you:
 
 @example
 $ astarithmetic input.fits nan eq --output=all-zeros.fits
 @end example
 
 @noindent
-Note that on the command-line you can write NaN in any case (for example
-@command{NaN}, or @command{NAN} are also acceptable). Reading NaN as a
-floating point number in Gnuastro isn't case-sensitive.
+Note that on the command-line you can write NaN in any case (for example 
@command{NaN}, or @command{NAN} are also acceptable).
+Reading NaN as a floating point number in Gnuastro isn't case-sensitive.
 @end cartouche
 
 
 @node Invoking astarithmetic,  , Arithmetic operators, Arithmetic
 @subsection Invoking Arithmetic
 
-Arithmetic will do pixel to pixel arithmetic operations on the individual
-pixels of input data and/or numbers. For the full list of operators with
-explanations, please see @ref{Arithmetic operators}. Any operand that only
-has a single element (number, or single pixel FITS image) will be read as a
-number, the rest of the inputs must have the same dimensions. The general
-template is:
+Arithmetic will do pixel-to-pixel arithmetic operations on the individual pixels of input data and/or numbers.
+For the full list of operators with explanations, please see @ref{Arithmetic operators}.
+Any operand that only has a single element (number, or single pixel FITS image) will be read as a number; the rest of the inputs must have the same dimensions.
+The general template is:
 
 @example
 $ astarithmetic [OPTION...] ASTRdata1 [ASTRdata2] OPERATOR ...
@@ -13710,92 +10164,60 @@ $ astarithmetic img1.fits img2.fits img3.fits median  
              \
                 -h0 -h1 -h2 --out=median.fits
 @end example
 
-Arithmetic's notation for giving operands to operators is fully described
-in @ref{Reverse polish notation}. The output dataset is last remaining
-operand on the stack. When the output dataset a single number, it will be
-printed on the command-line. When the output is an array, it will be stored
-as a file.
-
-The name of the final file can be specified with the @option{--output}
-option, but if its not given, Arithmetic will use ``automatic output'' on
-the name of the first FITS image encountered to generate an output file
-name, see @ref{Automatic output}. By default, if the output file already
-exists, it will be deleted before Arithmetic starts operation. However,
-this can be disabled with the @option{--dontdelete} option (see below). At
-any point during Arithmetic's operation, you can also write the top operand
-on the stack to a file, using the @code{tofile} or @code{tofilefree}
-operators, see @ref{Arithmetic operators}.
-
-By default, the world coordinate system (WCS) information of the output
-dataset will be taken from the first input image (that contains a WCS) on
-the command-line. This can be modified with the @option{--wcsfile} and
-@option{--wcshdu} options described below. When the @option{--quiet} option
-isn't given, the name and extension of the dataset used for the output's
-WCS is printed on the command-line.
-
-Through operators like those starting with @code{collapse-}, the
-dimensionality of the inputs may not be the same as the outputs. By
-default, when the output is 1D, Arithmetic will write it as a table, not an
-image/array. The format of the output table (plain text or FITS ASCII or
-binary) can be set with the @option{--tableformat} option, see @ref{Input
-output options}). You can disable this feature (write 1D arrays as FITS
-images/arrays, or to the standard output) with the @option{--onedasimage}
-or @option{--onedonstdout} options.
-
-See @ref{Common options} for a review of the options in all Gnuastro
-programs. Arithmetic just redefines the @option{--hdu} and
-@option{--dontdelete} options as explained below.
+Arithmetic's notation for giving operands to operators is fully described in 
@ref{Reverse polish notation}.
+The output dataset is the last remaining operand on the stack.
+When the output dataset is a single number, it will be printed on the command-line.
+When the output is an array, it will be stored as a file.
+
+The name of the final file can be specified with the @option{--output} option, but if it's not given, Arithmetic will use ``automatic output'' on the name of the first FITS image encountered to generate an output file name, see @ref{Automatic output}.
+By default, if the output file already exists, it will be deleted before 
Arithmetic starts operation.
+However, this can be disabled with the @option{--dontdelete} option (see 
below).
+At any point during Arithmetic's operation, you can also write the top operand 
on the stack to a file, using the @code{tofile} or @code{tofilefree} operators, 
see @ref{Arithmetic operators}.
+
+By default, the world coordinate system (WCS) information of the output 
dataset will be taken from the first input image (that contains a WCS) on the 
command-line.
+This can be modified with the @option{--wcsfile} and @option{--wcshdu} options 
described below.
+When the @option{--quiet} option isn't given, the name and extension of the 
dataset used for the output's WCS is printed on the command-line.
+
+Through operators like those starting with @code{collapse-}, the 
dimensionality of the inputs may not be the same as the outputs.
+By default, when the output is 1D, Arithmetic will write it as a table, not an 
image/array.
+The format of the output table (plain text or FITS ASCII or binary) can be set with the @option{--tableformat} option (see @ref{Input output options}).
+You can disable this feature (write 1D arrays as FITS images/arrays, or to the 
standard output) with the @option{--onedasimage} or @option{--onedonstdout} 
options.
+
+See @ref{Common options} for a review of the options in all Gnuastro programs.
+Arithmetic just redefines the @option{--hdu} and @option{--dontdelete} options 
as explained below.
 
 @table @option
 
 @item -h INT/STR
 @itemx --hdu INT/STR
-The header data unit of the input FITS images, see @ref{Input output
-options}. Unlike most options in Gnuastro (which will ultimately only have
-one value for this option), Arithmetic allows @option{--hdu} to be called
-multiple times and the value of each invocation will be stored separately
-(for the unlimited number of input images you would like to use). Recall
-that for other programs this (common) option only takes a single value. So
-in other programs, if you specify it multiple times on the command-line,
-only the last value will be used and in the configuration files, it will be
-ignored if it already has a value.
-
-The order of the values to @option{--hdu} has to be in the same order as
-input FITS images. Options are first read from the command-line (from left
-to right), then top-down in each configuration file, see @ref{Configuration
-file precedence}.
-
-If the number of HDUs is less than the number of input images,
-Arithmetic will abort and notify you. However, if there are more HDUs
-than FITS images, there is no problem: they will be used in the given
-order (every time a FITS image comes up on the stack) and the extra
-HDUs will be ignored in the end. So there is no problem with having
-extra HDUs in the configuration files and by default several HDUs with
-a value of @option{0} are kept in the system-wide configuration file
-when you install Gnuastro.
+The header data unit of the input FITS images, see @ref{Input output options}.
+Unlike most options in Gnuastro (which will ultimately only have one value for 
this option), Arithmetic allows @option{--hdu} to be called multiple times and 
the value of each invocation will be stored separately (for the unlimited 
number of input images you would like to use).
+Recall that for other programs this (common) option only takes a single value.
+So in other programs, if you specify it multiple times on the command-line, 
only the last value will be used and in the configuration files, it will be 
ignored if it already has a value.
+
+The order of the values to @option{--hdu} has to be in the same order as input 
FITS images.
+Options are first read from the command-line (from left to right), then 
top-down in each configuration file, see @ref{Configuration file precedence}.
+
+If the number of HDUs is less than the number of input images, Arithmetic will 
abort and notify you.
+However, if there are more HDUs than FITS images, there is no problem: they 
will be used in the given order (every time a FITS image comes up on the stack) 
and the extra HDUs will be ignored in the end.
+So there is no problem with having extra HDUs in the configuration files and 
by default several HDUs with a value of @option{0} are kept in the system-wide 
configuration file when you install Gnuastro.
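+
+For example, in the hypothetical command below, @file{a.fits} will be read from HDU 1 and @file{b.fits} from HDU 4, following the order described above:
+
+@example
+$ astarithmetic a.fits b.fits + --hdu=1 --hdu=4
+@end example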
 
 @item -g INT/STR
 @itemx --globalhdu INT/STR
-Use the value to this option as the HDU of all input FITS files. This
-option is very convenient when you have many input files and the dataset of
-interest is in the same HDU of all the files. When this option is called,
-any values given to the @option{--hdu} option (explained above) are ignored
-and will not be used.
+Use the value to this option as the HDU of all input FITS files.
+This option is very convenient when you have many input files and the dataset 
of interest is in the same HDU of all the files.
+When this option is called, any values given to the @option{--hdu} option 
(explained above) are ignored and will not be used.
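+
+For example, the sketch below (with hypothetical file names) reads extension 1 of all three inputs through a single option:
+
+@example
+$ astarithmetic a.fits b.fits c.fits median -g1
+@end example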
 
 @item -w STR
 @itemx --wcsfile STR
-FITS Filename containing the WCS structure that must be written to the
-output. The HDU/extension should be specified with @option{--wcshdu}.
+FITS filename containing the WCS structure that must be written to the output.
+The HDU/extension should be specified with @option{--wcshdu}.
 
-When this option is used, the respective WCS will be read before any
-processing is done on the command-line and directly used in the final
-output. If the given file doesn't have any WCS, then the default WCS (first
-file on the command-line with WCS) will be used in the output.
+When this option is used, the respective WCS will be read before any 
processing is done on the command-line and directly used in the final output.
+If the given file doesn't have any WCS, then the default WCS (first file on 
the command-line with WCS) will be used in the output.
 
-This option will mostly be used when the default file (first of the set of
-inputs) is not the one containing your desired WCS. But with this option,
-you can also use Arithmetic to rewrite/change the WCS of an existing FITS
-dataset from another file:
+This option will mostly be used when the default file (first of the set of 
inputs) is not the one containing your desired WCS.
+But with this option, you can also use Arithmetic to rewrite/change the WCS of 
an existing FITS dataset from another file:
 
 @example
 $ astarithmetic data.fits --wcsfile=other.fits -ofinal.fits
@@ -13803,59 +10225,41 @@ $ astarithmetic data.fits --wcsfile=other.fits 
-ofinal.fits
 
 @item -W STR
 @itemx --wcshdu STR
-HDU/extension to read the WCS within the file given to
-@option{--wcsfile}. For more, see the description of @option{--wcsfile}.
+HDU/extension to read the WCS within the file given to @option{--wcsfile}.
+For more, see the description of @option{--wcsfile}.
 
 @item -O
 @itemx --onedasimage
-When final dataset to write as output only has one dimension, write it as a
-FITS image/array. By default, if the output is 1D, it will be written as a
-table, see above.
+When the final dataset to be written as output has only one dimension, write it as a FITS image/array.
+By default, if the output is 1D, it will be written as a table, see above.
 
 @item -s
 @itemx --onedonstdout
-When final dataset to write as output only has one dimension, print it on
-the standard output, not in a file. By default, if the output is 1D, it
-will be written as a table, see above.
+When the final dataset to be written as output has only one dimension, print it on the standard output, not in a file.
+By default, if the output is 1D, it will be written as a table, see above.
 
 @item -D
 @itemx --dontdelete
-Don't delete the output file, or files given to the @code{tofile} or
-@code{tofilefree} operators, if they already exist. Instead append the
-desired datasets to the extensions that already exist in the respective
-file. Note it doesn't matter if the final output file name is given with
-the @option{--output} option, or determined automatically.
-
-Arithmetic treats this option differently from its default operation in
-other Gnuastro programs (see @ref{Input output options}). If the output
-file exists, when other Gnuastro programs are called with
-@option{--dontdelete}, they simply complain and abort. But when Arithmetic
-is called with @option{--dontdelete}, it will appended the dataset(s) to
-the existing extension(s) in the file.
+Don't delete the output file, or files given to the @code{tofile} or 
@code{tofilefree} operators, if they already exist.
+Instead, append the desired datasets to the extensions that already exist in the respective file.
+Note it doesn't matter if the final output file name is given with the 
@option{--output} option, or determined automatically.
+
+Arithmetic treats this option differently from its default operation in other 
Gnuastro programs (see @ref{Input output options}).
+If the output file exists, when other Gnuastro programs are called with 
@option{--dontdelete}, they simply complain and abort.
+But when Arithmetic is called with @option{--dontdelete}, it will append the dataset(s) to the existing extension(s) in the file.
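+
+For example, if @file{out.fits} was created by the first (hypothetical) command below, the second command will append its result to @file{out.fits} as a new extension instead of deleting the file first:
+
+@example
+$ astarithmetic image.fits 2 x  -oout.fits
+$ astarithmetic image.fits 2 /  -oout.fits --dontdelete
+@end example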
 @end table
 
-Arithmetic accepts two kinds of input: images and numbers. Images are
-considered to be any of the inputs that is a file name of a recognized type
-(see @ref{Arguments}) and has more than one element/pixel. Numbers on the
-command-line will be read into the smallest type (see @ref{Numeric data
-types}) that can store them, so @command{-2} will be read as a @code{char}
-type (which is signed on most systems and can thus keep negative values),
-@command{2500} will be read as an @code{unsigned short} (all positive
-numbers will be read as unsigned), while @code{3.1415926535897} will be
-read as a @code{double} and @code{3.14} will be read as a @code{float}. To
-force a number to be read as float, add a @code{f} after it, so
-@command{5f} will be added to the stack as @code{float} (see @ref{Reverse
-polish notation}).
+Arithmetic accepts two kinds of input: images and numbers.
+An input is considered to be an image when it is a file name of a recognized type (see @ref{Arguments}) and has more than one element/pixel.
+Numbers on the command-line will be read into the smallest type (see 
@ref{Numeric data types}) that can store them, so @command{-2} will be read as 
a @code{char} type (which is signed on most systems and can thus keep negative 
values), @command{2500} will be read as an @code{unsigned short} (all positive 
numbers will be read as unsigned), while @code{3.1415926535897} will be read as 
a @code{double} and @code{3.14} will be read as a @code{float}.
+To force a number to be read as float, add an @code{f} after it, so @command{5f} will be added to the stack as @code{float} (see @ref{Reverse polish notation}).
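+
+For example (a sketch following the type rules above), the first command below will do integer division because both operands are read as unsigned integers, while the @code{f} in the second forces floating point division:
+
+@example
+$ astarithmetic 7 2 /
+$ astarithmetic 7f 2 /
+@end example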
 
-Unless otherwise stated (in @ref{Arithmetic operators}), the operators can
-deal with numeric multiple data types (see @ref{Numeric data types}). For
-example in ``@command{a.fits b.fits +}'', the image types can be
-@code{long} and @code{float}. In such cases, C's internal type conversion
-will be used. The output type will be set to the higher-ranking type of the
-two inputs. Unsigned integer types have smaller ranking than their signed
-counterparts and floating point types have higher ranking than the integer
-types. So the internal C type conversions done in the example above are
-equivalent to this piece of C:
+Unless otherwise stated (in @ref{Arithmetic operators}), the operators can deal with multiple numeric data types (see @ref{Numeric data types}).
+For example in ``@command{a.fits b.fits +}'', the image types can be 
@code{long} and @code{float}.
+In such cases, C's internal type conversion will be used.
+The output type will be set to the higher-ranking type of the two inputs.
+Unsigned integer types have smaller ranking than their signed counterparts and 
floating point types have higher ranking than the integer types.
+So the internal C type conversions done in the example above are equivalent to 
this piece of C:
 
 @example
 size_t i;
@@ -13865,43 +10269,27 @@ for(i=0;i<100;++i) out[i]=a[i]+b[i];
 @end example
 
 @noindent
-Relying on the default C type conversion significantly speeds up the
-processing and also requires less RAM (when using very large images).
+Relying on the default C type conversion significantly speeds up the 
processing and also requires less RAM (when using very large images).
 
-Some operators can only work on integer types (of any length, for example
-bitwise operators) while others only work on floating point types,
-(currently only the @code{pow} operator). In such cases, if the operand
-type(s) are different, an error will be printed. Arithmetic also comes with
-internal type conversion operators which you can use to convert the data
-into the appropriate type, see @ref{Arithmetic operators}.
+Some operators can only work on integer types (of any length, for example bitwise operators) while others only work on floating point types (currently only the @code{pow} operator).
+In such cases, if the operand type(s) are different, an error will be printed.
+Arithmetic also comes with internal type conversion operators which you can 
use to convert the data into the appropriate type, see @ref{Arithmetic 
operators}.
 
 @cindex Options
-The hyphen (@command{-}) can be used both to specify options (see
-@ref{Options}) and also to specify a negative number which might be
-necessary in your arithmetic. In order to enable you to do this, Arithmetic
-will first parse all the input strings and if the first character after a
-hyphen is a digit, then that hyphen is temporarily replaced by the vertical
-tab character which is not commonly used. The arguments are then parsed and
-these strings will not be specified as an option. Then the given arguments
-are parsed and any vertical tabs are replaced back with a hyphen so they
-can be read as negative numbers. Therefore, as long as the names of the
-files you want to work on, don't start with a vertical tab followed by a
-digit, there is no problem. An important consequence of this implementation
-is that you should not write negative fractions like this: @command{-.3},
-instead write them as @command{-0.3}.
+The hyphen (@command{-}) can be used both to specify options (see 
@ref{Options}) and also to specify a negative number which might be necessary 
in your arithmetic.
+In order to enable you to do this, Arithmetic will first parse all the input 
strings and if the first character after a hyphen is a digit, then that hyphen 
is temporarily replaced by the vertical tab character which is not commonly 
used.
+The arguments are then parsed and these strings will not be interpreted as options.
+Afterwards, any vertical tabs are replaced back with a hyphen so they can be read as negative numbers.
+Therefore, as long as the names of the files you want to work on don't start with a vertical tab followed by a digit, there is no problem.
+An important consequence of this implementation is that you should not write negative fractions like @command{-.3}; instead, write them as @command{-0.3}.
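+
+For example, the hypothetical command below adds a negative fraction to every pixel; note the @command{-0.3} (not @command{-.3}):
+
+@example
+$ astarithmetic image.fits -0.3 +
+@end example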
 
 @cindex AWK
 @cindex GNU AWK
-Without any images, Arithmetic will act like a simple calculator and print
-the resulting output number on the standard output like the first example
-above. If you really want such calculator operations on the command-line,
-AWK (GNU AWK is the most common implementation) is much faster, easier and
-much more powerful. For example, the numerical one-line example above can
-be done with the following command. In general AWK is a fantastic tool and
-GNU AWK has a wonderful manual
-(@url{https://www.gnu.org/software/gawk/manual/}). So if you often confront
-situations like this, or have to work with large text tables/catalogs, be
-sure to checkout AWK and simplify your life.
+Without any images, Arithmetic will act like a simple calculator and print the 
resulting output number on the standard output like the first example above.
+If you really want such calculator operations on the command-line, AWK (GNU 
AWK is the most common implementation) is much faster, easier and much more 
powerful.
+For example, the numerical one-line example above can be done with the 
following command.
+In general AWK is a fantastic tool and GNU AWK has a wonderful manual 
(@url{https://www.gnu.org/software/gawk/manual/}).
+So if you often confront situations like this, or have to work with large text tables/catalogs, be sure to check out AWK and simplify your life.
 
 @example
 $ echo "" | awk '@{print (10.32-3.84)^2.7@}'
@@ -13930,25 +10318,15 @@ $ echo "" | awk '@{print (10.32-3.84)^2.7@}'
 @cindex Weighted average
 @cindex Average, weighted
 @cindex Kernel, convolution
-On an image, convolution can be thought of as a process to blur or remove
-the contrast in an image. If you are already familiar with the concept and
-just want to run Convolve, you can jump to @ref{Convolution kernel} and
-@ref{Invoking astconvolve} and skip the lengthy introduction on the basic
-definitions and concepts of convolution.
-
-There are generally two methods to convolve an image. The first and
-more intuitive one is in the ``spatial domain'' or using the actual
-image pixel values, see @ref{Spatial domain convolution}. The second
-method is when we manipulate the ``frequency domain'', or work on the
-magnitudes of the different frequencies that constitute the image, see
-@ref{Frequency domain and Fourier operations}. Understanding
-convolution in the spatial domain is more intuitive and thus
-recommended if you are just starting to learn about
-convolution. However, getting a good grasp of the frequency domain is
-a little more involved and needs some concentration and some
-mathematical proofs. However, its reward is a faster operation and
-more importantly a very fundamental understanding of this very
-important operation.
+On an image, convolution can be thought of as a process to blur the image or remove its contrast.
+If you are already familiar with the concept and just want to run Convolve, 
you can jump to @ref{Convolution kernel} and @ref{Invoking astconvolve} and 
skip the lengthy introduction on the basic definitions and concepts of 
convolution.
+
+There are generally two methods to convolve an image.
+The first and more intuitive one is in the ``spatial domain'' or using the 
actual image pixel values, see @ref{Spatial domain convolution}.
+The second method is when we manipulate the ``frequency domain'', or work on 
the magnitudes of the different frequencies that constitute the image, see 
@ref{Frequency domain and Fourier operations}.
+Understanding convolution in the spatial domain is more intuitive and thus 
recommended if you are just starting to learn about convolution.
+However, getting a good grasp of the frequency domain is a little more involved and needs some concentration and some mathematical proofs.
+Its reward, however, is a faster operation and more importantly a very fundamental understanding of this very important operation.
 
 @cindex Detection
 @cindex Atmosphere
@@ -13956,27 +10334,16 @@ important operation.
 @cindex Cosmic rays
 @cindex Pixel mixing
 @cindex Mixing pixel values
-Convolution of an image will generally result in blurring the image because
-it mixes pixel values. In other words, if the image has sharp differences
-in neighboring pixel values@footnote{In astronomy, the only major time we
-confront such sharp borders in signal are cosmic rays. All other sources of
-signal in an image are already blurred by the atmosphere or the optics of
-the instrument.}, those sharp differences will become smoother. This has
-very good consequences in detection of signal in noise for example. In an
-actual observed image, the variation in neighboring pixel values due to
-noise can be very high. But after convolution, those variations will
-decrease and we have a better hope in detecting the possible underlying
-signal. Another case where convolution is extensively used is in mock
-images and modeling in general, convolution can be used to simulate the
-effect of the atmosphere or the optical system on the mock profiles that we
-create, see @ref{PSF}. Convolution is a very interesting and important
-topic in any form of signal analysis (including astronomical
-observations). So we have thoroughly@footnote{A mathematician will
-certainly consider this explanation is incomplete and inaccurate. However
-this text is written for an understanding on the operations that are done
-on a real (not complex, discrete and noisy) astronomical image, not any
-general form of abstract function} explained the concepts behind it in the
-following sub-sections.
+Convolution of an image will generally result in blurring the image because it 
mixes pixel values.
+In other words, if the image has sharp differences in neighboring pixel 
values@footnote{In astronomy, the only major time we confront such sharp 
borders in signal are cosmic rays.
+All other sources of signal in an image are already blurred by the atmosphere 
or the optics of the instrument.}, those sharp differences will become smoother.
+This has very good consequences in the detection of signal in noise, for example.
+In an actual observed image, the variation in neighboring pixel values due to 
noise can be very high.
+But after convolution, those variations will decrease and we have a better 
hope in detecting the possible underlying signal.
+Another case where convolution is extensively used is in mock images and modeling in general: convolution can be used to simulate the effect of the atmosphere or the optical system on the mock profiles that we create, see @ref{PSF}.
+Convolution is a very interesting and important topic in any form of signal 
analysis (including astronomical observations).
+So we have thoroughly@footnote{A mathematician will certainly consider this explanation incomplete and inaccurate.
+However, this text is written for an understanding of the operations that are done on a real (not complex, discrete and noisy) astronomical image, not any general form of abstract function.} explained the concepts behind it in the following sub-sections.
 
 @menu
 * Spatial domain convolution::  Only using the input image values.
@@ -13989,15 +10356,9 @@ following sub-sections.
 @node Spatial domain convolution, Frequency domain and Fourier operations, 
Convolve, Convolve
 @subsection Spatial domain convolution
 
-The pixels in an input image represent different ``spatial'' positions,
-therefore when convolution is done only using the actual input pixel
-values, we name the process as being done in the ``Spatial domain''. In
-particular this is in contrast to the ``frequency domain'' that we will
-discuss later in @ref{Frequency domain and Fourier operations}. In the
-spatial domain (and in realistic situations where the image and the
-convolution kernel don't extend to infinity), convolution is the process of
-changing the value of one pixel to the @emph{weighted} average of all the
-pixels in its @emph{neighborhood}.
+The pixels in an input image represent different ``spatial'' positions; therefore, when convolution is done only using the actual input pixel values, we name the process as being done in the ``Spatial domain''.
+In particular this is in contrast to the ``frequency domain'' that we will 
discuss later in @ref{Frequency domain and Fourier operations}.
+In the spatial domain (and in realistic situations where the image and the 
convolution kernel don't extend to infinity), convolution is the process of 
changing the value of one pixel to the @emph{weighted} average of all the 
pixels in its @emph{neighborhood}.
 
 The `neighborhood' of each pixel (how many pixels in which direction) and
 the `weight' function (how much each neighboring pixel should contribute
@@ -14012,43 +10373,29 @@ as a ``kernel''@footnote{Also known as filter, here 
we will use `kernel'.}.
 @node Convolution process, Edges in the spatial domain, Spatial domain 
convolution, Spatial domain convolution
 @subsubsection Convolution process
 
-In convolution, the kernel specifies the weight and positions of the
-neighbors of each pixel. To find the convolved value of a pixel, the
-central pixel of the kernel is placed on that pixel. The values of each
-overlapping pixel in the kernel and image are multiplied by each other and
-summed for all the kernel pixels. To have one pixel in the center, the
-sides of the convolution kernel have to be an odd number. This process
-effectively mixes the pixel values of each pixel with its neighbors,
-resulting in a blurred image compared to the sharper input image.
+In convolution, the kernel specifies the weight and positions of the neighbors 
of each pixel.
+To find the convolved value of a pixel, the central pixel of the kernel is 
placed on that pixel.
+The values of each overlapping pixel in the kernel and image are multiplied by 
each other and summed for all the kernel pixels.
+To have one pixel in the center, the sides of the convolution kernel have to 
be an odd number.
+This process effectively mixes the pixel values of each pixel with its 
neighbors, resulting in a blurred image compared to the sharper input image.
 
 @cindex Linear spatial filtering
-Formally, convolution is one kind of linear `spatial filtering' in image
-processing texts. If we assume that the kernel has @mymath{2a+1} and
-@mymath{2b+1} pixels on each side, the convolved value of a pixel placed at
-@mymath{x} and @mymath{y} (@mymath{C_{x,y}}) can be calculated from the
-neighboring pixel values in the input image (@mymath{I}) and the kernel
-(@mymath{K}) from
+Formally, convolution is one kind of linear `spatial filtering' in image 
processing texts.
+If we assume that the kernel has @mymath{2a+1} and @mymath{2b+1} pixels on 
each side, the convolved value of a pixel placed at @mymath{x} and @mymath{y} 
(@mymath{C_{x,y}}) can be calculated from the neighboring pixel values in the 
input image (@mymath{I}) and the kernel (@mymath{K}) from
 
 @dispmath{C_{x,y}=\sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t}.}
 
 @cindex Correlation
 @cindex Convolution
-Any pixel coordinate that is outside of the image in the equation above
-will be considered to be zero. When the kernel is symmetric about its
-center the blurred image has the same orientation as the original
-image. However, if the kernel is not symmetric, the image will be affected
-in the opposite manner, this is a natural consequence of the definition of
-spatial filtering. In order to avoid this we can rotate the kernel about
-its center by 180 degrees so the convolved output can have the same
-original orientation. Technically speaking, only if the kernel is flipped
-the process is known @emph{Convolution}. If it isn't it is known as
-@emph{Correlation}.
+Any pixel coordinate that is outside of the image in the equation above will 
be considered to be zero.
+When the kernel is symmetric about its center, the blurred image has the same orientation as the original image.
+However, if the kernel is not symmetric, the image will be affected in the opposite manner; this is a natural consequence of the definition of spatial filtering.
+In order to avoid this we can rotate the kernel about its center by 180 
degrees so the convolved output can have the same original orientation.
+Technically speaking, the process is only known as @emph{Convolution} if the kernel is flipped.
+If it isn't, it is known as @emph{Correlation}.
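+
+For readers who prefer code to notation, below is a minimal C sketch of the equation above for a single output pixel (the function name and row-major array layout are hypothetical, only for illustration; overlapping coordinates outside the image are taken as zero, as described above):
+
+@example
+/* Convolved value of pixel (x,y): the kernel has (2a+1)x(2b+1)
+   pixels, the image has w x h pixels, both stored row-major.
+   Kernel pixels falling outside the image contribute zero. */
+float
+convolve_pixel(float *I, long w, long h, float *K, long a, long b,
+               long x, long y)
+@{
+  long s, t;
+  float sum=0.0f;
+  for(s=-a; s<=a; ++s)
+    for(t=-b; t<=b; ++t)
+      if( x+s>=0 && x+s<w && y+t>=0 && y+t<h )
+        sum += K[ (s+a)*(2*b+1) + (t+b) ] * I[ (x+s)*h + (y+t) ];
+  return sum;
+@}
+@end example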
 
-To be a weighted average, the sum of the weights (the pixels in the kernel)
-have to be unity. This will have the consequence that the convolved image
-of an object and unconvolved object will have the same brightness (see
-@ref{Flux Brightness and magnitude}), which is natural, because convolution
-should not eat up the object photons, it only disperses them.
+To be a weighted average, the sum of the weights (the pixels in the kernel) has to be unity.
+This will have the consequence that the convolved and unconvolved images of an object will have the same brightness (see @ref{Flux Brightness and magnitude}), which is natural, because convolution should not eat up the object photons, it only disperses them.
 
 
 
@@ -14057,93 +10404,53 @@ should not eat up the object photons, it only 
disperses them.
 @node Edges in the spatial domain,  , Convolution process, Spatial domain 
convolution
 @subsubsection Edges in the spatial domain
 
-In purely `linear' spatial filtering (convolution), there are problems on
-the edges of the input image. Here we will explain the problem in the
-spatial domain. For a discussion of this problem from the frequency domain
-perspective, see @ref{Edges in the frequency domain}. The problem
-originates from the fact that on the edges, in practice@footnote{Because we
-assumed the overlapping pixels outside the input image have a value of
-zero.}, the sum of the weights we use on the actual image pixels is not
-unity. For example, as discussed above, a profile in the center of an image
-will have the same brightness before and after convolution. However, for
-partially imaged profile on the edge of the image, the brightness (sum of
-its pixel fluxes within the image, see @ref{Flux Brightness and magnitude})
-will not be equal, some of the flux is going to be `eaten' by the edges.
-
-If you ran @command{$ make check} on the source files of Gnuastro, you can
-see the this effect by comparing the @file{convolve_frequency.fits} with
-@file{convolve_spatial.fits} in the @file{./tests/} directory. In the
-spatial domain, by default, no assumption will be made about pixels outside
-of the image or any blank pixels in the image. The problem explained above
-will also occur on the sides of blank regions (see @ref{Blank pixels}). The
-solution to this edge effect problem is only possible in the spatial
-domain. For pixels near the edge, we have to abandon the assumption that
-the sum of the kernel pixels is unity during the convolution
-process@footnote{ofcourse the sum of the kernel pixels still have to be
-unity in general.}. So taking @mymath{W} as the sum of the kernel pixels
-that overlapped with non-blank and in-image pixels, the equation in
-@ref{Convolution process} will become:
+In purely `linear' spatial filtering (convolution), there are problems on the 
edges of the input image.
+Here we will explain the problem in the spatial domain.
+For a discussion of this problem from the frequency domain perspective, see 
@ref{Edges in the frequency domain}.
+The problem originates from the fact that on the edges, in 
practice@footnote{Because we assumed the overlapping pixels outside the input 
image have a value of zero.}, the sum of the weights we use on the actual image 
pixels is not unity.
+For example, as discussed above, a profile in the center of an image will have 
the same brightness before and after convolution.
+However, for a partially imaged profile on the edge of the image, the brightness (sum of its pixel fluxes within the image, see @ref{Flux Brightness and magnitude}) will not be equal: some of the flux is going to be `eaten' by the edges.
+
+If you ran @command{$ make check} on the source files of Gnuastro, you can see this effect by comparing the @file{convolve_frequency.fits} with @file{convolve_spatial.fits} in the @file{./tests/} directory.
+In the spatial domain, by default, no assumption will be made about pixels 
outside of the image or any blank pixels in the image.
+The problem explained above will also occur on the sides of blank regions (see 
@ref{Blank pixels}).
+The solution to this edge effect problem is only possible in the spatial 
domain.
+For pixels near the edge, we have to abandon the assumption that the sum of the kernel pixels is unity during the convolution process@footnote{Of course, the sum of the kernel pixels still has to be unity in general.}.
+So taking @mymath{W} as the sum of the kernel pixels that overlapped with 
non-blank and in-image pixels, the equation in @ref{Convolution process} will 
become:
 
 @dispmath{C_{x,y}= { \sum_{s=-a}^{a}\sum_{t=-b}^{b}K_{s,t}\times{}I_{x+s,y+t} 
\over W}.}
 
 @noindent
-In this manner, objects which are near the edges of the image or blank
-pixels will also have the same brightness (within the image) before and
-after convolution. This correction is applied by default in Convolve when
-convolving in the spatial domain. To disable it, you can use the
-@option{--noedgecorrection} option. In the frequency domain, there is no
-way to avoid this loss of flux near the edges of the image, see @ref{Edges
-in the frequency domain} for an interpretation from the frequency domain
-perspective.
+In this manner, objects which are near the edges of the image or blank pixels 
will also have the same brightness (within the image) before and after 
convolution.
+This correction is applied by default in Convolve when convolving in the 
spatial domain.
+To disable it, you can use the @option{--noedgecorrection} option.
+In the frequency domain, there is no way to avoid this loss of flux near the 
edges of the image, see @ref{Edges in the frequency domain} for an 
interpretation from the frequency domain perspective.
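+
+As a rough sketch (assuming hypothetical @file{image.fits} and @file{kernel.fits} files; see @ref{Invoking astconvolve} for the options), the edge correction can be disabled like this:
+
+@example
+$ astconvolve image.fits --kernel=kernel.fits --domain=spatial  \
+              --noedgecorrection
+@end example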
 
-Note that the edge effect discussed here is different from the one in
-@ref{If convolving afterwards}. In making mock images we want to
-simulate a real observation. In a real observation the images of the
-galaxies on the sides of the CCD are first blurred by the atmosphere
-and instrument, then imaged. So light from the parts of a galaxy which
-are immediately outside the CCD will affect the parts of the galaxy
-which are covered by the CCD. Therefore in modeling the observation,
-we have to convolve an image that is larger than the input image by
-exactly half of the convolution kernel. We can hence conclude that
-this correction for the edges is only useful when working on actual
-observed images (where we don't have any more data on the edges) and
-not in modeling.
+Note that the edge effect discussed here is different from the one in @ref{If 
convolving afterwards}.
+In making mock images we want to simulate a real observation.
+In a real observation the images of the galaxies on the sides of the CCD are 
first blurred by the atmosphere and instrument, then imaged.
+So light from the parts of a galaxy which are immediately outside the CCD will 
affect the parts of the galaxy which are covered by the CCD.
+Therefore in modeling the observation, we have to convolve an image that is 
larger than the input image by exactly half of the convolution kernel.
+We can hence conclude that this correction for the edges is only useful when 
working on actual observed images (where we don't have any more data on the 
edges) and not in modeling.
 
 
 
 @node Frequency domain and Fourier operations, Spatial vs. Frequency domain, 
Spatial domain convolution, Convolve
 @subsection Frequency domain and Fourier operations
 
-Getting a good grip on the frequency domain is usually not an easy
-job! So we have decided to give the issue a complete review
-here. Convolution in the frequency domain (see @ref{Convolution
-theorem}) heavily relies on the concepts of Fourier transform
-(@ref{Fourier transform}) and Fourier series (@ref{Fourier series}) so
-we will be investigating these important operations first. It has
-become something of a clich@'e for people to say that the Fourier
-series ``is a way to represent a (wave-like) function as the sum of
-simple sine waves'' (from Wikipedia). However, sines themselves are
-abstract functions, so this statement really adds no extra layer of
-physical insight.
-
-Before jumping head-first into the equations and proofs, we will begin
-with a historical background to see how the importance of frequencies
-actually roots in our ancient desire to see everything in terms of
-circles. A short review of how the complex plane should be
-interpreted is then given. Having paved the way with these two
-basics, we define the Fourier series and subsequently the Fourier
-transform. The final aim is to explain discrete Fourier transform,
-however some very important concepts need to be solidified first: The
-Dirac comb, convolution theorem and sampling theorem. So each of these
-topics are explained in their own separate sub-sub-section before
-going on to the discrete Fourier transform. Finally we revisit (after
-@ref{Edges in the spatial domain}) the problem of convolution on the
-edges, but this time in the frequency domain. Understanding the
-sampling theorem and the discrete Fourier transform is very important
-in order to be able to pull out valuable science from the discrete
-image pixels. Therefore we have included the mathematical proofs and
-figures so you can have a clear understanding of these very important
-concepts.
+Getting a good grip on the frequency domain is usually not an easy job!
+So we have decided to give the issue a complete review here.
+Convolution in the frequency domain (see @ref{Convolution theorem}) heavily 
relies on the concepts of Fourier transform (@ref{Fourier transform}) and 
Fourier series (@ref{Fourier series}) so we will be investigating these 
important operations first.
+It has become something of a clich@'e for people to say that the Fourier 
series ``is a way to represent a (wave-like) function as the sum of simple sine 
waves'' (from Wikipedia).
+However, sines themselves are abstract functions, so this statement really 
adds no extra layer of physical insight.
+
+Before jumping head-first into the equations and proofs, we will begin with a historical background to see how the importance of frequencies is actually rooted in our ancient desire to see everything in terms of circles.
+A short review of how the complex plane should be interpreted is then given.
+Having paved the way with these two basics, we define the Fourier series and 
subsequently the Fourier transform.
+The final aim is to explain the discrete Fourier transform; however, some very important concepts need to be solidified first: the Dirac comb, convolution theorem and sampling theorem.
+So each of these topics is explained in its own separate sub-sub-section before going on to the discrete Fourier transform.
+Finally we revisit (after @ref{Edges in the spatial domain}) the problem of 
convolution on the edges, but this time in the frequency domain.
+Understanding the sampling theorem and the discrete Fourier transform is very 
important in order to be able to pull out valuable science from the discrete 
image pixels.
+Therefore we have included the mathematical proofs and figures so you can have 
a clear understanding of these very important concepts.
 
 @menu
 * Fourier series historical background::  Historical background.
@@ -14160,27 +10467,15 @@ concepts.
 
 @node Fourier series historical background, Circles and the complex plane, 
Frequency domain and Fourier operations, Frequency domain and Fourier operations
 @subsubsection Fourier series historical background
-Ever since the ancient times, the circle has been (and still is) the
-simplest shape for abstract comprehension. All you need is a center
-point and a radius and you are done. All the points on a circle are at
-a fixed distance from the center. However, the moment you try to
-connect this elegantly simple and beautiful abstract construct (the
-circle) with the real world (for example compute its area or its
-circumference), things become really hard (ideally, impossible)
-because the irrational number @mymath{\pi} gets involved.
-
-The key to understanding the Fourier series (thus the Fourier
-transform and finally the Discrete Fourier Transform) is our ancient
-desire to express everything in terms of circles or the most
-exceptionally simple and elegant abstract human construct. Most people
-prefer to say the same thing in a more ahistorical manner: to break a
-function into sines and cosines. As the term ``ancient'' in the
-previous sentence implies, Jean-Baptiste Joseph Fourier (1768 -- 1830
-A.D.) was not the first person to do this. The main reason we know
-this process by his name today is that he came up with an ingenious
-method to find the necessary coefficients (radius of) and frequencies
-(``speed'' of rotation on) the circles for any generic (integrable)
-function.
+Ever since ancient times, the circle has been (and still is) the simplest shape for abstract comprehension.
+All you need is a center point and a radius and you are done.
+All the points on a circle are at a fixed distance from the center.
+However, the moment you try to connect this elegantly simple and beautiful 
abstract construct (the circle) with the real world (for example compute its 
area or its circumference), things become really hard (ideally, impossible) 
because the irrational number @mymath{\pi} gets involved.
+
+The key to understanding the Fourier series (thus the Fourier transform and finally the Discrete Fourier Transform) is our ancient desire to express everything in terms of circles, the most exceptionally simple and elegant abstract human construct.
+Most people prefer to say the same thing in a more ahistorical manner: to 
break a function into sines and cosines.
+As the term ``ancient'' in the previous sentence implies, Jean-Baptiste Joseph 
Fourier (1768 -- 1830 A.D.) was not the first person to do this.
+The main reason we know this process by his name today is that he came up with 
an ingenious method to find the necessary coefficients (radius of) and 
frequencies (``speed'' of rotation on) the circles for any generic (integrable) 
function.
 
 @float Figure,epicycle
 
@@ -14188,79 +10483,43 @@ function.
 @c jump out of the text width.
 @cindex Qutb al-Din al-Shirazi
 @cindex al-Shirazi, Qutb al-Din
-@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along
-with two demonstrations of breaking a generic function using epicycles.}
-@caption{Epicycles and the Fourier series. Left: A demonstration of
-Mercury's epicycles relative to the ``center of the world'' by Qutb al-Din
-al-Shirazi (1236 -- 1311 A.D.) retrieved
-@url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from
-Wikipedia}. 
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif,
-Middle} and Right: How adding more epicycles (or terms in the Fourier
-series) will approximate functions. The
-@url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif,
-right} animation is also available.}
+@image{gnuastro-figures/epicycles, 15.2cm, , Middle ages epicycles along with 
two demonstrations of breaking a generic function using epicycles.}
+@caption{Epicycles and the Fourier series.
+Left: A demonstration of Mercury's epicycles relative to the ``center of the 
world'' by Qutb al-Din al-Shirazi (1236 -- 1311 A.D.) retrieved 
@url{https://commons.wikimedia.org/wiki/File:Ghotb2.jpg, from Wikipedia}.
+@url{https://commons.wikimedia.org/wiki/File:Fourier_series_square_wave_circles_animation.gif,
 Middle} and
+Right: How adding more epicycles (or terms in the Fourier series) will 
approximate functions.
+The 
@url{https://commons.wikimedia.org/wiki/File:Fourier_series_sawtooth_wave_circles_animation.gif,
 right} animation is also available.}
 @end float
 
-Like most aspects of mathematics, this process of interpreting
-everything in terms of circles, began for astronomical purposes. When
-astronomers noticed that the orbit of Mars and other outer planets,
-did not appear to be a simple circle (as everything should have been
-in the heavens). At some point during their orbit, the revolution of
-these planets would become slower, stop, go back a little (in what is
-known as the retrograde motion) and then continue going forward
-again.
-
-The correction proposed by Ptolemy (90 -- 168 A.D.) was the most
-agreed upon. He put the planets on Epicycles or circles whose center
-itself rotates on a circle whose center is the earth. Eventually, as
-observations became more and more precise, it was necessary to add
-more and more epicycles in order to explain the complex motions of the
-planets@footnote{See the Wikipedia page on ``Deferent and epicycle''
-for a more complete historical review.}. @ref{epicycle}(Left) shows an
-example depiction of the epicycles of Mercury in the late 13th
-century.
+Like most aspects of mathematics, this process of interpreting everything in terms of circles began for astronomical purposes: astronomers noticed that the orbits of Mars and the other outer planets did not appear to be simple circles (as everything should have been in the heavens).
+At some point during their orbit, the revolution of these planets would become slower, stop, go back a little (in what is known as the retrograde motion) and then continue going forward again.
+
+The correction proposed by Ptolemy (90 -- 168 A.D.) was the most agreed upon.
+He put the planets on epicycles, or circles whose center itself rotates on a circle whose center is the Earth.
+Eventually, as observations became more and more precise, it was necessary to add more and more epicycles in order to explain the complex motions of the planets@footnote{See the Wikipedia page on ``Deferent and epicycle'' for a more complete historical review.}.
+@ref{epicycle}(Left) shows an example depiction of the epicycles of Mercury in the late 13th century.
 
 @cindex Aristarchus of Samos
-Of course we now know that if they had abdicated the Earth from its
-throne in the center of the heavens and allowed the Sun to take its
-place, everything would become much simpler and true. But there wasn't
-enough observational evidence for changing the ``professional
-consensus'' of the time to this radical view suggested by a small
-minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be
-one of the first people to suggest the Sun being in the center of the
-universe. This approach to science (that the standard model is defined
-by consensus) and the fact that this consensus might be completely
-wrong still applies equally well to our models of particle physics and
-cosmology today.}. So the pre-Galilean astronomers chose to keep Earth
-in the center and find a correction to the models (while keeping the
-heavens a purely ``circular'' order).
-
-The main reason we are giving this historical background which might
-appear off topic is to give historical evidence that while such
-``approximations'' do work and are very useful for pragmatic reasons
-(like measuring the calendar from the movement of astronomical
-bodies). They offer no physical insight. The astronomers who were
-involved with the Ptolemaic world view had to add a huge number of
-epicycles during the centuries after Ptolemy in order to explain more
-accurate observations. Finally the death knell of this world-view was
-Galileo's observations with his new instrument (the telescope). So the
-physical insight, which is what Astronomers and Physicists are
-interested in (as opposed to Mathematicians and Engineers who just
-like proving and optimizing or calculating!) comes from being creative
-and not limiting our selves to such approximations. Even when they
-work.
+Of course, we now know that if they had removed the Earth from its throne in the center of the heavens and allowed the Sun to take its place, everything would have become much simpler (and true).
+But there was not enough observational evidence for changing the ``professional consensus'' of the time to this radical view suggested by a small minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be one of the first people to suggest the Sun being in the center of the universe.
+This approach to science (that the standard model is defined by consensus) and the fact that this consensus might be completely wrong still applies equally well to our models of particle physics and cosmology today.}.
+So the pre-Galilean astronomers chose to keep the Earth in the center and find a correction to the models (while keeping the heavens a purely ``circular'' order).
+
+The main reason we are giving this historical background (which might appear off-topic) is to show that while such ``approximations'' do work and are very useful for pragmatic purposes (like measuring the calendar from the movement of astronomical bodies), they offer no physical insight.
+The astronomers who were involved with the Ptolemaic world view had to add a huge number of epicycles during the centuries after Ptolemy in order to explain more accurate observations.
+Finally, the death knell of this world-view was Galileo's observations with his new instrument (the telescope).
+So the physical insight, which is what astronomers and physicists are interested in (as opposed to mathematicians and engineers, who just like proving and optimizing or calculating!), comes from being creative and not limiting ourselves to such approximations, even when they work.
 
 @node Circles and the complex plane, Fourier series, Fourier series historical 
background, Frequency domain and Fourier operations
 @subsubsection Circles and the complex plane
-Before going onto the derivation, it is also useful to review how the
-complex numbers and their plane relate to the circles we talked about
-above. The two schematics in the middle and right of @ref{epicycle}
-show how a 1D function of time can be made using the 2D real and
-imaginary surface. Seeing the animation in Wikipedia will really help
-in understanding this important concept. At each point in time, we
-take the vertical coordinate of the point and use it to find the value
-of the function at that point in time. @ref{iandtime} shows this
-relation with the axes marked.
+Before going on to the derivation, it is also useful to review how the complex numbers and their plane relate to the circles we talked about above.
+The two schematics in the middle and right of @ref{epicycle} show how a 1D function of time can be made using the 2D real and imaginary surface.
+Seeing the animations on Wikipedia will really help in understanding this important concept.
+At each point in time, we take the vertical coordinate of the point and use it to find the value of the function at that point in time.
+@ref{iandtime} shows this relation with the axes marked.
 
 @cindex Roger Cotes
 @cindex Cotes, Roger
@@ -14270,81 +10529,46 @@ relation with the axes marked.
 @cindex Euler, Leonhard
 @cindex Abraham de Moivre
 @cindex de Moivre, Abraham
-Leonhard Euler@footnote{Other forms of this equation were known before
-Euler. For example in 1707 A.D. (the year of Euler's birth) Abraham de
-Moivre (1667 -- 1754 A.D.)  showed that
-@mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}. In 1714 A.D., Roger Cotes
-(1682 -- 1716 A.D. a colleague of Newton who proofread the second edition
-of Principia) showed that: @mymath{ix=\ln(\cos{x}+i\sin{x})}.}  (1707 --
-1783 A.D.)  showed that the complex exponential (@mymath{e^{iv}} where
-@mymath{v} is real) is periodic and can be written as:
-@mymath{e^{iv}=\cos{v}+isin{v}}. Therefore
-@mymath{e^{iv+2\pi}=e^{iv}}. Later, Caspar Wessel (mathematician and
-cartographer 1745 -- 1818 A.D.)  showed how complex numbers can be
-displayed as vectors on a plane. Euler's identity might seem counter
-intuitive at first, so we will try to explain it geometrically (for deeper
-physical insight). On the real-imaginary 2D plane (like the left hand plot
-in each box of @ref{iandtime}), multiplying a number by @mymath{i} can be
-interpreted as rotating the point by @mymath{90} degrees (for example the
-value @mymath{3} on the real axis becomes @mymath{3i} on the imaginary
-axis). On the other hand,
-@mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, therefore,
-defining @mymath{m\equiv nu}, we get:
+Leonhard Euler@footnote{Other forms of this equation were known before Euler.
+For example, in 1707 A.D. (the year of Euler's birth) Abraham de Moivre (1667 -- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
+In 1714 A.D., Roger Cotes (1682 -- 1716 A.D., a colleague of Newton who proofread the second edition of Principia) showed that @mymath{ix=\ln(\cos{x}+i\sin{x})}.} (1707 -- 1783 A.D.) showed that the complex exponential (@mymath{e^{iv}} where @mymath{v} is real) is periodic and can be written as @mymath{e^{iv}=\cos{v}+i\sin{v}}.
+Therefore @mymath{e^{i(v+2\pi)}=e^{iv}}.
+Later, Caspar Wessel (mathematician and cartographer, 1745 -- 1818 A.D.) showed how complex numbers can be displayed as vectors on a plane.
+Euler's identity might seem counterintuitive at first, so we will try to explain it geometrically (for deeper physical insight).
+On the real-imaginary 2D plane (like the left hand plot in each box of @ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as rotating the point by @mymath{90} degrees (for example, the value @mymath{3} on the real axis becomes @mymath{3i} on the imaginary axis).
+On the other hand, @mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, therefore, defining @mymath{m\equiv nu}, we get:
 
 @dispmath{e^{u}=\lim_{n\rightarrow\infty}\left(1+{1\over n}\right)^{nu}
                =\lim_{n\rightarrow\infty}\left(1+{u\over nu}\right)^{nu}
                =\lim_{m\rightarrow\infty}\left(1+{u\over m}\right)^{m}}
 
 @noindent
-Taking @mymath{u\equiv iv} the result can be written as a generic complex
-number (a function of @mymath{v}):
+Taking @mymath{u\equiv iv}, the result can be written as a generic complex number (a function of @mymath{v}):
 
 @dispmath{e^{iv}=\lim_{m\rightarrow\infty}\left(1+i{v\over
                 m}\right)^{m}=a(v)+ib(v)}
 
 @noindent
-For @mymath{v=\pi}, a nice geometric animation of going to the limit can be
-seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on
-Wikipedia}. We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while
-@mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous
-@mymath{e^{i\pi}=-1} equation. The final value is the real number
-@mymath{-1}, however the distance of the polygon points traversed as
-@mymath{m\rightarrow\infty} is half the circumference of a circle or
-@mymath{\pi}, showing how @mymath{v} in the equation above can be
-interpreted as an angle in units of radians and therefore how
-@mymath{a(v)=cos(v)} and @mymath{b(v)=sin(v)}.
-
-Since @mymath{e^{iv}} is periodic (let's assume with a period of
-@mymath{T}), it is more clear to write it as @mymath{v\equiv{2{\pi}n\over
-T}t} (where @mymath{n} is an integer), so @mymath{e^{iv}=e^{i{2{\pi}n\over
-T}t}}. The advantage of this notation is that the period (@mymath{T}) is
-clearly visible and the frequency (@mymath{2{\pi}n \over T}, in units of
-1/cycle) is defined through the integer @mymath{n}. In this notation,
-@mymath{t} is in units of ``cycle''s.
-
-As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each
-constituting frequency, we need a respective `magnitude' or the radius of
-the circle in order to accurately approximate the desired 1D function. The
-concepts of ``period'' and ``frequency'' are relatively easy to grasp when
-using temporal units like time because this is how we define them in
-every-day life. However, in an image (astronomical data), we are dealing
-with spatial units like distance. Therefore, by one ``period'' we mean the
-@emph{distance} at which the signal is identical and frequency is defined
-as the inverse of that spatial ``period''.  The complex circle of
-@ref{iandtime} can be thought of the Moon rotating about Earth which is
-rotating around the Sun; so the ``Real (signal)'' axis shows the Moon's
-position as seen by a distant observer on the Sun as time goes by.  Because
-of the scalar (not having any direction or vector) nature of time,
-@ref{iandtime} is easier to understand in units of time. When thinking
-about spatial units, mentally replace the ``Time (sec)'' axis with
-``Distance (meters)''. Because length has direction and is a vector,
-visualizing the rotation of the imaginary circle and the advance along the
-``Distance (meters)'' axis is not as simple as temporal units like time.
+For @mymath{v=\pi}, a nice geometric animation of going to the limit can be seen @url{https://commons.wikimedia.org/wiki/File:ExpIPi.gif, on Wikipedia}.
+We see that @mymath{\lim_{m\rightarrow\infty}a(\pi)=-1}, while @mymath{\lim_{m\rightarrow\infty}b(\pi)=0}, which gives the famous @mymath{e^{i\pi}=-1} equation.
+The final value is the real number @mymath{-1}; however, the distance traversed along the polygon points as @mymath{m\rightarrow\infty} is half the circumference of a circle, or @mymath{\pi}, showing how @mymath{v} in the equation above can be interpreted as an angle in units of radians, and therefore how @mymath{a(v)=\cos(v)} and @mymath{b(v)=\sin(v)}.
+
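+This limit is also easy to check numerically; the minimal sketch below (in Python with NumPy, which we only assume here for illustration; it is not part of Gnuastro) watches @mymath{(1+i\pi/m)^m} approach @mymath{-1} as @mymath{m} grows:
+
+@example
+import numpy as np
+
+# Approach e^(i*pi) through the limit definition (1 + i*v/m)^m.
+v = np.pi
+for m in [10, 1000, 100000]:
+    print(m, (1 + 1j * v / m) ** m)   # real part -> -1, imaginary -> 0
+
+print(np.exp(1j * np.pi))             # direct evaluation for comparison
+@end example
+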
+Since @mymath{e^{iv}} is periodic (let's assume with a period of @mymath{T}), it is clearer to write it as @mymath{v\equiv{2{\pi}n\over T}t} (where @mymath{n} is an integer), so @mymath{e^{iv}=e^{i{2{\pi}n\over T}t}}.
+The advantage of this notation is that the period (@mymath{T}) is clearly visible and the frequency (@mymath{2{\pi}n \over T}, in units of 1/cycle) is defined through the integer @mymath{n}.
+In this notation, @mymath{t} is in units of ``cycle''s.
+
+As we see from the examples in @ref{epicycle} and @ref{iandtime}, for each constituting frequency, we need a respective `magnitude' or the radius of the circle in order to accurately approximate the desired 1D function.
+The concepts of ``period'' and ``frequency'' are relatively easy to grasp when using temporal units like time, because this is how we define them in every-day life.
+However, in an image (astronomical data), we are dealing with spatial units like distance.
+Therefore, by one ``period'' we mean the @emph{distance} at which the signal is identical, and frequency is defined as the inverse of that spatial ``period''.
+The complex circle of @ref{iandtime} can be thought of as the Moon rotating about the Earth, which is itself rotating around the Sun; the ``Real (signal)'' axis then shows the Moon's position as seen by a distant observer on the Sun as time goes by.
+Because of the scalar (not having any direction or vector) nature of time, @ref{iandtime} is easier to understand in units of time.
+When thinking about spatial units, mentally replace the ``Time (sec)'' axis with ``Distance (meters)''.
+Because length has direction and is a vector, visualizing the rotation of the imaginary circle and the advance along the ``Distance (meters)'' axis is not as simple as with temporal units like time.
 
 @float Figure,iandtime
-@image{gnuastro-figures/iandtime, 15.2cm, , } @caption{Relation
-between the real (signal), imaginary (@mymath{i\equiv\sqrt{-1}}) and
-time axes at two snapshots of time.}
+@image{gnuastro-figures/iandtime, 15.2cm, , }
+@caption{Relation between the real (signal), imaginary 
(@mymath{i\equiv\sqrt{-1}}) and time axes at two snapshots of time.}
 @end float
 
 
@@ -14352,62 +10576,42 @@ time axes at two snapshots of time.}
 
 @node Fourier series, Fourier transform, Circles and the complex plane, 
Frequency domain and Fourier operations
 @subsubsection Fourier series
-In astronomical images, our variable (brightness, or number of
-photo-electrons, or signal to be more generic) is recorded over the 2D
-spatial surface of a camera pixel. However to make things easier to
-understand, here we will assume that the signal is recorded in 1D
-(assume one row of the 2D image pixels). Also for this section and the
-next (@ref{Fourier transform}) we will be talking about the signal
-before it is digitized or pixelated. Let's assume that we have the
-continuous function @mymath{f(l)} which is integrable in the interval
-@mymath{[l_0, l_0+L]} (always true in practical cases like
-images). Take @mymath{l_0} as the position of the first pixel in the
-assumed row of the image and @mymath{L} as the width of the image
-along that row. The units of @mymath{l_0} and @mymath{L} can be in any
-spatial units (for example meters) or an angular unit (like radians)
-multiplied by a fixed distance which is more common.
-
-To approximate @mymath{f(l)} over this interval, we need to find a set
-of frequencies and their corresponding `magnitude's (see @ref{Circles
-and the complex plane}). Therefore our aim is to show @mymath{f(l)} as
-the following sum of periodic functions:
+In astronomical images, our variable (brightness, or number of photo-electrons, or signal to be more generic) is recorded over the 2D spatial surface of a camera pixel.
+However, to make things easier to understand, here we will assume that the signal is recorded in 1D (assume one row of the 2D image pixels).
+Also, for this section and the next (@ref{Fourier transform}) we will be talking about the signal before it is digitized or pixelated.
+Let's assume that we have the continuous function @mymath{f(l)} which is integrable in the interval @mymath{[l_0, l_0+L]} (always true in practical cases like images).
+Take @mymath{l_0} as the position of the first pixel in the assumed row of the image and @mymath{L} as the width of the image along that row.
+The units of @mymath{l_0} and @mymath{L} can be any spatial unit (for example meters) or, more commonly, an angular unit (like radians) multiplied by a fixed distance.
+
+To approximate @mymath{f(l)} over this interval, we need to find a set of 
frequencies and their corresponding `magnitude's (see @ref{Circles and the 
complex plane}).
+Therefore our aim is to show @mymath{f(l)} as the following sum of periodic 
functions:
 
 
 @dispmath{
 f(l)=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over L}l} }
 
 @noindent
-Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles
-per meters for example) are not arbitrary. They are all integer multiples
-of the fundamental frequency of @mymath{\omega_0=2\pi/L}. Recall that
-@mymath{L} was the length of the signal we want to model. Therefore, we see
-that the smallest possible frequency (or the frequency resolution) in the
-end, depends on the length we observed the signal or @mymath{L}. In the
-case of each dimension on an image, this is the size of the image in the
-respective dimension. The frequencies have been defined in this
-``harmonic'' fashion to insure that the final sum is periodic outside of
-the @mymath{[l_0, l_0+L]} interval too. At this point, you might be
-thinking that the sky is not periodic with the same period as my camera's
-view angle. You are absolutely right! The important thing is that since
-your camera's observed region is the only region we are ``observing'' and
-will be using, the rest of the sky is irrelevant; so we can safely assume
-the sky is periodic outside of it. However, this working assumption will
-haunt us later in @ref{Edges in the frequency domain}.
-
-The frequencies are thus determined by definition. So all we need to
-do is to find the coefficients (@mymath{c_n}), or magnitudes, or radii
-of the circles for each frequency which is identified with the integer
-@mymath{n}.  Fourier's approach was to multiply both sides with a
-fixed term:
+Note that the different frequencies (@mymath{2{\pi}n/L}, in units of cycles per meter for example) are not arbitrary.
+They are all integer multiples of the fundamental frequency @mymath{\omega_0=2\pi/L}.
+Recall that @mymath{L} was the length of the signal we want to model.
+Therefore, we see that the smallest possible frequency (or the frequency resolution) in the end depends on the length over which we observed the signal, or @mymath{L}.
+In the case of each dimension of an image, this is the size of the image in the respective dimension.
+The frequencies have been defined in this ``harmonic'' fashion to ensure that the final sum is periodic outside of the @mymath{[l_0, l_0+L]} interval too.
+At this point, you might be thinking that the sky is not periodic with the same period as my camera's view angle.
+You are absolutely right!
+The important thing is that since your camera's observed region is the only region we are ``observing'' and will be using, the rest of the sky is irrelevant; so we can safely assume the sky is periodic outside of it.
+However, this working assumption will haunt us later in @ref{Edges in the frequency domain}.
+
+The frequencies are thus determined by definition.
+So all we need to do is to find the coefficients (@mymath{c_n}), or 
magnitudes, or radii of the circles for each frequency which is identified with 
the integer @mymath{n}.
+Fourier's approach was to multiply both sides with a fixed term:
 
 @dispmath{
 f(l)e^{-i{2{\pi}m\over 
L}l}=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}(n-m)\over L}l}
 }
 
 @noindent
-where @mymath{m>0}@footnote{ We could have assumed @mymath{m<0} and
-set the exponential to positive, but this is more clear.}. We can then
-integrate both sides over the observation period:
+where @mymath{m>0}@footnote{We could have assumed @mymath{m<0} and set the exponential to positive, but this form is clearer.}.
+We can then integrate both sides over the observation period:
 
 @dispmath{
 \int_{l_0}^{l_0+L}f(l)e^{-i{2{\pi}m\over L}l}dl
@@ -14415,22 +10619,19 @@ integrate both sides over the observation period:
 }
 
 @noindent
-Both @mymath{n} and @mymath{m} are positive integers. Also, we know
-that a complex exponential is periodic so after one period
-(@mymath{L}) it comes back to its starting point. Therefore
-@mymath{\int_{l_0}^{l_0+L}e^{2{\pi}k/L}dl=0} for any @mymath{k>0}.
-However, when @mymath{k=0}, this integral becomes:
-@mymath{\int_{l_0}^{l_0+T}e^0dt=\int_{l_0}^{l_0+T}dt=T}. Hence since
-the integral will be zero for all @mymath{n{\neq}m}, we get:
+Here @mymath{m} is a positive integer and @mymath{n} runs over all integers.
+Also, we know that a complex exponential is periodic, so after one period (@mymath{L}) it comes back to its starting point.
+Therefore @mymath{\int_{l_0}^{l_0+L}e^{i{2{\pi}k\over L}l}dl=0} for any integer @mymath{k\neq0}.
+However, when @mymath{k=0}, this integral becomes @mymath{\int_{l_0}^{l_0+L}e^0dl=\int_{l_0}^{l_0+L}dl=L}.
+Hence, since the integral will be zero for all @mymath{n{\neq}m}, we get:
 
@dispmath{
\displaystyle\sum_{n=-\infty}^{\infty}c_n\int_{l_0}^{l_0+L}e^{i{2{\pi}(n-m)\over L}l}dl=Lc_m }
 
 @noindent
-The origin of the axis is fundamentally an arbitrary position. So
-let's set it to the start of the image such that @mymath{l_0=0}. So we
-can find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within
-@mymath{f(l)} through the relation:
+The origin of the axis is fundamentally an arbitrary position.
+So let's set it to the start of the image, such that @mymath{l_0=0}.
+We can then find the ``magnitude'' of the frequency @mymath{2{\pi}m/L} within @mymath{f(l)} through the relation:
 
 @dispmath{ c_m={1\over L}\int_{0}^{L}f(l)e^{-i{2{\pi}m\over L}l}dl }
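+
+As a rough numerical illustration of this relation (a minimal sketch in Python with NumPy, used here purely for demonstration and not part of Gnuastro), we can approximate the integral with a simple Riemann sum for a square wave and synthesize the function back from a truncated series:
+
+@example
+import numpy as np
+
+L = 1.0                                    # length of the interval
+l = np.linspace(0, L, 1000, endpoint=False)
+dl = l[1] - l[0]
+f = np.where(l < L / 2, 1.0, 0.0)          # a square wave as f(l)
+
+def c(m):
+    # c_m = (1/L) * integral of f(l) exp(-i 2 pi m l / L) over one
+    # period, approximated here with a simple Riemann sum.
+    return np.sum(f * np.exp(-2j * np.pi * m * l / L)) * dl / L
+
+# Synthesize f(l) back from a truncated sum of the series.
+approx = sum(c(m) * np.exp(2j * np.pi * m * l / L) for m in range(-50, 51))
+print(np.abs(approx.real - f).mean())      # small; larger near the jumps
+@end example
+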
 
@@ -14438,18 +10639,11 @@ can find the ``magnitude'' of the frequency 
@mymath{2{\pi}m/L} within
 
 @node Fourier transform, Dirac delta and comb, Fourier series, Frequency 
domain and Fourier operations
 @subsubsection Fourier transform
-In @ref{Fourier series}, we had to assume that the function is
-periodic outside of the desired interval with a period of
-@mymath{L}. Therefore, assuming that @mymath{L\rightarrow\infty} will
-allow us to work with any function. However, with this approximation,
-the fundamental frequency (@mymath{\omega_0}) or the frequency
-resolution that we discussed in @ref{Fourier series} will tend to
-zero: @mymath{\omega_0\rightarrow0}. In the equation to find
-@mymath{c_m}, every @mymath{m} represented a frequency (multiple of
-@mymath{\omega_0}) and the integration on @mymath{l} removes the
-dependence of the right side of the equation on @mymath{l}, making it
-only a function of @mymath{m} or frequency. Let's define the following
-two variables:
+In @ref{Fourier series}, we had to assume that the function is periodic 
outside of the desired interval with a period of @mymath{L}.
+Therefore, assuming that @mymath{L\rightarrow\infty} will allow us to work 
with any function.
+However, with this approximation, the fundamental frequency 
(@mymath{\omega_0}) or the frequency resolution that we discussed in 
@ref{Fourier series} will tend to zero: @mymath{\omega_0\rightarrow0}.
+In the equation to find @mymath{c_m}, every @mymath{m} represented a frequency 
(multiple of @mymath{\omega_0}) and the integration on @mymath{l} removes the 
dependence of the right side of the equation on @mymath{l}, making it only a 
function of @mymath{m} or frequency.
+Let's define the following two variables:
 
 @dispmath{\omega{\equiv}m\omega_0={2{\pi}m\over L}}
 
@@ -14459,22 +10653,13 @@ two variables:
 The equation to find the coefficients of each frequency in
 @ref{Fourier series} thus becomes:
 
-@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.  }
+@dispmath{ F(\omega)=\int_{-\infty}^{\infty}f(l)e^{-i{\omega}l}dl.}
 
 @noindent
-The function @mymath{F(\omega)} is thus the @emph{Fourier transform}
-of @mymath{f(l)} in the frequency domain. So through this
-transformation, we can find (analyze) the magnitudes of the
-constituting frequencies or the value in the frequency
-space@footnote{As we discussed before, this `magnitude' can be
-interpreted as the radius of the circle rotating at this frequency in
-the epicyclic interpretation of the Fourier series, see @ref{epicycle}
-and @ref{iandtime}.}  of our spatial input function. The great thing
-is that we can also do the reverse and later synthesize the input
-function from its Fourier transform. Let's do it: with the
-approximations above, multiply the right side of the definition of the
-Fourier Series (@ref{Fourier series}) with
-@mymath{1=L/L=({\omega_0}L)/(2\pi)}:
+The function @mymath{F(\omega)} is thus the @emph{Fourier transform} of 
@mymath{f(l)} in the frequency domain.
+So through this transformation, we can find (analyze) the magnitudes of the 
constituting frequencies or the value in the frequency space@footnote{As we 
discussed before, this `magnitude' can be interpreted as the radius of the 
circle rotating at this frequency in the epicyclic interpretation of the 
Fourier series, see @ref{epicycle} and @ref{iandtime}.}  of our spatial input 
function.
+The great thing is that we can also do the reverse and later synthesize the 
input function from its Fourier transform.
+Let's do it: with the approximations above, multiply the right side of the 
definition of the Fourier Series (@ref{Fourier series}) with 
@mymath{1=L/L=({\omega_0}L)/(2\pi)}:
 
 @dispmath{ f(l)={1\over
 2\pi}\displaystyle\sum_{n=-\infty}^{\infty}Lc_ne^{{2{\pi}in\over
@@ -14484,53 +10669,40 @@ L}l}\omega_0={1\over
 
 
 @noindent
-To find the right most side of this equation, we renamed
-@mymath{\omega_0} as @mymath{\Delta\omega} because it was our
-resolution, @mymath{2{\pi}n/L} was written as @mymath{\omega} and
-finally, @mymath{Lc_n} was written as @mymath{F(\omega)} as we defined
-above. Now, as @mymath{L\rightarrow\infty},
-@mymath{\Delta\omega\rightarrow0} so we can write:
+To find the right most side of this equation, we renamed @mymath{\omega_0} as 
@mymath{\Delta\omega} because it was our resolution, @mymath{2{\pi}n/L} was 
written as @mymath{\omega} and finally, @mymath{Lc_n} was written as 
@mymath{F(\omega)} as we defined above.
+Now, as @mymath{L\rightarrow\infty}, @mymath{\Delta\omega\rightarrow0} so we 
can write:
 
 @dispmath{ f(l)={1\over
   2\pi}\int_{-\infty}^{\infty}F(\omega)e^{i{\omega}l}d\omega }
 
-Together, these two equations provide us with a very powerful set of
-tools that we can use to process (analyze) and recreate (synthesize)
-the input signal. Through the first equation, we can break up our
-input function into its constituent frequencies and analyze it, hence
-it is also known as @emph{analysis}. Using the second equation, we can
-synthesize or make the input function from the known frequencies and
-their magnitudes. Thus it is known as @emph{synthesis}. Here, we
-symbolize the Fourier transform (analysis) and its inverse (synthesis)
-of a function @mymath{f(l)} and its Fourier Transform
-@mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
+Together, these two equations provide us with a very powerful set of tools 
that we can use to process (analyze) and recreate (synthesize) the input signal.
+Through the first equation, we can break up our input function into its 
constituent frequencies and analyze it, hence it is also known as 
@emph{analysis}.
+Using the second equation, we can synthesize or make the input function from 
the known frequencies and their magnitudes.
+Thus it is known as @emph{synthesis}.
+Here, we symbolize the Fourier transform (analysis) and its inverse 
(synthesis) of a function @mymath{f(l)} and its Fourier Transform 
@mymath{F(\omega)} as @mymath{{\cal F}[f]} and @mymath{{\cal F}^{-1}[F]}.
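+
+On sampled data, this analysis/synthesis pair can be demonstrated with any Fast Fourier Transform implementation; in the minimal sketch below (assuming Python with NumPy, purely for illustration) the round trip recovers the input to machine precision:
+
+@example
+import numpy as np
+
+rng = np.random.default_rng(1)
+f = rng.normal(size=256)        # any sampled 1D signal
+
+F = np.fft.fft(f)               # analysis: spatial to frequency domain
+f_back = np.fft.ifft(F)         # synthesis: frequency back to spatial
+
+print(np.allclose(f, f_back))   # True: the round trip is lossless
+@end example
+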
 
 
 @node Dirac delta and comb, Convolution theorem, Fourier transform, Frequency 
domain and Fourier operations
 @subsubsection Dirac delta and comb
 
-The Dirac @mymath{\delta} (delta) function (also known as an impulse)
-is the way that we convert a continuous function into a discrete
-one. It is defined to satisfy the following integral:
+The Dirac @mymath{\delta} (delta) function (also known as an impulse) is the 
way that we convert a continuous function into a discrete one.
+It is defined to satisfy the following integral:
 
 @dispmath{\int_{-\infty}^{\infty}\delta(l)dl=1}
 
 @noindent
-When integrated with another function, it gives that function's value
-at @mymath{l=0}:
+When integrated with another function, it gives that function's value at 
@mymath{l=0}:
 
@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l)dl=f(0)}
 
 @noindent
-An impulse positioned at another point (say @mymath{l_0}) is written
-as @mymath{\delta(l-l_0)}:
+An impulse positioned at another point (say @mymath{l_0}) is written as 
@mymath{\delta(l-l_0)}:
 
@dispmath{\int_{-\infty}^{\infty}f(l)\delta(l-l_0)dl=f(l_0)}
 
 @noindent
-The Dirac @mymath{\delta} function also operates similarly if we use
-summations instead of integrals. The Fourier transform of the delta
-function is:
+The Dirac @mymath{\delta} function also operates similarly if we use 
summations instead of integrals.
+The Fourier transform of the delta function is:
 
 @dispmath{{\cal 
F}[\delta(l)]=\int_{-\infty}^{\infty}\delta(l)e^{-i{\omega}l}dl=e^{-i{\omega}0}=1}
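+
+The discrete analog of this result is easy to verify (again a Python/NumPy sketch, only for illustration): the transform of a single impulse contains every frequency with the same (unit) magnitude:
+
+@example
+import numpy as np
+
+delta = np.zeros(64)
+delta[0] = 1.0                  # a discrete impulse at the origin
+
+# Its spectrum is 1 at every frequency, as derived above.
+print(np.allclose(np.fft.fft(delta), 1.0))   # True
+@end example
+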
 
@@ -14546,18 +10718,12 @@ impulses separated by @mymath{P}:
 
 
 @noindent
-@mymath{P} is chosen to represent ``pixel width'' later in
-@ref{Sampling theorem}. Therefore the Dirac comb is periodic with a
-period of @mymath{P}. We have intentionally used a different name for
-the period of the Dirac comb compared to the input signal's length of
-observation that we showed with @mymath{L} in @ref{Fourier
-series}. This difference is highlighted here to avoid confusion later
-when these two periods are needed together in @ref{Discrete Fourier
-transform}. The Fourier transform of the Dirac comb will be necessary
-in @ref{Sampling theorem}, so let's derive it. By its definition, it
-is periodic, with a period of @mymath{P}, so the Fourier coefficients
-of its Fourier Series (@ref{Fourier series}) can be calculated within
-one period:
+@mymath{P} is chosen to represent ``pixel width'' later in @ref{Sampling 
theorem}.
+Therefore the Dirac comb is periodic with a period of @mymath{P}.
+We have intentionally used a different name for the period of the Dirac comb 
compared to the input signal's length of observation that we showed with 
@mymath{L} in @ref{Fourier series}.
+This difference is highlighted here to avoid confusion later when these two 
periods are needed together in @ref{Discrete Fourier transform}.
+The Fourier transform of the Dirac comb will be necessary in @ref{Sampling 
theorem}, so let's derive it.
+By its definition, it is periodic, with a period of @mymath{P}, so the Fourier 
coefficients of its Fourier Series (@ref{Fourier series}) can be calculated 
within one period:
 
 @dispmath{{\rm 
III}_P=\displaystyle\sum_{n=-\infty}^{\infty}c_ne^{i{2{\pi}n\over
 P}l}}
@@ -14581,15 +10747,9 @@ So we can write the Fourier transform of the Dirac 
comb as:
 
 
 @noindent
-In the last step, we used the fact that the complex exponential is a
-periodic function, that @mymath{n} is an integer and that as we
-defined in @ref{Fourier transform}, @mymath{\omega{\equiv}m\omega_0},
-where @mymath{m} was an integer. The integral will be zero for any
-@mymath{\omega} that is not equal to @mymath{2{\pi}n/P}, a more
-complete explanation can be seen in @ref{Fourier series}. Therefore,
-while in the spatial domain the impulses had spacing of @mymath{P}
-(meters for example), in the frequency space, the spacing between the
-different impulses are @mymath{2\pi/P} cycles per meters.
+In the last step, we used the fact that the complex exponential is a periodic function, that @mymath{n} is an integer, and that, as we defined in @ref{Fourier transform}, @mymath{\omega{\equiv}m\omega_0}, where @mymath{m} was an integer.
+The integral will be zero for any @mymath{\omega} that is not equal to @mymath{2{\pi}n/P}; a more complete explanation can be seen in @ref{Fourier series}.
+Therefore, while in the spatial domain the impulses had a spacing of @mymath{P} (meters for example), in the frequency space the spacing between the different impulses is @mymath{2\pi/P} cycles per meter.
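+
+The discrete analog of this comb-to-comb relation can also be seen numerically (a Python/NumPy sketch for illustration): an impulse train with a spacing of @mymath{P} samples transforms into an impulse train with a spacing of @mymath{X/P} freq-pixels:
+
+@example
+import numpy as np
+
+X, P = 240, 8                   # total samples and comb spacing in samples
+comb = np.zeros(X)
+comb[::P] = 1.0                 # impulses every P samples
+
+C = np.fft.fft(comb)
+peaks = np.nonzero(np.abs(C) > 1e-9)[0]
+print(peaks[:4])                # [ 0 30 60 90]: spacing of X/P freq-pixels
+print(np.abs(C[peaks]).max())   # every peak has the same magnitude (X/P)
+@end example
+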
 
 
 @node Convolution theorem, Sampling theorem, Dirac delta and comb, Frequency 
domain and Fourier operations
@@ -14603,9 +10763,8 @@ 
c(l)\equiv[f{\ast}h](l)=\int_{-\infty}^{\infty}f(\tau)h(l-\tau)d\tau
 }
 
 @noindent
-See @ref{Convolution process} for a more detailed physical (pixel
-based) interpretation of this definition. The Fourier transform of
-convolution (@mymath{C(\omega)}) can be written as:
+See @ref{Convolution process} for a more detailed physical (pixel based) 
interpretation of this definition.
+The Fourier transform of convolution (@mymath{C(\omega)}) can be written as:
 
 @dispmath{
   C(\omega)=\int_{-\infty}^{\infty}[f{\ast}h](l)e^{-i{\omega}l}dl=
@@ -14623,34 +10782,28 @@ becomes:
 }
 
 @noindent
-where @mymath{H(\omega)} is the Fourier transform of
-@mymath{h(l)}. Substituting this result for the inner integral above,
-we get:
+where @mymath{H(\omega)} is the Fourier transform of @mymath{h(l)}.
+Substituting this result for the inner integral above, we get:
 
 @dispmath{
 
C(\omega)=H(\omega)\int_{-\infty}^{\infty}f(\tau)e^{-i{\omega}\tau}d\tau=H(\omega)F(\omega)=F(\omega)H(\omega)
 }
 
 @noindent
-where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}. So
-multiplying the Fourier transform of two functions individually, we
-get the Fourier transform of their convolution. The convolution
-theorem also proves a relation between the convolutions in the
-frequency space. Let's define:
+where @mymath{F(\omega)} is the Fourier transform of @mymath{f(l)}.
+So multiplying the Fourier transforms of two functions individually, we get the Fourier transform of their convolution.
+The convolution theorem also proves a relation between convolutions in the frequency space.
+Let's define:
 
 @dispmath{D(\omega){\equiv}F(\omega){\ast}H(\omega)}
 
 @noindent
-Applying the inverse Fourier Transform or synthesis equation
-(@ref{Fourier transform}) to both sides and following the same steps
-above, we get:
+Applying the inverse Fourier Transform or synthesis equation (@ref{Fourier 
transform}) to both sides and following the same steps above, we get:
 
 @dispmath{d(l)=f(l)h(l)}
 
 @noindent
-Where @mymath{d(l)} is the inverse Fourier transform of
-@mymath{D(\omega)}. We can therefore re-write the two equations above
-formally as the convolution theorem:
+where @mymath{d(l)} is the inverse Fourier transform of @mymath{D(\omega)}.
+We can therefore re-write the two equations above formally as the convolution theorem:
 
 @dispmath{
   {\cal F}[f{\ast}h]={\cal F}[f]{\cal F}[h]
@@ -14660,24 +10813,15 @@ formally as the convolution theorem:
   {\cal F}[fh]={\cal F}[f]\ast{\cal F}[h]
 }
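+
+The first relation can be checked directly on sampled data (a minimal Python/NumPy sketch for illustration; note that the discrete transform implies @emph{circular} convolution):
+
+@example
+import numpy as np
+
+rng = np.random.default_rng(2)
+f = rng.normal(size=128)
+h = rng.normal(size=128)
+
+# Circular convolution, computed directly in the spatial domain.
+n = np.arange(f.size)
+conv = np.array([np.sum(f * h[(k - n) % f.size]) for k in range(f.size)])
+
+# The same result from a simple multiplication in the frequency domain.
+viafreq = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
+print(np.allclose(conv, viafreq))   # True
+@end example
+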
 
-Besides its usefulness in blurring an image by convolving it with a
-given kernel, the convolution theorem also enables us to do another
-very useful operation in data analysis: to match the blur (or PSF)
-between two images taken with different telescopes/cameras or under
-different atmospheric conditions. This process is also known as
-de-convolution. Let's take @mymath{f(l)} as the image with a narrower
-PSF (less blurry) and @mymath{c(l)} as the image with a wider PSF
-which appears more blurred. Also let's take @mymath{h(l)} to represent
-the kernel that should be convolved with the sharper image to create
-the more blurry image. Above, we proved the relation between these
-three images through the convolution theorem. But there, we assumed
-that @mymath{f(l)} and @mymath{h(l)} are known (given) and the
-convolved image is desired.
-
-In de-convolution, we have @mymath{f(l)} --the sharper image-- and
-@mymath{f*h(l)} --the more blurry image-- and we want to find the kernel
-@mymath{h(l)}. The solution is a direct result of the convolution
-theorem:
+Besides its usefulness in blurring an image by convolving it with a given 
kernel, the convolution theorem also enables us to do another very useful 
operation in data analysis: to match the blur (or PSF) between two images taken 
with different telescopes/cameras or under different atmospheric conditions.
+This process is also known as de-convolution.
+Let's take @mymath{f(l)} as the image with a narrower PSF (less blurry) and 
@mymath{c(l)} as the image with a wider PSF which appears more blurred.
+Also let's take @mymath{h(l)} to represent the kernel that should be convolved 
with the sharper image to create the more blurry image.
+Above, we proved the relation between these three images through the 
convolution theorem.
+But there, we assumed that @mymath{f(l)} and @mymath{h(l)} are known (given) 
and the convolved image is desired.
+
+In de-convolution, we have @mymath{f(l)} --the sharper image-- and 
@mymath{f*h(l)} --the more blurry image-- and we want to find the kernel 
@mymath{h(l)}.
+The solution is a direct result of the convolution theorem:
 
 @dispmath{
   {\cal F}[h]={{\cal F}[f{\ast}h]\over {\cal F}[f]}
@@ -14692,13 +10836,10 @@ While this works really nice, it has two problems:
 @itemize
 
 @item
-If @mymath{{\cal F}[f]} has any zero values, then the inverse Fourier
-transform will not be a number!
+If @mymath{{\cal F}[f]} has any zero values, then the inverse Fourier 
transform will not be a number!
 
 @item
-If there is significant noise in the image, then the high frequencies
-of the noise are going to significantly reduce the quality of the
-final result.
+If there is significant noise in the image, then the high frequencies of the 
noise are going to significantly reduce the quality of the final result.
 
 @end itemize
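+
+Both problems are easy to reproduce in a toy model; the sketch below (assuming Python/NumPy and made-up Gaussian profiles, purely for illustration) recovers the kernel well without noise, but fails badly once a tiny amount of noise is added:
+
+@example
+import numpy as np
+
+x = np.arange(128)
+f = np.exp(-0.5 * ((x - 64) / 2.0) ** 2)    # sharper image f(l)
+h = np.exp(-0.5 * ((x - 64) / 5.0) ** 2)    # kernel h(l) to be recovered
+h /= h.sum()
+blurred = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
+
+# Noise-free case: Fourier division recovers the kernel well.
+h_rec = np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(f)).real
+print(np.abs(h_rec - h).max())              # tiny residual
+
+# A little noise: division by small F[f] values amplifies it badly.
+rng = np.random.default_rng(3)
+noisy = blurred + rng.normal(scale=1e-6, size=blurred.size)
+h_bad = np.fft.ifft(np.fft.fft(noisy) / np.fft.fft(f)).real
+print(np.abs(h_bad - h).max())              # orders of magnitude larger
+@end example
+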
 
@@ -14708,14 +10849,9 @@ 
algorithm@footnote{@url{https://en.wikipedia.org/wiki/Wiener_deconvolution}}.
 @node Sampling theorem, Discrete Fourier transform, Convolution theorem, 
Frequency domain and Fourier operations
 @subsubsection Sampling theorem
 
-Our mathematical functions are continuous, however, our data
-collecting and measuring tools are discrete. Here we want to give a
-mathematical formulation for digitizing the continuous mathematical
-functions so that later, we can retrieve the continuous function from
-the digitized recorded input. Assuming that we have a continuous
-function @mymath{f(l)}, then we can define @mymath{f_s(l)} as the
-`sampled' @mymath{f(l)} through the Dirac comb (see @ref{Dirac delta
-and comb}):
+Our mathematical functions are continuous; however, our data collecting and measuring tools are discrete.
+Here we want to give a mathematical formulation for digitizing the continuous mathematical functions, so that later we can retrieve the continuous function from the digitized recorded input.
+Assuming that we have a continuous function @mymath{f(l)}, we can define @mymath{f_s(l)} as the `sampled' @mymath{f(l)} through the Dirac comb (see @ref{Dirac delta and comb}):
 
 @dispmath{
 f_s(l)=f(l){\rm III}_P=\displaystyle\sum_{n=-\infty}^{\infty}f(l)\delta(l-nP)
@@ -14727,15 +10863,11 @@ image), where @mymath{k} is an integer, can thus be 
represented as:
 
 
@dispmath{f_k=\int_{-\infty}^{\infty}f_s(l)dl=\int_{-\infty}^{\infty}f(l)\delta(l-kP)dl=f(kP)}
 
-Note that in practice, our discrete data points are not found in this
-fashion. Each detector pixel (in an image for example) has an area and
-averages the signal it receives over that area, not a mathematical point as
-the Dirac @mymath{\delta} function defines. However, as long as the
-variation in the signal over one detector pixel is not significant, this
-can be a good approximation. Having put this issue to the side, we can now
-try to find the relation between the Fourier transforms of the un-sampled
-@mymath{f(l)} and the sampled @mymath{f_s(l)}. For a more clear notation,
-let's define:
+Note that in practice, our discrete data points are not found in this fashion.
+Each detector pixel (in an image for example) has an area and averages the 
signal it receives over that area, not a mathematical point as the Dirac 
@mymath{\delta} function defines.
+However, as long as the variation in the signal over one detector pixel is not 
significant, this can be a good approximation.
+Having put this issue to the side, we can now try to find the relation between 
the Fourier transforms of the un-sampled @mymath{f(l)} and the sampled 
@mymath{f_s(l)}.
+For a more clear notation, let's define:
 
 @dispmath{F_s(\omega)\equiv{\cal F}[f_s]}
 
@@ -14759,115 +10891,69 @@ F_s(\omega) &= 
\int_{-\infty}^{\infty}F(\omega)D(\omega-\mu)d\mu \cr
    \omega-{2{\pi}n\over P}\right).\cr }
 }
 
-@mymath{F(\omega)} was only a simple function, see
-@ref{samplingfreq}(left). However, from the sampled Fourier transform
-function we see that @mymath{F_s(\omega)} is the superposition of
-infinite copies of @mymath{F(\omega)} that have been shifted, see
-@ref{samplingfreq}(right). From the equation, it is clear that the
-shift in each copy is @mymath{2\pi/P}.
+@mymath{F(\omega)} was only a simple function, see @ref{samplingfreq}(left).
+However, from the sampled Fourier transform function we see that 
@mymath{F_s(\omega)} is the superposition of infinite copies of 
@mymath{F(\omega)} that have been shifted, see @ref{samplingfreq}(right).
+From the equation, it is clear that the shift in each copy is @mymath{2\pi/P}.
 
 @float Figure,samplingfreq
-@image{gnuastro-figures/samplingfreq, 15.2cm, , } @caption{Sampling
-    causes infinite repetition in the frequency domain. FT is an
-    abbreviation for `Fourier transform'. @mymath{\omega_m} represents
-    the maximum frequency present in the input. @mymath{F(\omega)} is
-    only symmetric on both sides of 0 when the input is real (not
-    complex). In general @mymath{F(\omega)} is complex and thus cannot
-    be simply plotted like this. Here we have assumed a real Gaussian
-    @mymath{f(t)} which has produced a Gaussian @mymath{F(\omega)}.}
+@image{gnuastro-figures/samplingfreq, 15.2cm, , }
+@caption{Sampling causes infinite repetition in the frequency domain.
+FT is an abbreviation for `Fourier transform'.
+@mymath{\omega_m} represents the maximum frequency present in the input.
+@mymath{F(\omega)} is only symmetric on both sides of 0 when the input is real (not complex).
+In general, @mymath{F(\omega)} is complex and thus cannot be simply plotted like this.
+Here we have assumed a real Gaussian @mymath{f(t)} which has produced a Gaussian @mymath{F(\omega)}.}
 @end float
 
-The input @mymath{f(l)} can have any distribution of frequencies in
-it. In the example of @ref{samplingfreq}(left), the input consisted of
-a range of frequencies equal to
-@mymath{\Delta\omega=2\omega_m}. Fortunately as
-@ref{samplingfreq}(right) shows, the assumed pixel size (@mymath{P})
-we used to sample this hypothetical function was such that
-@mymath{2\pi/P>\Delta\omega}. The consequence is that each copy of
-@mymath{F(\omega)} has become completely separate from the surrounding
-copies. Such a digitized (sampled) data set is thus called
-@emph{over-sampled}. When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is
-just small enough to finely separate even the largest frequencies in
-the input signal and thus it is known as
-@emph{critically-sampled}. Finally if @mymath{2\pi/P<\Delta\omega} we
-are dealing with an @emph{under-sampled} data set. In an under-sampled
-data set, the separate copies of @mymath{F(\omega)} are going to
-overlap and this will deprive us of recovering high constituent
-frequencies of @mymath{f(l)}. The effects of under-sampling in an
-image with high rates of change (for example a brick wall imaged from
-a distance) can clearly be visually seen and is known as
-@emph{aliasing}.
-
-When the input @mymath{f(l)} is composed of a finite range of
-frequencies, @mymath{f(l)} is known as a @emph{band-limited}
-function. The example in @ref{samplingfreq}(left) was a nice
-demonstration of such a case: for all @mymath{\omega<-\omega_m} or
-@mymath{\omega>\omega_m}, we have @mymath{F(\omega)=0}. Therefore,
-when the input function is band-limited and our detector's pixels are
-placed such that we have critically (or over-) sampled it, then we can
-exactly reproduce the continuous @mymath{f(l)} from the discrete or
-digitized samples. To do that, we just have to isolate one copy of
-@mymath{F(\omega)} from the infinite copies and take its inverse
-Fourier transform.
-
-This ability to exactly reproduce the continuous input from the
-sampled or digitized data leads us to the @emph{sampling theorem}
-which connects the inherent property of the continuous signal (its
-maximum frequency) to that of the detector (the spacing between its
-pixels). The sampling theorem states that the full (continuous) signal
-can be recovered when the pixel size (@mymath{P}) and the maximum
-constituent frequency in the signal (@mymath{\omega_m}) have the
-following relation@footnote{This equation is also shown in some places
-without the @mymath{2\pi}. Whether @mymath{2\pi} is included or not
-depends on how you define the frequency}:
+The input @mymath{f(l)} can have any distribution of frequencies in it.
+In the example of @ref{samplingfreq}(left), the input consisted of a range of 
frequencies equal to @mymath{\Delta\omega=2\omega_m}.
+Fortunately as @ref{samplingfreq}(right) shows, the assumed pixel size 
(@mymath{P}) we used to sample this hypothetical function was such that 
@mymath{2\pi/P>\Delta\omega}.
+The consequence is that each copy of @mymath{F(\omega)} has become completely 
separate from the surrounding copies.
+Such a digitized (sampled) data set is thus called @emph{over-sampled}.
+When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is just small enough to finely 
separate even the largest frequencies in the input signal and thus it is known 
as @emph{critically-sampled}.
+Finally if @mymath{2\pi/P<\Delta\omega} we are dealing with an 
@emph{under-sampled} data set.
+In an under-sampled data set, the separate copies of @mymath{F(\omega)} are 
going to overlap and this will deprive us of recovering high constituent 
frequencies of @mymath{f(l)}.
+The effects of under-sampling in an image with high rates of change (for example, a brick wall imaged from a distance) can clearly be seen visually and are known as @emph{aliasing}.
+
+When the input @mymath{f(l)} is composed of a finite range of frequencies, 
@mymath{f(l)} is known as a @emph{band-limited} function.
+The example in @ref{samplingfreq}(left) was a nice demonstration of such a 
case: for all @mymath{\omega<-\omega_m} or @mymath{\omega>\omega_m}, we have 
@mymath{F(\omega)=0}.
+Therefore, when the input function is band-limited and our detector's pixels 
are placed such that we have critically (or over-) sampled it, then we can 
exactly reproduce the continuous @mymath{f(l)} from the discrete or digitized 
samples.
+To do that, we just have to isolate one copy of @mymath{F(\omega)} from the 
infinite copies and take its inverse Fourier transform.
+
+This ability to exactly reproduce the continuous input from the sampled or 
digitized data leads us to the @emph{sampling theorem} which connects the 
inherent property of the continuous signal (its maximum frequency) to that of 
the detector (the spacing between its pixels).
+The sampling theorem states that the full (continuous) signal can be recovered when the pixel size (@mymath{P}) and the maximum constituent frequency in the signal (@mymath{\omega_m}) have the following relation@footnote{This equation is also shown in some places without the @mymath{2\pi}.
+Whether @mymath{2\pi} is included or not depends on how you define the frequency.}:
 
 @dispmath{{2\pi\over P}>2\omega_m}
 
 @noindent
-This relation was first formulated by Harry Nyquist (1889 -- 1976
-A.D.) in 1928 and formally proved in 1949 by Claude E. Shannon (1916
--- 2001 A.D.) in what is now known as the Nyquist-Shannon sampling
-theorem. In signal processing, the signal is produced (synthesized) by
-a transmitter and is received and de-coded (analyzed) by a
-receiver. Therefore producing a band-limited signal is necessary.
-
-In astronomy, we do not produce the shapes of our targets, we are only
-observers. Galaxies can have any shape and size, therefore ideally,
-our signal is not band-limited. However, since we are always confined
-to observing through an aperture, the aperture will cause a point
-source (for which @mymath{\omega_m=\infty}) to be spread over several
-pixels. This spread is quantitatively known as the point spread
-function or PSF. This spread does blur the image which is undesirable;
-however, for this analysis it produces the positive outcome that there
-will be a finite @mymath{\omega_m}. Though we should caution that any
-detector will have noise which will add lots of very high frequency
-(ideally infinite) changes between the pixels. However, the
-coefficients of those noise frequencies are usually exceedingly small.
+This relation was first formulated by Harry Nyquist (1889 -- 1976 A.D.) in 
1928 and formally proved in 1949 by Claude E. Shannon (1916 -- 2001 A.D.) in 
what is now known as the Nyquist-Shannon sampling theorem.
+In signal processing, the signal is produced (synthesized) by a transmitter 
and is received and de-coded (analyzed) by a receiver.
+Therefore producing a band-limited signal is necessary.
+
+In astronomy, we do not produce the shapes of our targets, we are only observers.
+Galaxies can have any shape and size; therefore, ideally, our signal is not band-limited.
+However, since we are always confined to observing through an aperture, the aperture will cause a point source (for which @mymath{\omega_m=\infty}) to be spread over several pixels.
+This spread is quantitatively known as the point spread function or PSF.
+This spread does blur the image, which is undesirable; however, for this analysis it produces the positive outcome that there will be a finite @mymath{\omega_m}.
+We should caution, though, that any detector will have noise, which will add lots of very high frequency (ideally infinite) changes between the pixels.
+However, the coefficients of those noise frequencies are usually exceedingly small.
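+
+For a hands-on feeling of aliasing (a Python/NumPy sketch with made-up numbers, for illustration only), sample one sine below and one above the limit set by the pixel size, and note where each peaks in the frequency domain:
+
+@example
+import numpy as np
+
+P = 1.0                           # pixel (sample) spacing
+t = np.arange(64) * P             # sample positions
+nyquist = np.pi / P               # largest recoverable angular frequency
+
+for w in (0.8 * nyquist, 1.5 * nyquist):
+    s = np.sin(w * t)
+    spec = np.abs(np.fft.rfft(s))
+    peak = np.argmax(spec) * 2 * np.pi / (t.size * P)
+    print(w, peak)                # the second (under-sampled) sine peaks
+                                  # at a wrong (aliased) frequency
+@end example
+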
 
 @node Discrete Fourier transform, Fourier operations in two dimensions, 
Sampling theorem, Frequency domain and Fourier operations
 @subsubsection Discrete Fourier transform
 
-As we have stated several times so far, the input image is a
-digitized, pixelated or discrete array of values (@mymath{f_s(l)}, see
-@ref{Sampling theorem}). The input is not a continuous function. Also,
-all our numerical calculations can only be done on a sampled, or
-discrete Fourier transform. Note that @mymath{F_s(\omega)} is not
-discrete, it is continuous. One way would be to find the analytic
-@mymath{F_s(\omega)}, then sample it at any desired
-``freq-pixel''@footnote{We are using the made-up word ``freq-pixel''
-so they are not confused with spatial domain ``pixels''.}
-spacing. However, this process would involve two steps of operations
-and computers in particular are not too good at analytic operations
-for the first step. So here, we will derive a method to directly find
-the `freq-pixel'ated @mymath{F_s(\omega)} from the pixelated
-@mymath{f_s(l)}. Let's start with the definition of the Fourier
-transform (see @ref{Fourier transform}):
+As we have stated several times so far, the input image is a digitized, 
pixelated or discrete array of values (@mymath{f_s(l)}, see @ref{Sampling 
theorem}).
+The input is not a continuous function.
+Also, all our numerical calculations can only be done on a sampled, or 
discrete Fourier transform.
+Note that @mymath{F_s(\omega)} is not discrete, it is continuous.
+One way would be to find the analytic @mymath{F_s(\omega)}, then sample it at 
any desired ``freq-pixel''@footnote{We are using the made-up word 
``freq-pixel'' so they are not confused with spatial domain ``pixels''.} 
spacing.
+However, this process would involve two steps of operations and computers in 
particular are not too good at analytic operations for the first step.
+So here, we will derive a method to directly find the `freq-pixel'ated 
@mymath{F_s(\omega)} from the pixelated @mymath{f_s(l)}.
+Let's start with the definition of the Fourier transform (see @ref{Fourier 
transform}):
 
 @dispmath{F_s(\omega)=\int_{-\infty}^{\infty}f_s(l)e^{-i{\omega}l}dl }
 
 @noindent
-From the definition of @mymath{f_s(\omega)} (using @mymath{x} instead
-of @mymath{n}) we get:
+From the definition of @mymath{f_s(l)} (using @mymath{x} instead of @mymath{n}) we get:
 
 @dispmath{
 \eqalign{
@@ -14879,14 +10965,11 @@ of @mymath{n}) we get:
 }
 
 @noindent
-Where @mymath{f_x} is the value of @mymath{f(l)} on the point
-@mymath{x} or the value of the @mymath{x}th pixel. As shown in
-@ref{Sampling theorem} this function is infinitely periodic with a
-period of @mymath{2\pi/P}. So all we need is the values within one
-period: @mymath{0<\omega<2\pi/P}, see @ref{samplingfreq}. We want
-@mymath{X} samples within this interval, so the frequency difference
-between each frequency sample or freq-pixel is @mymath{1/XP}. Hence we
-will evaluate the equation above on the points at:
+where @mymath{f_x} is the value of @mymath{f(l)} on the point @mymath{x}, or the value of the @mymath{x}th pixel.
+As shown in @ref{Sampling theorem}, this function is infinitely periodic with a period of @mymath{2\pi/P}.
+So all we need are the values within one period: @mymath{0<\omega<2\pi/P}, see @ref{samplingfreq}.
+We want @mymath{X} samples within this interval, so the frequency difference between each frequency sample or freq-pixel is @mymath{2\pi/XP}.
+Hence we will evaluate the equation above on the points at:
 
@dispmath{\omega={2{\pi}u\over XP} \quad\quad u = 0, 1, 2, ..., X-1}
 
@@ -14897,39 +10980,23 @@ domain is:
@dispmath{F_u=\displaystyle\sum_{x=0}^{X-1} f_xe^{-i{2{\pi}ux\over X}} }
 
 @noindent
-Therefore, we see that for each freq-pixel in the frequency domain, we
-are going to need all the pixels in the spatial domain@footnote{So
-even if one pixel is a blank pixel (see @ref{Blank pixels}), all the
-pixels in the frequency domain will also be blank.}. If the input
-(spatial) pixel row is also @mymath{X} pixels wide, then we can
-exactly recover the @mymath{x}th pixel with the following summation:
+Therefore, we see that for each freq-pixel in the frequency domain, we are 
going to need all the pixels in the spatial domain@footnote{So even if one 
pixel is a blank pixel (see @ref{Blank pixels}), all the pixels in the 
frequency domain will also be blank.}.
+If the input (spatial) pixel row is also @mymath{X} pixels wide, then we can 
exactly recover the @mymath{x}th pixel with the following summation:
 
@dispmath{f_x={1\over X}\displaystyle\sum_{u=0}^{X-1} F_ue^{i{2{\pi}ux\over X}} }
 
-When the input pixel row (we are still only working on 1D data) has
-@mymath{X} pixels, then it is @mymath{L=XP} spatial units
-wide. @mymath{L}, or the length of the input data was defined in
-@ref{Fourier series} and @mymath{P} or the space between the pixels in
-the input was defined in @ref{Dirac delta and comb}. As we saw in
-@ref{Sampling theorem}, the input (spatial) pixel spacing (@mymath{P})
-specifies the range of frequencies that can be studied and in
-@ref{Fourier series} we saw that the length of the (spatial) input,
-(@mymath{L}) determines the resolution (or size of the freq-pixels) in
-our discrete Fourier transformed image. Both result from the fact that
-the frequency domain is the inverse of the spatial domain.
+When the input pixel row (we are still only working on 1D data) has @mymath{X} 
pixels, then it is @mymath{L=XP} spatial units wide.
+@mymath{L}, or the length of the input data, was defined in @ref{Fourier series}, and @mymath{P}, or the space between the pixels in the input, was defined in @ref{Dirac delta and comb}.
+As we saw in @ref{Sampling theorem}, the input (spatial) pixel spacing (@mymath{P}) specifies the range of frequencies that can be studied, and in @ref{Fourier series} we saw that the length of the (spatial) input (@mymath{L}) determines the resolution (or size of the freq-pixels) in our discrete Fourier transformed image.
+Both result from the fact that the frequency domain is the inverse of the 
spatial domain.
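+
+As a hypothetical illustration of the two summations above, the short C program below computes @mymath{F_u} from a small pixel row and then recovers @mymath{f_x} from it.
+It is only a sketch with the naive @mymath{O(X^2)} sums (not Gnuastro's implementation, which uses the much faster FFT algorithm), and the @mymath{2\pi} of the standard DFT exponent is written explicitly so that the forward and inverse sums invert exactly:
+
+@example
+#include <stdio.h>
+#include <complex.h>
+#include <math.h>
+
+#define X 8                        /* Number of pixels in the row. */
+
+int
+main(void)
+@{
+  size_t x, u;
+  double f[X]=@{1,3,10,3,1,0,0,0@};  /* Input (spatial) pixel row.   */
+  double complex F[X], b[X];        /* Freq-pixels, reconstruction. */
+
+  /* Forward DFT: F_u = sum_x f_x e^@{-2 pi i u x / X@}. */
+  for(u=0;u<X;++u)
+    for(F[u]=0, x=0;x<X;++x)
+      F[u] += f[x]*cexp(-2*M_PI*I*(double)(u*x)/X);
+
+  /* Inverse DFT: f_x = (1/X) sum_u F_u e^@{+2 pi i u x / X@}. */
+  for(x=0;x<X;++x)
+    @{
+      for(b[x]=0, u=0;u<X;++u)
+        b[x] += F[u]*cexp(2*M_PI*I*(double)(u*x)/X);
+      b[x] /= X;
+      printf("f_%zu = %g  --> recovered: %g\n", x, f[x], creal(b[x]));
+    @}
+  return 0;
+@}
+@end example
+
+Note how every spatial pixel contributes to every freq-pixel, as the footnote above on blank pixels already implied.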
 
 @node Fourier operations in two dimensions, Edges in the frequency domain, 
Discrete Fourier transform, Frequency domain and Fourier operations
 @subsubsection Fourier operations in two dimensions
 
-Once all the relations in the previous sections have been clearly
-understood in one dimension, it is very easy to generalize them to two
-or even more dimensions since each dimension is by definition
-independent. Previously we defined @mymath{l} as the continuous
-variable in 1D and the inverse of the period in its direction to be
-@mymath{\omega}. Let's show the second spatial direction with
-@mymath{m} the inverse of the period in the second dimension with
-@mymath{\nu}. The Fourier transform in 2D (see @ref{Fourier
-transform}) can be written as:
+Once all the relations in the previous sections have been clearly understood 
in one dimension, it is very easy to generalize them to two or even more 
dimensions since each dimension is by definition independent.
+Previously we defined @mymath{l} as the continuous variable in 1D and the 
inverse of the period in its direction to be @mymath{\omega}.
+Let's show the second spatial direction with @mymath{m} and the inverse of the period in the second dimension with @mymath{\nu}.
+The Fourier transform in 2D (see @ref{Fourier transform}) can be written as:
 
 @dispmath{F(\omega, \nu)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
 f(l, m)e^{-i({\omega}l+{\nu}m)}dl\,dm}
@@ -14937,36 +11004,25 @@ f(l, m)e^{-i({\omega}l+{\nu}m)}dl}
 @dispmath{f(l, m)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
 F(\omega, \nu)e^{i({\omega}l+{\nu}m)}d\omega\,d\nu}
 
-The 2D Dirac @mymath{\delta(l,m)} is non-zero only when
-@mymath{l=m=0}.  The 2D Dirac comb (or Dirac brush! See @ref{Dirac
-delta and comb}) can be written in units of the 2D Dirac
-@mymath{\delta}. For most image detectors, the sides of a pixel are
-equal in both dimensions. So @mymath{P} remains unchanged, if a
-specific device is used which has non-square pixels, then for each
-dimension a different value should be used.
+The 2D Dirac @mymath{\delta(l,m)} is non-zero only when @mymath{l=m=0}.
+The 2D Dirac comb (or Dirac brush! See @ref{Dirac delta and comb}) can be 
written in units of the 2D Dirac @mymath{\delta}.
+For most image detectors, the sides of a pixel are equal in both dimensions.
+So @mymath{P} remains unchanged.
+If a specific device with non-square pixels is used, a different value should be used for each dimension.
 
 @dispmath{{\rm III}_P(l, m)\equiv\displaystyle\sum_{j=-\infty}^{\infty}
 \displaystyle\sum_{k=-\infty}^{\infty}
 \delta(l-jP, m-kP) }
 
-The Two dimensional Sampling theorem (see @ref{Sampling theorem}) is thus
-very easily derived as before since the frequencies in each dimension are
-independent. Let's take @mymath{\nu_m} as the maximum frequency along the
-second dimension. Therefore the two dimensional sampling theorem says that
-a 2D band-limited function can be recovered when the following conditions
-hold@footnote{If the pixels are not a square, then each dimension has to
-use the respective pixel size, but since most detectors have square pixels,
-we assume so here too}:
+The two-dimensional sampling theorem (see @ref{Sampling theorem}) is thus very easily derived as before since the frequencies in each dimension are independent.
+Let's take @mymath{\nu_m} as the maximum frequency along the second dimension.
+Therefore the two-dimensional sampling theorem says that a 2D band-limited function can be recovered when the following conditions hold@footnote{If the pixels are not square, then each dimension has to use the respective pixel size, but since most detectors have square pixels, we assume so here too.}:
 
 @dispmath{ {2\pi\over P} > 2\omega_m \quad\quad\quad {\rm and}
 \quad\quad\quad {2\pi\over P} > 2\nu_m}
 
-Finally, let's represent the pixel counter on the second dimension in
-the spatial and frequency domains with @mymath{y} and @mymath{v}
-respectively. Also let's assume that the input image has @mymath{Y}
-pixels on the second dimension. Then the two dimensional discrete
-Fourier transform and its inverse (see @ref{Discrete Fourier
-transform}) can be written as:
+Finally, let's represent the pixel counter on the second dimension in the 
spatial and frequency domains with @mymath{y} and @mymath{v} respectively.
+Also let's assume that the input image has @mymath{Y} pixels on the second 
dimension.
+Then the two dimensional discrete Fourier transform and its inverse (see 
@ref{Discrete Fourier transform}) can be written as:
 
 @dispmath{F_{u,v}=\displaystyle\sum_{x=0}^{X-1}\displaystyle\sum_{y=0}^{Y-1}
 f_{x,y}e^{-i({ux\over X}+{vy\over Y})} }
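+
+Because the exponential above factorizes over the two dimensions, this 2D transform is separable: a 1D DFT over every row, followed by a 1D DFT over every column of the result.
+The hypothetical C sketch below shows that factorization with the same naive sums (and explicit @mymath{2\pi}) as before; it only demonstrates the independence of the dimensions, real implementations use FFTs:
+
+@example
+#include <stdio.h>
+#include <complex.h>
+#include <math.h>
+
+#define X 4                     /* First-axis size.  */
+#define Y 4                     /* Second-axis size. */
+
+/* Naive 1D DFT of 'n' elements separated by 'stride', in place. */
+static void
+dft1d(double complex *d, size_t n, size_t stride)
+@{
+  size_t i, j;
+  double complex out[X>Y ? X : Y];
+  for(i=0;i<n;++i)
+    for(out[i]=0, j=0;j<n;++j)
+      out[i] += d[j*stride]*cexp(-2*M_PI*I*(double)(i*j)/n);
+  for(i=0;i<n;++i) d[i*stride]=out[i];
+@}
+
+int
+main(void)
+@{
+  size_t x, y;
+  double complex img[X*Y]=@{0@};
+  img[1*Y+2]=1;                 /* A single bright pixel. */
+
+  /* The full 2D transform: 1D DFTs along the second axis (rows),
+     then along the first axis (columns).                         */
+  for(x=0;x<X;++x) dft1d(img+x*Y, Y, 1);
+  for(y=0;y<Y;++y) dft1d(img+y, X, Y);
+
+  for(x=0;x<X;++x)
+    @{
+      for(y=0;y<Y;++y)
+        printf("%5.2f%+5.2fi  ", creal(img[x*Y+y]), cimag(img[x*Y+y]));
+      printf("\n");
+    @}
+  return 0;
+@}
+@end example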
@@ -14978,83 +11034,50 @@ F_{u,v}e^{i({ux\over X}+{vy\over Y})} }
 @node Edges in the frequency domain,  , Fourier operations in two dimensions, 
Frequency domain and Fourier operations
 @subsubsection Edges in the frequency domain
 
-With a good grasp of the frequency domain, we can revisit the problem
-of convolution on the image edges, see @ref{Edges in the spatial
-domain}.  When we apply the convolution theorem (see @ref{Convolution
-theorem}) to convolve an image, we first take the discrete Fourier
-transforms (DFT, @ref{Discrete Fourier transform}) of both the input
-image and the kernel, then we multiply them with each other and then
-take the inverse DFT to construct the convolved image. Of course, in
-order to multiply them with each other in the frequency domain, the
-two images have to be the same size, so let's assume that we pad the
-kernel (it is usually smaller than the input image) with zero valued
-pixels in both dimensions so it becomes the same size as the input
-image before the DFT.
-
-Having multiplied the two DFTs, we now apply the inverse DFT which is
-where the problem is usually created. If the DFT of the kernel only
-had values of 1 (unrealistic condition!) then there would be no
-problem and the inverse DFT of the multiplication would be identical
-with the input. However in real situations, the kernel's DFT has a
-maximum of 1 (because the sum of the kernel has to be one, see
-@ref{Convolution process}) and decreases something like the
-hypothetical profile of @ref{samplingfreq}. So when multiplied with
-the input image's DFT, the coefficients or magnitudes (see
-@ref{Circles and the complex plane}) of the smallest frequency (or the
-sum of the input image pixels) remains unchanged, while the magnitudes
-of the higher frequencies are significantly reduced.
-
-As we saw in @ref{Sampling theorem}, the Fourier transform of a
-discrete input will be infinitely repeated. In the final inverse DFT
-step, the input is in the frequency domain (the multiplied DFT of the
-input image and the kernel DFT). So the result (our output convolved
-image) will be infinitely repeated in the spatial domain. In order to
-accurately reconstruct the input image, we need all the frequencies
-with the correct magnitudes. However, when the magnitudes of higher
-frequencies are decreased, longer periods (shorter frequencies) will
-dominate in the reconstructed pixel values. Therefore, when
-constructing a pixel on the edge of the image, the newly empowered
-longer periods will look beyond the input image edges and will find
-the repeated input image there. So if you convolve an image in this
-fashion using the convolution theorem, when a bright object exists on
-one edge of the image, its blurred wings will be present on the other
-side of the convolved image. This is often termed as circular
-convolution or cyclic convolution.
-
-So, as long as we are dealing with convolution in the frequency domain,
-there is nothing we can do about the image edges. The least we can do
-is to eliminate the ghosts of the other side of the image. So, we add
-zero valued pixels to both the input image and the kernel in both
-dimensions so the image that will be convolved has a size equal to
-the sum of both images in each dimension. Of course, the effect of this
-zero-padding is that the sides of the output convolved image will
-become dark. To put it another way, the edges are going to drain the
-flux from nearby objects. But at least it is consistent across all the
-edges of the image and is predictable. In Convolve, you can see the
-padded images when inspecting the frequency domain convolution steps
-with the @option{--viewfreqsteps} option.
+With a good grasp of the frequency domain, we can revisit the problem of 
convolution on the image edges, see @ref{Edges in the spatial domain}.
+When we apply the convolution theorem (see @ref{Convolution theorem}) to 
convolve an image, we first take the discrete Fourier transforms (DFT, 
@ref{Discrete Fourier transform}) of both the input image and the kernel, then 
we multiply them with each other and then take the inverse DFT to construct the 
convolved image.
+Of course, in order to multiply them with each other in the frequency domain, 
the two images have to be the same size, so let's assume that we pad the kernel 
(it is usually smaller than the input image) with zero valued pixels in both 
dimensions so it becomes the same size as the input image before the DFT.
+
+Having multiplied the two DFTs, we now apply the inverse DFT which is where 
the problem is usually created.
+If the DFT of the kernel only had values of 1 (an unrealistic condition!), then there would be no problem and the inverse DFT of the multiplication would be identical to the input.
+However in real situations, the kernel's DFT has a maximum of 1 (because the 
sum of the kernel has to be one, see @ref{Convolution process}) and decreases 
something like the hypothetical profile of @ref{samplingfreq}.
+So when multiplied with the input image's DFT, the coefficients or magnitudes (see @ref{Circles and the complex plane}) of the smallest frequency (or the sum of the input image pixels) remain unchanged, while the magnitudes of the higher frequencies are significantly reduced.
+
+As we saw in @ref{Sampling theorem}, the Fourier transform of a discrete input 
will be infinitely repeated.
+In the final inverse DFT step, the input is in the frequency domain (the 
multiplied DFT of the input image and the kernel DFT).
+So the result (our output convolved image) will be infinitely repeated in the 
spatial domain.
+In order to accurately reconstruct the input image, we need all the 
frequencies with the correct magnitudes.
+However, when the magnitudes of higher frequencies are decreased, longer 
periods (shorter frequencies) will dominate in the reconstructed pixel values.
+Therefore, when constructing a pixel on the edge of the image, the newly 
empowered longer periods will look beyond the input image edges and will find 
the repeated input image there.
+So if you convolve an image in this fashion using the convolution theorem, 
when a bright object exists on one edge of the image, its blurred wings will be 
present on the other side of the convolved image.
+This is often termed circular convolution or cyclic convolution.
+
+So, as long as we are dealing with convolution in the frequency domain, there 
is nothing we can do about the image edges.
+The least we can do is to eliminate the ghosts of the other side of the image.
+So, we add zero valued pixels to both the input image and the kernel in both 
dimensions so the image that will be convolved has a size equal to the sum of 
both images in each dimension.
+Of course, the effect of this zero-padding is that the sides of the output 
convolved image will become dark.
+To put it another way, the edges are going to drain the flux from nearby 
objects.
+But at least it is consistent across all the edges of the image and is 
predictable.
+In Convolve, you can see the padded images when inspecting the frequency domain convolution steps with the @option{--checkfreqsteps} option.
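+
+The hypothetical 1D demonstration below makes this wrap-around tangible; it is not Gnuastro's code, just the naive DFT convention of the earlier sketches.
+A bright pixel on the first pixel of the row sends part of its convolved wings to the last pixel:
+
+@example
+#include <stdio.h>
+#include <complex.h>
+#include <math.h>
+
+#define N 16    /* Common (padded) length of image and kernel. */
+
+/* Naive DFT (sign=-1), or inverse DFT with the 1/N (sign=+1). */
+static void
+dft(double complex *in, double complex *out, int sign)
+@{
+  size_t u, x;
+  for(u=0;u<N;++u)
+    @{
+      for(out[u]=0, x=0;x<N;++x)
+        out[u] += in[x]*cexp(sign*2*M_PI*I*(double)(u*x)/N);
+      if(sign>0) out[u]/=N;
+    @}
+@}
+
+int
+main(void)
+@{
+  size_t i;
+  double complex f[N]=@{0@}, k[N]=@{0@}, F[N], K[N], C[N], c[N];
+
+  f[0]=100;                          /* Bright pixel on one edge. */
+  k[0]=0.5; k[1]=0.25; k[N-1]=0.25;  /* 3-pixel centered kernel.  */
+
+  dft(f, F, -1);                     /* DFT of image and kernel.  */
+  dft(k, K, -1);
+  for(i=0;i<N;++i) C[i]=F[i]*K[i];   /* Multiply the transforms.  */
+  dft(C, c, +1);                     /* Inverse DFT: convolved.   */
+
+  for(i=0;i<N;++i)                   /* Pixel N-1 is now 25: the  */
+    printf("%zu: %.2f\n", i, creal(c[i]));  /* edge wrapped over! */
+  return 0;
+@}
+@end example
+
+Padding both arrays with zeros before this process (as Convolve does) pushes the wrapped wings into the padding that is later cropped; the darkened edges described above are the price.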
 
 
 @node Spatial vs. Frequency domain, Convolution kernel, Frequency domain and 
Fourier operations, Convolve
 @subsection Spatial vs. Frequency domain
 
-With the discussions above it might not be clear when to choose the
-spatial domain and when to choose the frequency domain. Here we will
-try to list the benefits of each.
+With the discussions above it might not be clear when to choose the spatial 
domain and when to choose the frequency domain.
+Here we will try to list the benefits of each.
 
 @noindent
 The spatial domain,
 @itemize
 @item
-Can correct for the edge effects of convolution, see @ref{Edges in the
-spatial domain}.
+Can correct for the edge effects of convolution, see @ref{Edges in the spatial 
domain}.
 
 @item
 Can operate on blank pixels.
 
 @item
-Can be faster than frequency domain when the kernel is small (in terms
-of the number of pixels on the sides).
+Can be faster than frequency domain when the kernel is small (in terms of the 
number of pixels on the sides).
 @end itemize
 
 @noindent
@@ -15065,61 +11088,42 @@ Will be much faster when the image and kernel are 
both large.
 @end itemize
 
 @noindent
-As a general rule of thumb, when working on an image of modeled
-profiles use the frequency domain and when working on an image of real
-(observed) objects use the spatial domain (corrected for the
-edges). The reason is that if you apply a frequency domain convolution
-to a real image, you are going to loose information on the edges and
-generally you don't want large kernels. But when you have made the
-profiles in the image yourself, you can just make a larger input
-image and crop the central parts to completely remove the edge effect,
-see @ref{If convolving afterwards}. Also due to oversampling, both the
-kernels and the images can become very large and the speed boost of
-frequency domain convolution will significantly improve the processing
-time, see @ref{Oversampling}.
+As a general rule of thumb, when working on an image of modeled profiles use 
the frequency domain and when working on an image of real (observed) objects 
use the spatial domain (corrected for the edges).
+The reason is that if you apply a frequency domain convolution to a real image, you are going to lose information on the edges and generally you don't want large kernels.
+But when you have made the profiles in the image yourself, you can just make a 
larger input image and crop the central parts to completely remove the edge 
effect, see @ref{If convolving afterwards}.
+Also due to oversampling, both the kernels and the images can become very 
large and the speed boost of frequency domain convolution will significantly 
improve the processing time, see @ref{Oversampling}.
 
 @node Convolution kernel, Invoking astconvolve, Spatial vs. Frequency domain, 
Convolve
 @subsection Convolution kernel
 
-All the programs that need convolution will need to be given a
-convolution kernel file and extension. In most cases (other than
-Convolve, see @ref{Convolve}) the kernel file name is
-optional. However, the extension is necessary and must be specified
-either on the command-line or at least one of the configuration files
-(see @ref{Configuration files}). Within Gnuastro, there are two ways
-to create a kernel image:
+All the programs that need convolution will need to be given a convolution 
kernel file and extension.
+In most cases (other than Convolve, see @ref{Convolve}) the kernel file name 
is optional.
+However, the extension is necessary and must be specified either on the command-line or in at least one of the configuration files (see @ref{Configuration files}).
+Within Gnuastro, there are two ways to create a kernel image:
 
 @itemize
 
 @item
-MakeProfiles: You can use MakeProfiles to create a parametric (based
-on a radial function) kernel, see @ref{MakeProfiles}. By default
-MakeProfiles will make the Gaussian and Moffat profiles in a separate
-file so you can feed it into any of the programs.
+MakeProfiles: You can use MakeProfiles to create a parametric (based on a 
radial function) kernel, see @ref{MakeProfiles}.
+By default MakeProfiles will make the Gaussian and Moffat profiles in a 
separate file so you can feed it into any of the programs.
 
 @item
-ConvertType: You can write your own desired kernel into a text file
-table and convert it to a FITS file with ConvertType, see
-@ref{ConvertType}. Just be careful that the kernel has to have an odd
-number of pixels along its two axes, see @ref{Convolution
-process}. All the programs that do convolution will normalize the
-kernel internally, so if you choose this option, you don't have to
-worry about normalizing the kernel. Only within Convolve, there is an
-option to disable normalization, see @ref{Invoking astconvolve}.
+ConvertType: You can write your own desired kernel into a text file table and 
convert it to a FITS file with ConvertType, see @ref{ConvertType}.
+Just be careful that the kernel has to have an odd number of pixels along its 
two axes, see @ref{Convolution process}.
+All the programs that do convolution will normalize the kernel internally, so 
if you choose this option, you don't have to worry about normalizing the kernel.
+Only Convolve itself has an option to disable normalization, see @ref{Invoking astconvolve}.
 
 @end itemize
 
 @noindent
-The two options to specify a kernel file name and its extension are
-shown below. These are common between all the programs that will do
-convolution.
+The two options to specify a kernel file name and its extension are shown 
below.
+These are common between all the programs that will do convolution.
 @table @option
 @item -k STR
 @itemx --kernel=STR
-The convolution kernel file name. The @code{BITPIX} (data type) value of
-this file can be any standard type and it does not necessarily have to be
-normalized. Several operations will be done on the kernel image prior to
-the program's processing:
+The convolution kernel file name.
+The @code{BITPIX} (data type) value of this file can be any standard type and 
it does not necessarily have to be normalized.
+Several operations will be done on the kernel image prior to the program's 
processing:
 
 @itemize
 
@@ -15133,19 +11137,16 @@ All blank pixels (see @ref{Blank pixels}) will be set 
to zero.
 It will be normalized so the sum of its pixels equals unity.
 
 @item
-It will be flipped so the convolved image has the same
-orientation. This is only relevant if the kernel is not circular. See
-@ref{Convolution process}.
+It will be flipped so the convolved image has the same orientation.
+This is only relevant if the kernel is not circular, see @ref{Convolution process}.
+A minimal sketch of these preparation steps is given after this table.
 @end itemize
 
 @item -U STR
 @itemx --khdu=STR
-The convolution kernel HDU. Although the kernel file name is optional,
-before running any of the programs, they need to have a value for
-@option{--khdu} even if the default kernel is to be used. So be sure to
-keep its value in at least one of the configuration files (see
-@ref{Configuration files}). By default, the system configuration file has a
-value.
+The convolution kernel HDU.
+Although the kernel file name is optional, before running any of the programs, 
they need to have a value for @option{--khdu} even if the default kernel is to 
be used.
+So be sure to keep its value in at least one of the configuration files (see 
@ref{Configuration files}).
+By default, the system configuration file has a value.
 
 @end table
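+
+For illustration, the hypothetical C fragment below sketches the three kernel preparations listed under @option{--kernel} above on a toy @mymath{3\times3} array; it is not Gnuastro's implementation (which operates on its generic @code{gal_data_t} containers and supports all types):
+
+@example
+#include <stdio.h>
+#include <math.h>
+
+/* Prepare a kernel of n pixels: set NaN (blank) pixels to zero,
+   normalize the sum to unity and flip by 180 degrees (flipping
+   both axes is the same as reversing the whole 1D array).       */
+static void
+kernel_prepare(float *k, size_t n)
+@{
+  float tmp;
+  double sum=0;
+  size_t i;
+
+  for(i=0;i<n;++i) if(isnan(k[i])) k[i]=0;
+  for(i=0;i<n;++i) sum+=k[i];
+  for(i=0;i<n;++i) k[i]/=sum;
+  for(i=0;i<n/2;++i)
+    @{ tmp=k[i]; k[i]=k[n-1-i]; k[n-1-i]=tmp; @}
+@}
+
+int
+main(void)
+@{
+  size_t i;
+  float k[9]=@{0,1,0,  1,NAN,1,  0,1,2@};  /* 3x3, with a blank. */
+  kernel_prepare(k, 9);
+  for(i=0;i<9;++i) printf("%.3f%s", k[i], (i+1)%3 ? "  " : "\n");
+  return 0;
+@}
+@end example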
 
@@ -15153,9 +11154,8 @@ value.
 @node Invoking astconvolve,  , Convolution kernel, Convolve
 @subsection Invoking Convolve
 
-Convolve an input dataset (2D image or 1D spectrum for example) with a
-known kernel, or make the kernel necessary to match two PSFs. The general
-template for Convolve is:
+Convolve an input dataset (2D image or 1D spectrum for example) with a known 
kernel, or make the kernel necessary to match two PSFs.
+The general template for Convolve is:
 
 @example
 $ astconvolve [OPTION...] ASTRdata
@@ -15182,64 +11182,50 @@ $ astconvolve --kernel=sharperimage.fits 
--makekernel=10           \
 $ echo "1 3 10 3 1" | sed 's/ /\n/g' | astconvolve spectra.fits -c14
 @end example
 
-The only argument accepted by Convolve is an input image file. Some of the
-options are the same between Convolve and some other Gnuastro
-programs. Therefore, to avoid repetition, they will not be repeated
-here. For the full list of options shared by all Gnuastro programs, please
-see @ref{Common options}. In particular, in the spatial domain, on a
-multi-dimensional datasets, convolve uses Gnuastro's tessellation to speed
-up the run, see @ref{Tessellation}. Common options related to tessellation
-are described in in @ref{Processing options}.
+The only argument accepted by Convolve is an input image file.
+Some of the options are the same between Convolve and some other Gnuastro 
programs.
+Therefore, to avoid repetition, they will not be repeated here.
+For the full list of options shared by all Gnuastro programs, please see 
@ref{Common options}.
+In particular, in the spatial domain, on multi-dimensional datasets, Convolve uses Gnuastro's tessellation to speed up the run, see @ref{Tessellation}.
+Common options related to tessellation are described in @ref{Processing options}.
 
-1-dimensional datasets (for example spectra) are only read as columns
-within a table (see @ref{Tables} for more on how Gnuastro programs read
-tables). Note that currently 1D convolution is only implemented in the
-spatial domain and thus kernel-matching is also not supported.
+1-dimensional datasets (for example spectra) are only read as columns within a 
table (see @ref{Tables} for more on how Gnuastro programs read tables).
+Note that currently 1D convolution is only implemented in the spatial domain 
and thus kernel-matching is also not supported.
 
-Here we will only explain the options particular to Convolve. Run Convolve
-with @option{--help} in order to see the full list of options Convolve
-accepts, irrespective of where they are explained in this book.
+Here we will only explain the options particular to Convolve.
+Run Convolve with @option{--help} in order to see the full list of options 
Convolve accepts, irrespective of where they are explained in this book.
 
 @table @option
 
 @item --kernelcolumn
-Column containing the 1D kernel. When the input dataset is a 1-dimensional
-column, and the host table has more than one column, use this option to
-specify which column should be used.
+Column containing the 1D kernel.
+When the input dataset is a 1-dimensional column, and the host table has more 
than one column, use this option to specify which column should be used.
 
 @item --nokernelflip
-Do not flip the kernel after reading it the spatial domain
-convolution. This can be useful if the flipping has already been
-applied to the kernel.
+Do not flip the kernel after reading it for spatial domain convolution.
+This can be useful if the flipping has already been applied to the kernel.
 
-@item --nokernelnorm
-Do not normalize the kernel after reading it, such that the sum of its
-pixels is unity.
+@item --nokernelnorm
+Do not normalize the kernel after reading it; by default, the kernel is normalized such that the sum of its pixels is unity.
 
 @item -d STR
 @itemx --domain=STR
 @cindex Discrete Fourier transform
-The domain to use for the convolution. The acceptable values are
-`@code{spatial}' and `@code{frequency}', corresponding to the respective
-domain.
+The domain to use for the convolution.
+The acceptable values are `@code{spatial}' and `@code{frequency}', 
corresponding to the respective domain.
 
-For large images, the frequency domain process will be more efficient than
-convolving in the spatial domain. However, the edges of the image will
-loose some flux (see @ref{Edges in the spatial domain}) and the image must
-not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.
+For large images, the frequency domain process will be more efficient than 
convolving in the spatial domain.
+However, the edges of the image will lose some flux (see @ref{Edges in the spatial domain}) and the image must not contain any blank pixels, see @ref{Spatial vs. Frequency domain}.
 
 
 @item --checkfreqsteps
-With this option a file with the initial name of the output file will
-be created that is suffixed with @file{_freqsteps.fits}, all the steps
-done to arrive at the final convolved image are saved as extensions in
-this file. The extensions in order are:
+With this option, a file with the initial name of the output file, suffixed with @file{_freqsteps.fits}, will be created; all the steps done to arrive at the final convolved image are saved as extensions in this file.
+The extensions in order are:
 
 @enumerate
 @item
-The padded input image. In frequency domain convolution the two images
-(input and convolved) have to be the same size and both should be
-padded by zeros.
+The padded input image.
+In frequency domain convolution the two images (input and kernel) have to be the same size and both should be padded by zeros.
 
 @item
 The padded kernel, similar to the above.
@@ -15250,86 +11236,61 @@ The padded kernel, similar to the above.
 @cindex Numbers, complex
 @cindex Fourier spectrum
 @cindex Spectrum, Fourier
-The Fourier spectrum of the forward Fourier transform of the input
-image. Note that the Fourier transform is a complex operation (and not
-view able in one image!)  So we either have to show the `Fourier
-spectrum' or the `Phase angle'. For the complex number
-@mymath{a+ib}, the Fourier spectrum is defined as
-@mymath{\sqrt{a^2+b^2}} while the phase angle is defined as
-@mymath{\arctan(b/a)}.
+The Fourier spectrum of the forward Fourier transform of the input image.
+Note that the Fourier transform is a complex operation (and not viewable in one image!), so we either have to show the `Fourier spectrum' or the `Phase angle'.
+For the complex number @mymath{a+ib}, the Fourier spectrum is defined as 
@mymath{\sqrt{a^2+b^2}} while the phase angle is defined as 
@mymath{\arctan(b/a)}.
 
 @item
-The Fourier spectrum of the forward Fourier transform of the kernel
-image.
+The Fourier spectrum of the forward Fourier transform of the kernel image.
 
 @item
-The Fourier spectrum of the multiplied (through complex arithmetic)
-transformed images.
+The Fourier spectrum of the multiplied (through complex arithmetic) 
transformed images.
 
 @item
 @cindex Round-off error
 @cindex Floating point round-off error
 @cindex Error, floating point round-off
-The inverse Fourier transform of the multiplied image. If you open it,
-you will see that the convolved image is now in the center, not on one
-side of the image as it started with (in the padded image of the first
-extension). If you are working on a mock image which originally had
-pixels of precisely 0.0, you will notice that in those parts that your
-convolved profile(s) did not convert, the values are now
-@mymath{\sim10^{-18}}, this is due to floating-point round off
-errors. Therefore in the final step (when cropping the central parts
-of the image), we also remove any pixel with a value less than
-@mymath{10^{-17}}.
+The inverse Fourier transform of the multiplied image.
+If you open it, you will see that the convolved image is now in the center, 
not on one side of the image as it started with (in the padded image of the 
first extension).
+If you are working on a mock image which originally had pixels of precisely 0.0, you will notice that in those parts that your convolved profile(s) did not cover, the values are now @mymath{\sim10^{-18}}; this is due to floating-point round-off errors.
+Therefore in the final step (when cropping the central parts of the image), we 
also remove any pixel with a value less than @mymath{10^{-17}}.
 @end enumerate
 
 @item --noedgecorrection
-Do not correct the edge effect in spatial domain convolution. For a full
-discussion, please see @ref{Edges in the spatial domain}.
+Do not correct the edge effect in spatial domain convolution.
+For a full discussion, please see @ref{Edges in the spatial domain}.
 
 @item -m INT
 @itemx --makekernel=INT
-(@option{=INT}) If this option is called, Convolve will do de-convolution
-(see @ref{Convolution theorem}). The image specified by the
-@option{--kernel} option is assumed to be the sharper (less blurry) image
-and the input image is assumed to be the more blurry image. The value given
-to this option will be used as the maximum radius of the kernel. Any pixel
-in the final kernel that is larger than this distance from the center will
-be set to zero. The two images must have the same size.
-
-Noise has large frequencies which can make the result less reliable for the
-higher frequencies of the final result. So all the frequencies which have a
-spectrum smaller than the value given to the @option{minsharpspec} option
-in the sharper input image are set to zero and not divided. This will cause
-the wings of the final kernel to be flatter than they would ideally be
-which will make the convolved image result unreliable if it is too
-high. Some notes to take into account for a good result:
-@itemize
+(@option{=INT}) If this option is called, Convolve will do de-convolution (see 
@ref{Convolution theorem}).
+The image specified by the @option{--kernel} option is assumed to be the 
sharper (less blurry) image and the input image is assumed to be the more 
blurry image.
+The value given to this option will be used as the maximum radius of the 
kernel.
+Any pixel in the final kernel that is farther from the center than this distance will be set to zero.
+The two images must have the same size.
 
+Noise has large high-frequency components, which can make the result less reliable for the higher frequencies of the final result.
+So all the frequencies which have a spectrum smaller than the value given to the @option{--minsharpspec} option in the sharper input image are set to zero and not divided (see the sketch after this table).
+This will cause the wings of the final kernel to be flatter than they would ideally be, which will make the convolved image unreliable if this value is set too high.
+Some notes to take into account for a good result:
+
+@itemize
 @item
-Choose a bright (unsaturated) star and use a region box (with Crop for
-example, see @ref{Crop}) that is sufficiently above the noise.
+Choose a bright (unsaturated) star and use a region box (with Crop for 
example, see @ref{Crop}) that is sufficiently above the noise.
 
 @item
-Use Warp (see @ref{Warp}) to warp the pixel grid so the star's center is
-exactly on the center of the central pixel in the cropped image. This will
-certainly slightly degrade the result, however, it is necessary. If there
-are multiple good stars, you can shift all of them, then normalize them (so
-the sum of each star's pixels is one) and then take their average to
-decrease this effect.
+Use Warp (see @ref{Warp}) to warp the pixel grid so the star's center is 
exactly on the center of the central pixel in the cropped image.
+This will certainly slightly degrade the result, however, it is necessary.
+If there are multiple good stars, you can shift all of them, then normalize 
them (so the sum of each star's pixels is one) and then take their average to 
decrease this effect.
 
 @item
-The shifting might move the center of the star by one pixel in any
-direction, so crop the central pixel of the warped image to have a
-clean image for the de-convolution.
+The shifting might move the center of the star by one pixel in any direction, 
so crop the central pixel of the warped image to have a clean image for the 
de-convolution.
 @end itemize
 
 Note that this feature is not yet supported in 1-dimensional datasets.
 
 @item -c
 @itemx --minsharpspec
-(@option{=FLT}) The minimum frequency spectrum (or coefficient, or pixel
-value in the frequency domain image) to use in deconvolution, see the
-explanations under the @option{--makekernel} option for more information.
+(@option{=FLT}) The minimum frequency spectrum (or coefficient, or pixel value 
in the frequency domain image) to use in deconvolution, see the explanations 
under the @option{--makekernel} option for more information.
 @end table
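+
+To make the role of @option{--minsharpspec} in @option{--makekernel} concrete, the hypothetical 1D sketch below divides the blurry image's transform by the sharper image's transform, zeroing every frequency where the sharper spectrum is too small.
+It reuses the naive DFT of the earlier sketches and is not Gnuastro's implementation:
+
+@example
+#include <stdio.h>
+#include <complex.h>
+#include <math.h>
+
+#define N 16
+
+/* Naive DFT (sign=-1), or inverse DFT with the 1/N (sign=+1). */
+static void
+dft(double complex *in, double complex *out, int sign)
+@{
+  size_t u, x;
+  for(u=0;u<N;++u)
+    @{
+      for(out[u]=0, x=0;x<N;++x)
+        out[u] += in[x]*cexp(sign*2*M_PI*I*(double)(u*x)/N);
+      if(sign>0) out[u]/=N;
+    @}
+@}
+
+int
+main(void)
+@{
+  size_t i;
+  double minsharpspec=1e-6;          /* Role of --minsharpspec. */
+  double complex sharp[N]=@{0@}, blurry[N]=@{0@};
+  double complex S[N], B[N], K[N], k[N];
+
+  sharp[8]=1;                              /* Sharp star.       */
+  blurry[7]=0.25; blurry[8]=0.5; blurry[9]=0.25;  /* Blurred.   */
+
+  dft(sharp, S, -1);
+  dft(blurry, B, -1);
+
+  /* Divide the spectra; where the sharper spectrum is too small
+     (noise-dominated in real data), set the result to zero.    */
+  for(i=0;i<N;++i)
+    K[i] = cabs(S[i])>minsharpspec ? B[i]/S[i] : 0;
+
+  dft(K, k, +1);                     /* The effective kernel.   */
+  for(i=0;i<N;++i) printf("%zu: %.3f\n", i, creal(k[i]));
+  return 0;
+@}
+@end example
+
+In this noiseless toy no frequency is zeroed and the recovered kernel is exact; on real data, raising @option{--minsharpspec} trades noisy high frequencies for the flatter kernel wings described above.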
 
 
@@ -15344,69 +11305,45 @@ explanations under the @option{--makekernel} option 
for more information.
 
 @node Warp,  , Convolve, Data manipulation
 @section Warp
-Image warping is the process of mapping the pixels of one image onto a new
-pixel grid. This process is sometimes known as transformation, however
-following the discussion of Heckbert 1989@footnote{Paul
-S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image
-Warping}, Master's thesis at University of California, Berkeley.} we will
-not be using that term because it can be confused with only pixel value or
-flux transformations. Here we specifically mean the pixel grid
-transformation which is better conveyed with `warp'.
+Image warping is the process of mapping the pixels of one image onto a new 
pixel grid.
+This process is sometimes known as transformation; however, following the discussion of Heckbert 1989@footnote{Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image Warping}, Master's thesis at University of California, Berkeley.} we will not be using that term because it can be confused with transformations of only the pixel values or flux.
+Here we specifically mean the pixel grid transformation which is better 
conveyed with `warp'.
 
 @cindex Gravitational lensing
-Image wrapping is a very important step in astronomy, both in observational
-data analysis and in simulating modeled images. In modeling, warping an
-image is necessary when we want to apply grid transformations to the
-initial models, for example in simulating gravitational lensing (Radial
-warpings are not yet included in Warp). Observational reasons for warping
-an image are listed below:
+Image warping is a very important step in astronomy, both in observational data analysis and in simulating modeled images.
+In modeling, warping an image is necessary when we want to apply grid 
transformations to the initial models, for example in simulating gravitational 
lensing (Radial warpings are not yet included in Warp).
+Observational reasons for warping an image are listed below:
 
 @itemize
 
 @cindex Signal to noise ratio
 @item
-@strong{Noise:} Most scientifically interesting targets are inherently
-faint (have a very low Signal to noise ratio). Therefore one short
-exposure is not enough to detect such objects that are drowned deeply
-in the noise. We need multiple exposures so we can add them together
-and increase the objects' signal to noise ratio. Keeping the telescope
-fixed on one field of the sky is practically impossible. Therefore
-very deep observations have to put into the same grid before adding
-them.
+@strong{Noise:} Most scientifically interesting targets are inherently faint 
(have a very low Signal to noise ratio).
+Therefore one short exposure is not enough to detect such objects that are 
drowned deeply in the noise.
+We need multiple exposures so we can add them together and increase the 
objects' signal to noise ratio.
+Keeping the telescope fixed on one field of the sky is practically impossible.
+Therefore very deep observations have to be put onto the same grid before being added.
 
 @cindex Mosaicing
 @cindex Image mosaic
 @item
-@strong{Resolution:} If we have multiple images of one patch of the
-sky (hopefully at multiple orientations) we can warp them to the same
-grid. The multiple orientations will allow us to `guess' the values of
-pixels on an output pixel grid that has smaller pixel sizes and thus
-increase the resolution of the output. This process of merging
-multiple observations is known as Mosaicing.
+@strong{Resolution:} If we have multiple images of one patch of the sky 
(hopefully at multiple orientations) we can warp them to the same grid.
+The multiple orientations will allow us to `guess' the values of pixels on an 
output pixel grid that has smaller pixel sizes and thus increase the resolution 
of the output.
+This process of merging multiple observations is known as Mosaicing.
 
 @cindex Cosmic rays
 @item
-@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an
-image. If they collide vertically with the camera, they are going to
-create a very sharp and bright spot that in most cases can be separated
-easily@footnote{All astronomical targets are blurred with the PSF, see
-@ref{PSF}, however a cosmic ray is not and so it is very sharp (it
-suddenly stops at one pixel).}. However, depending on the depth of the
-camera pixels, and the angle that a cosmic rays collides with it, it
-can cover a line-like larger area on the CCD which makes the detection
-using their sharp edges very hard and error prone. One of the best
-methods to remove cosmic rays is to compare multiple images of the
-same field. To do that, we need all the images to be on the same pixel
-grid.
+@strong{Cosmic rays:} Cosmic rays can randomly fall on any part of an image.
+If they collide vertically with the camera, they are going to create a very 
sharp and bright spot that in most cases can be separated easily@footnote{All 
astronomical targets are blurred with the PSF, see @ref{PSF}, however a cosmic 
ray is not and so it is very sharp (it suddenly stops at one pixel).}.
+However, depending on the depth of the camera pixels, and the angle at which a cosmic ray collides with them, it can cover a larger line-like area on the CCD, which makes detection using their sharp edges very hard and error prone.
+One of the best methods to remove cosmic rays is to compare multiple images of 
the same field.
+To do that, we need all the images to be on the same pixel grid.
 
 @cindex Optical distortion
 @cindex Distortion, optical
 @item
-@strong{Optical distortion:} (Not yet included in Warp) In wide field
-images, the optical distortion that occurs on the outer parts of the focal
-plane will make accurate comparison of the objects at various locations
-impossible. It is therefore necessary to warp the image and correct for
-those distortions prior to the analysis.
+@strong{Optical distortion:} (Not yet included in Warp) In wide field images, 
the optical distortion that occurs on the outer parts of the focal plane will 
make accurate comparison of the objects at various locations impossible.
+It is therefore necessary to warp the image and correct for those distortions 
prior to the analysis.
 
 @cindex ACS
 @cindex CCD
@@ -15416,10 +11353,7 @@ those distortions prior to the analysis.
 @cindex Advanced camera for surveys
 @cindex Hubble Space Telescope (HST)
 @item
-@strong{Detector not on focal plane:} In some cases (like the Hubble
-Space Telescope ACS and WFC3 cameras), the CCD might be tilted
-compared to the focal plane, therefore the recorded CCD pixels have to
-be projected onto the focal plane before further analysis.
+@strong{Detector not on focal plane:} In some cases (like the Hubble Space 
Telescope ACS and WFC3 cameras), the CCD might be tilted compared to the focal 
plane, therefore the recorded CCD pixels have to be projected onto the focal 
plane before further analysis.
 
 @end itemize
 
@@ -15435,14 +11369,8 @@ be projected onto the focal plane before further 
analysis.
 
 @cindex Scaling
 @cindex Coordinate transformation
-Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of
-a point in the input image and @mymath{\left[\matrix{x&y}\right]} as the
-coordinates of that same point in the output image@footnote{These can
-be any real number, we are not necessarily talking about integer
-pixels here.}. The simplest form of coordinate transformation (or
-warping) is the scaling of the coordinates, let's assume we want to
-scale the first axis by @mymath{M} and the second by @mymath{N}, the
-output coordinates of that point can be calculated by
+Let's take @mymath{\left[\matrix{u&v}\right]} as the coordinates of a point in 
the input image and @mymath{\left[\matrix{x&y}\right]} as the coordinates of 
that same point in the output image@footnote{These can be any real number, we 
are not necessarily talking about integer pixels here.}.
+The simplest form of coordinate transformation (or warping) is the scaling of the coordinates.
+Let's assume we want to scale the first axis by @mymath{M} and the second by @mymath{N}; the output coordinates of that point can then be calculated by
 
 @dispmath{\left[\matrix{x\cr y}\right]=
           \left[\matrix{Mu\cr Nv}\right]=
@@ -15452,12 +11380,10 @@ output coordinates of that point can be calculated by
 @cindex Multiplication, Matrix
 @cindex Rotation of coordinates
 @noindent
-Note that these are matrix multiplications. We thus see that we can
-represent any such grid warping as a matrix. Another thing we can do
-with this @mymath{2\times2} matrix is to rotate the output coordinate
-around the common center of both coordinates. If the output is rotated
-anticlockwise by @mymath{\theta} degrees from the positive (to the
-right) horizontal axis, then the warping matrix should become:
+Note that these are matrix multiplications.
+We thus see that we can represent any such grid warping as a matrix.
+Another thing we can do with this @mymath{2\times2} matrix is to rotate the 
output coordinate around the common center of both coordinates.
+If the output is rotated anticlockwise by @mymath{\theta} degrees from the 
positive (to the right) horizontal axis, then the warping matrix should become:
 
 @dispmath{\left[\matrix{x\cr y}\right]=
    \left[\matrix{ucos\theta-vsin\theta\cr usin\theta+vcos\theta}\right]=
@@ -15467,9 +11393,7 @@ right) horizontal axis, then the warping matrix should 
become:
 
 @cindex Flip coordinates
 @noindent
-We can also flip the coordinates around the first axis, the second
-axis and the coordinate center with the following three matrices
-respectively:
+We can also flip the coordinates around the first axis, the second axis and 
the coordinate center with the following three matrices respectively:
 
 @dispmath{\left[\matrix{1&0\cr0&-1}\right]\quad\quad
           \left[\matrix{-1&0\cr0&1}\right]\quad\quad
@@ -15477,59 +11401,41 @@ respectively:
 
 @cindex Shear
 @noindent
-The final thing we can do with this definition of a @mymath{2\times2}
-warping matrix is shear. If we want the output to be sheared along the
-first axis with @mymath{A} and along the second with @mymath{B}, then
-we can use the matrix:
+The final thing we can do with this definition of a @mymath{2\times2} warping 
matrix is shear.
+If we want the output to be sheared along the first axis with @mymath{A} and 
along the second with @mymath{B}, then we can use the matrix:
 
 @dispmath{\left[\matrix{1&A\cr B&1}\right]}
 
 @noindent
-To have one matrix representing any combination of these steps, you
-use matrix multiplication, see @ref{Merging multiple warpings}. So any
-combinations of these transformations can be displayed with one
-@mymath{2\times2} matrix:
+To have one matrix representing any combination of these steps, you use matrix 
multiplication, see @ref{Merging multiple warpings}.
+So any combination of these transformations can be represented with one @mymath{2\times2} matrix:
 
 @dispmath{\left[\matrix{a&b\cr c&d}\right]}
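+
+As a hypothetical illustration, the C fragment below merges a rotation and a shear into one such @mymath{2\times2} matrix through matrix multiplication (see @ref{Merging multiple warpings} for why the order matters) and applies it to a point:
+
+@example
+#include <stdio.h>
+#include <math.h>
+
+/* c = a x b for 2x2 matrices stored row-major as [a b; c d]. */
+static void
+matmul2(double *a, double *b, double *c)
+@{
+  c[0]=a[0]*b[0]+a[1]*b[2];  c[1]=a[0]*b[1]+a[1]*b[3];
+  c[2]=a[2]*b[0]+a[3]*b[2];  c[3]=a[2]*b[1]+a[3]*b[3];
+@}
+
+int
+main(void)
+@{
+  double t=M_PI/4;                          /* 45 deg rotation. */
+  double rot[4]  =@{cos(t),-sin(t), sin(t),cos(t)@};
+  double shear[4]=@{1,0.5,  0,1@};            /* A=0.5, B=0.     */
+  double m[4], u=1, v=0, x, y;
+
+  /* Merged warp: applying 'm' to the column vector [u v]
+     rotates the point by 45 degrees and then shears it.       */
+  matmul2(shear, rot, m);
+  x=m[0]*u+m[1]*v;
+  y=m[2]*u+m[3]*v;
+  printf("(%g, %g) --> (%g, %g)\n", u, v, x, y);
+  return 0;
+@}
+@end example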
 
 @cindex Wide Field Camera 3
 @cindex Advanced Camera for Surveys
 @cindex Hubble Space Telescope (HST)
-The transformations above can cover a lot of the needs of most
-coordinate transformations. However they are limited to mapping the
-point @mymath{[\matrix{0&0}]} to @mymath{[\matrix{0&0}]}. Therefore
-they are useless if you want one coordinate to be shifted compared to
-the other one. They are also space invariant, meaning that all the
-coordinates in the image will receive the same transformation. In
-other words, all the pixels in the output image will have the same
-area if placed over the input image. So transformations which require
-varying output pixel sizes like projections cannot be applied through
-this @mymath{2\times2} matrix either (for example for the tilted ACS
-and WFC3 camera detectors on board the Hubble space telescope).
+The transformations above can cover a lot of the needs of most coordinate 
transformations.
+However they are limited to mapping the point @mymath{[\matrix{0&0}]} to 
@mymath{[\matrix{0&0}]}.
+Therefore they are useless if you want one coordinate to be shifted compared 
to the other one.
+They are also space invariant, meaning that all the coordinates in the image 
will receive the same transformation.
+In other words, all the pixels in the output image will have the same area if 
placed over the input image.
+So transformations which require varying output pixel sizes like projections 
cannot be applied through this @mymath{2\times2} matrix either (for example for 
the tilted ACS and WFC3 camera detectors on board the Hubble space telescope).
 
 @cindex M@"obius, August. F.
 @cindex Homogeneous coordinates
-@cindex Coordinates, homogeneous
-To add these further capabilities, namely translation and projection,
-we use the homogeneous coordinates. They were defined about 200 years
-ago by August Ferdinand M@"obius (1790 -- 1868). For simplicity, we
-will only discuss points on a 2D plane and avoid the complexities of
-higher dimensions. We cannot provide a deep mathematical introduction
-here, interested readers can get a more detailed explanation from
-Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}}
-and the references therein.
-
-By adding an extra coordinate to a point we can add the flexibility we
-need. The point @mymath{[\matrix{x&y}]} can be represented as
-@mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates. Therefore
-multiplying all the coordinates of a point in the homogeneous
-coordinates with a constant will give the same point. Put another way,
-the point @mymath{[\matrix{x&y&Z}]} corresponds to the point
-@mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane. Setting
-@mymath{Z=1}, we get the input image plane, so
-@mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}. With
-this definition, the transformations above can be generally written
-as:
+@cindex Coordinates, homogeneous
+To add these further capabilities, namely translation and projection, we use 
the homogeneous coordinates.
+They were defined about 200 years ago by August Ferdinand M@"obius (1790 -- 
1868).
+For simplicity, we will only discuss points on a 2D plane and avoid the 
complexities of higher dimensions.
+We cannot provide a deep mathematical introduction here, interested readers 
can get a more detailed explanation from 
Wikipedia@footnote{@url{http://en.wikipedia.org/wiki/Homogeneous_coordinates}} 
and the references therein.
+
+By adding an extra coordinate to a point we can add the flexibility we need.
+The point @mymath{[\matrix{x&y}]} can be represented as 
@mymath{[\matrix{xZ&yZ&Z}]} in homogeneous coordinates.
+Therefore multiplying all the coordinates of a point in the homogeneous 
coordinates with a constant will give the same point.
+Put another way, the point @mymath{[\matrix{x&y&Z}]} corresponds to the point 
@mymath{[\matrix{x/Z&y/Z}]} on the constant @mymath{Z} plane.
+Setting @mymath{Z=1}, we get the input image plane, so 
@mymath{[\matrix{u&v&1}]} corresponds to @mymath{[\matrix{u&v}]}.
+With this definition, the transformations above can be generally written as:
 
 @dispmath{\left[\matrix{x\cr y\cr 1}\right]=
           \left[\matrix{a&b&0\cr c&d&0\cr 0&0&1}\right]
@@ -15538,24 +11444,17 @@ as:
 @noindent
 @cindex Affine Transformation
 @cindex Transformation, affine
-We thus acquired 4 extra degrees of freedom. By giving non-zero values
-to the zero valued elements of the last column we can have translation
-(try the matrix multiplication!). In general, any coordinate
-transformation that is represented by the matrix below is known as an
-affine
-transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:
+We thus acquired 4 extra degrees of freedom.
+By giving non-zero values to the zero valued elements of the last column we 
can have translation (try the matrix multiplication!).
+In general, any coordinate transformation that is represented by the matrix 
below is known as an affine 
transformation@footnote{@url{http://en.wikipedia.org/wiki/Affine_transformation}}:
 
 @dispmath{\left[\matrix{a&b&c\cr d&e&f\cr 0&0&1}\right]}
 
 @cindex Homography
 @cindex Projective transformation
 @cindex Transformation, projective
-We can now consider translation, but the affine transform is still
-spatially invariant. Giving non-zero values to the other two elements
-in the matrix above gives us the projective transformation or
-Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}}
-which is the most general type of transformation with the
-@mymath{3\times3} matrix:
+We can now consider translation, but the affine transform is still spatially 
invariant.
+Giving non-zero values to the other two elements in the matrix above gives us 
the projective transformation or 
Homography@footnote{@url{http://en.wikipedia.org/wiki/Homography}} which is the 
most general type of transformation with the @mymath{3\times3} matrix:
 
 @dispmath{\left[\matrix{x'\cr y'\cr w}\right]=
           \left[\matrix{a&b&c\cr d&e&f\cr g&h&1}\right]
@@ -15567,20 +11466,15 @@ So the output coordinates can be calculated from:
 @dispmath{x={x' \over w}={au+bv+c \over gu+hv+1}\quad\quad\quad\quad
           y={y' \over w}={du+ev+f \over gu+hv+1}}
 
-Thus with Homography we can change the sizes of the output pixels on
-the input plane, giving a `perspective'-like visual impression. This
-can be quantitatively seen in the two equations above. When
-@mymath{g=h=0}, the denominator is independent of @mymath{u} or
-@mymath{v} and thus we have spatial invariance. Homography preserves
-lines at all orientations. A very useful fact about Homography is that
-its inverse is also a Homography. These two properties play a very
-important role in the implementation of this transformation. A short
-but instructive and illustrated review of affine, projective and also
-bi-linear mappings is provided in Heckbert 1989@footnote{Paul
-S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image
-Warping}, Master's thesis at University of California, Berkeley. Note
-that since points are defined as row vectors there, the matrix is the
-transpose of the one discussed here.}.
+Thus with Homography we can change the sizes of the output pixels on the input 
plane, giving a `perspective'-like visual impression.
+This can be quantitatively seen in the two equations above.
+When @mymath{g=h=0}, the denominator is independent of @mymath{u} or 
@mymath{v} and thus we have spatial invariance.
+Homography preserves lines at all orientations.
+A very useful fact about Homography is that its inverse is also a Homography.
+These two properties play a very important role in the implementation of this 
transformation.
+A short but instructive and illustrated review of affine, projective and also 
bi-linear mappings is provided in Heckbert 1989@footnote{
+Paul S. Heckbert. 1989. @emph{Fundamentals of Texture mapping and Image 
Warping}, Master's thesis at University of California, Berkeley.
+Note that since points are defined as row vectors there, the matrix is the 
transpose of the one discussed here.}.
 
 @node Merging multiple warpings, Resampling, Warping basics, Warp
 @subsection Merging multiple warpings
@@ -15590,24 +11484,17 @@ transpose of the one discussed here.}.
 @cindex Multiplication, matrix
 @cindex Non-commutative operations
 @cindex Operations, non-commutative
-In @ref{Warping basics} we saw how a basic warp/transformation can be
-represented with a matrix. To make more complex warpings (for example to
-define a translation, rotation and scale as one warp) the individual
-matrices have to be multiplied through matrix multiplication. However
-matrix multiplication is not commutative, so the order of the set of
-matrices you use for the multiplication is going to be very important.
-
-The first warping should be placed as the left-most matrix. The second
-warping to the right of that and so on. The second transformation is
-going to occur on the warped coordinates of the first.  As an example
-for merging a few transforms into one matrix, the multiplication below
-represents the rotation of an image about a point
-@mymath{[\matrix{U&V}]} anticlockwise from the horizontal axis by an
-angle of @mymath{\theta}. To do this, first we take the origin to
-@mymath{[\matrix{U&V}]} through translation. Then we rotate the image,
-then we translate it back to where it was initially. These three
-operations can be merged in one operation by calculating the matrix
-multiplication below:
+In @ref{Warping basics} we saw how a basic warp/transformation can be 
represented with a matrix.
+To make more complex warpings (for example to define a translation, rotation 
and scale as one warp) the individual matrices have to be multiplied through 
matrix multiplication.
+However matrix multiplication is not commutative, so the order of the set of 
matrices you use for the multiplication is going to be very important.
+
+The first warping should be placed as the left-most matrix.
+The second warping to the right of that and so on.
+The second transformation is going to occur on the warped coordinates of the 
first.
+As an example for merging a few transforms into one matrix, the multiplication 
below represents the rotation of an image about a point @mymath{[\matrix{U&V}]} 
anticlockwise from the horizontal axis by an angle of @mymath{\theta}.
+To do this, first we take the origin to @mymath{[\matrix{U&V}]} through 
translation.
+Then we rotate the image, then we translate it back to where it was initially.
+These three operations can be merged in one operation by calculating the 
matrix multiplication below:
 
 @dispmath{\left[\matrix{1&0&U\cr0&1&V\cr{}0&0&1}\right]
           \left[\matrix{cos\theta&-sin\theta&0\cr sin\theta&cos\theta&0\cr     
           0&0&1}\right]
@@ -15629,20 +11516,13 @@ multiplication below:
 @cindex Photoelectrons
 @cindex Picture element
 @cindex Mixing pixel values
-A digital image is composed of discrete `picture elements' or
-`pixels'. When a real image is created from a camera or detector, each
-pixel's area is used to store the number of photo-electrons that were
-created when incident photons collided with that pixel's surface area. This
-process is called the `sampling' of a continuous or analog data into
-digital data. When we change the pixel grid of an image or warp it as we
-defined in @ref{Warping basics}, we have to `guess' the flux value of each
-pixel on the new grid based on the old grid, or re-sample it. Because of
-the `guessing', any form of warping on the data is going to degrade the
-image and mix the original pixel values with each other. So if an analysis
-can be done on an unwarped data image, it is best to leave the image
-untouched and pursue the analysis. However as discussed in @ref{Warp} this
-is not possible most of the times, so we have to accept the problem and
-re-sample the image.
+A digital image is composed of discrete `picture elements' or `pixels'.
+When a real image is created from a camera or detector, each pixel's area is 
used to store the number of photo-electrons that were created when incident 
photons collided with that pixel's surface area.
+This process is called the `sampling' of continuous or analog data into digital data.
+When we change the pixel grid of an image or warp it as we defined in 
@ref{Warping basics}, we have to `guess' the flux value of each pixel on the 
new grid based on the old grid, or re-sample it.
+Because of the `guessing', any form of warping on the data is going to degrade 
the image and mix the original pixel values with each other.
+So if an analysis can be done on an unwarped data image, it is best to leave 
the image untouched and pursue the analysis.
+However, as discussed in @ref{Warp}, this is not possible most of the time, so we have to accept the problem and re-sample the image.
 
 @cindex Point pixels
 @cindex Interpolation
@@ -15651,72 +11531,47 @@ re-sample the image.
 @cindex Bi-linear interpolation
 @cindex Interpolation, bicubic
 @cindex Interpolation, bi-linear
-In most applications of image processing, it is sufficient to consider
-each pixel to be a point and not an area. This assumption can
-significantly speed up the processing of an image and also the
-simplicity of the code. It is a fine assumption when the
-signal to noise ratio of the objects are very large. The question will
-then be one of interpolation because you have multiple points
-distributed over the output image and you want to find the values at
-the pixel centers. To increase the accuracy, you might also sample
-more than one point from within a pixel giving you more points for a
-more accurate interpolation in the output grid.
+In most applications of image processing, it is sufficient to consider each 
pixel to be a point and not an area.
+This assumption can significantly speed up the processing of an image and also simplify the code.
+It is a fine assumption when the signal-to-noise ratio of the objects is very large.
+The question will then be one of interpolation: you have multiple points distributed over the output image and you want to find the values at the pixel centers.
+To increase the accuracy, you might also sample more than one point from within a pixel, giving you more points for a more accurate interpolation in the output grid.
 
 @cindex Image edges
 @cindex Edges, image
-However, interpolation has several problems. The first one is that it will
-depend on the type of function you want to assume for the
-interpolation. For example you can choose a bi-linear or bi-cubic (the
-`bi's are for the 2 dimensional nature of the data) interpolation
-method. For the latter there are various ways to set the
-constants@footnote{see @url{http://entropymine.com/imageworsener/bicubic/}
-for a nice introduction.}. Such functional interpolation functions can fail
-seriously on the edges of an image. They will also need normalization so
-that the flux of the objects before and after the warpings are
-comparable. The most basic problem with such techniques is that they are
-based on a point while a detector pixel is an area. They add a level of
-subjectivity to the data (make more assumptions through the functions than
-the data can handle). For most applications this is fine, but in scientific
-applications where detection of the faintest possible galaxies or fainter
-parts of bright galaxies is our aim, we cannot afford this loss. Because of
-these reasons Warp will not use such interpolation techniques.
+However, interpolation has several problems.
+The first one is that it will depend on the type of function you want to 
assume for the interpolation.
+For example you can choose a bi-linear or bi-cubic (the `bi's are for the 2-dimensional nature of the data) interpolation method.
+For the latter there are various ways to set the constants@footnote{see @url{http://entropymine.com/imageworsener/bicubic/} for a nice introduction.}.
+Such interpolation functions can fail seriously on the edges of an image.
+They will also need normalization so that the flux of the objects before and after the warping is comparable.
+The most basic problem with such techniques is that they are based on a point 
while a detector pixel is an area.
+They add a level of subjectivity to the data (make more assumptions through 
the functions than the data can handle).
+For most applications this is fine, but in scientific applications where 
detection of the faintest possible galaxies or fainter parts of bright galaxies 
is our aim, we cannot afford this loss.
+For these reasons, Warp will not use such interpolation techniques.
 
 @cindex Drizzle
 @cindex Pixel mixing
 @cindex Exact area resampling
-Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic
-demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.}
-or ``area resampling''. This is also what the Hubble Space Telescope
-pipeline calls
-``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
 This
-technique requires no functions, it is thus non-parametric. It is also the
-closest we can get (make least assumptions) to what actually happens on the
-detector pixels. The basic idea is that you reverse-transform each output
-pixel to find which pixels of the input image it covers and what fraction
-of the area of the input pixels are covered. To find the output pixel
-value, you simply sum the value of each input pixel weighted by the overlap
-fraction (between 0 to 1) of the output pixel and that input pixel. Through
-this process, pixels are treated as an area not as a point (which is how
-detectors create the image), also the brightness (see @ref{Flux Brightness
-and magnitude}) of an object will be left completely unchanged.
-
-If there are very high spatial-frequency signals in the image (for
-example fringes) which vary on a scale smaller than your output image
-pixel size, pixel mixing can cause
-ailiasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}. So if
-the input image has fringes, they have to be calculated and removed
-separately (which would naturally be done in any astronomical
-application). Because of the PSF no astronomical target has a sharp
-change in the signal so this issue is less important for astronomical
-applications, see @ref{PSF}.
+Warp will do interpolation based on ``pixel mixing''@footnote{For a graphic 
demonstration see @url{http://entropymine.com/imageworsener/pixelmixing/}.} or 
``area resampling''.
+This is also what the Hubble Space Telescope pipeline calls 
``Drizzling''@footnote{@url{http://en.wikipedia.org/wiki/Drizzle_(image_processing)}}.
+This technique requires no functions; it is thus non-parametric.
+It is also the closest we can get (make least assumptions) to what actually 
happens on the detector pixels.
+The basic idea is that you reverse-transform each output pixel to find which 
pixels of the input image it covers and what fraction of the area of the input 
pixels are covered.
+To find the output pixel value, you simply sum the value of each input pixel weighted by the overlap fraction (between 0 and 1) of the output pixel and that input pixel.
+Through this process, pixels are treated as an area, not as a point (which is how detectors create the image), and the brightness (see @ref{Flux Brightness and magnitude}) of an object will be left completely unchanged.
+
+If there are very high spatial-frequency signals in the image (for example fringes) which vary on a scale smaller than your output image pixel size, pixel mixing can cause aliasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
+So if the input image has fringes, they have to be calculated and removed separately (which would naturally be done in any astronomical application).
+Because of the PSF, no astronomical target has a sharp change in the signal, so this issue is less important for astronomical applications, see @ref{PSF}.
 
 
 @node Invoking astwarp,  , Resampling, Warp
 @subsection Invoking Warp
 
-Warp an input dataset into a new grid. Any homographic warp (for example
-scaling, rotation, translation, projection) is acceptable, see @ref{Warping
-basics} for the definitions. The general template for invoking Warp is:
+Warp an input dataset into a new grid.
+Any homographic warp (for example scaling, rotation, translation, projection) is acceptable; see @ref{Warping basics} for the definitions.
+The general template for invoking Warp is:
 
 @example
 $ astwarp [OPTIONS...] InputImage
@@ -15742,195 +11597,134 @@ $ astwarp --matrix=1/5,0,4/10,0,1/5,4/10,0,0,1 
image.fits
 $ astwarp --matrix="0.7071,-0.7071,  0.7071,0.7071" image.fits
 @end example
 
-If any processing is to be done, Warp can accept one file as input. As in
-all Gnuastro programs, when an output is not explicitly set with the
-@option{--output} option, the output filename will be set automatically
-based on the operation, see @ref{Automatic output}. For the full list of
-general options to all Gnuastro programs (including Warp), please see
-@ref{Common options}.
+If any processing is to be done, Warp can accept one file as input.
+As in all Gnuastro programs, when an output is not explicitly set with the 
@option{--output} option, the output filename will be set automatically based 
on the operation, see @ref{Automatic output}.
+For the full list of general options to all Gnuastro programs (including 
Warp), please see @ref{Common options}.
 
-To be the most accurate, the input image will be read as a 64-bit double
-precision floating point dataset and all internal processing is done in
-this format (including the raw output type). You can use the common
-@option{--type} option to write the output in any type you want, see
-@ref{Numeric data types}.
+For the best accuracy, the input image will be read as a 64-bit double precision floating point dataset and all internal processing is done in this format (including the raw output type).
+You can use the common @option{--type} option to write the output in any type 
you want, see @ref{Numeric data types}.
 
-Warps must be specified as command-line options, either as (possibly
-multiple) modular warpings (for example @option{--rotate}, or
-@option{--scale}), or directly as a single raw matrix (with
-@option{--matrix}). If specified together, the latter (direct matrix) will
-take precedence and all the modular warpings will be ignored. Any number of
-modular warpings can be specified on the command-line and configuration
-files. If more than one modular warping is given, all will be merged to
-create one warping matrix. As described in @ref{Merging multiple warpings},
-matrix multiplication is not commutative, so the order of specifying the
-modular warpings on the command-line, and/or configuration files makes a
-difference (see @ref{Configuration file precedence}). The full list of
-modular warpings and the other options particular to Warp are described
-below.
+Warps must be specified as command-line options, either as (possibly multiple) 
modular warpings (for example @option{--rotate}, or @option{--scale}), or 
directly as a single raw matrix (with @option{--matrix}).
+If specified together, the latter (direct matrix) will take precedence and all 
the modular warpings will be ignored.
+Any number of modular warpings can be specified on the command-line and 
configuration files.
+If more than one modular warping is given, all will be merged to create one 
warping matrix.
+As described in @ref{Merging multiple warpings}, matrix multiplication is not 
commutative, so the order of specifying the modular warpings on the 
command-line, and/or configuration files makes a difference (see 
@ref{Configuration file precedence}).
+The full list of modular warpings and the other options particular to Warp are 
described below.
 
-The values to the warping options (modular warpings as well as
-@option{--matrix}), are a sequence of at least one number. Each number in
-this sequence is separated from the next by a comma (@key{,}). Each number
-can also be written as a single fraction (with a forward-slash @key{/}
-between the numerator and denominator). Space and Tab characters are
-permitted between any two numbers, just don't forget to quote the whole
-value. Otherwise, the value will not be fully passed onto the option. See
-the examples above as a demonstration.
+The values to the warping options (modular warpings as well as @option{--matrix}) are a sequence of at least one number.
+Each number in this sequence is separated from the next by a comma (@key{,}).
+Each number can also be written as a single fraction (with a forward-slash 
@key{/} between the numerator and denominator).
+Space and Tab characters are permitted between any two numbers; just don't forget to quote the whole value.
+Otherwise, the value will not be fully passed onto the option.
+See the examples above as a demonstration.
 
 @cindex FITS standard
-Based on the FITS standard, integer values are assigned to the center of a
-pixel and the coordinate [1.0, 1.0] is the center of the first pixel
-(bottom left of the image when viewed in SAO ds9). So the coordinate center
-[0.0, 0.0] is half a pixel away (in each axis) from the bottom left vertex
-of the first pixel. The resampling that is done in Warp (see
-@ref{Resampling}) is done on the coordinate axes and thus directly
-depends on the coordinate center. In some situations this if fine, for
-example when rotating/aligning a real image, all the edge pixels will be
-similarly affected. But in other situations (for example when scaling an
-over-sampled mock image to its intended resolution, this is not desired:
-you want the center of the coordinates to be on the corner of the pixel. In
-such cases, you can use the @option{--centeroncorner} option which will
-shift the center by @mymath{0.5} before the main warp, then shift it back
-by @mymath{-0.5} after the main warp, see below.
+Based on the FITS standard, integer values are assigned to the center of a 
pixel and the coordinate [1.0, 1.0] is the center of the first pixel (bottom 
left of the image when viewed in SAO ds9).
+So the coordinate center [0.0, 0.0] is half a pixel away (in each axis) from 
the bottom left vertex of the first pixel.
+The resampling that is done in Warp (see @ref{Resampling}) is done on the 
coordinate axes and thus directly depends on the coordinate center.
+In some situations this is fine; for example when rotating/aligning a real image, all the edge pixels will be similarly affected.
+But in other situations (for example when scaling an over-sampled mock image to its intended resolution), this is not desired: you want the center of the coordinates to be on the corner of the pixel.
+In such cases, you can use the @option{--centeroncorner} option which will 
shift the center by @mymath{0.5} before the main warp, then shift it back by 
@mymath{-0.5} after the main warp, see below.
 
 
 @table @option
 
 @item -a
 @itemx --align
-Align the image and celestial (WCS) axes given in the input. After it, the
-vertical image direction (when viewed in SAO ds9) corresponds to the
-declination and the horizontal axis is the inverse of the Right Ascension
-(RA). The inverse of the RA is chosen so the image can correspond to what
-you would actually see on the sky and is common in most survey
-images.
+Align the image and celestial (WCS) axes given in the input.
+After it, the vertical image direction (when viewed in SAO ds9) corresponds to the declination and the horizontal axis is the inverse of the Right Ascension (RA).
+The inverse of the RA is chosen so the image can correspond to what you would 
actually see on the sky and is common in most survey images.
 
-Align is internally treated just like a rotation (@option{--rotation}), but
-uses the input image's WCS to find the rotation angle. Thus, if you have
-rotated the image before calling @option{--align}, you might get unexpected
-results (because the rotation is defined on the original WCS).
+Align is internally treated just like a rotation (@option{--rotation}), but 
uses the input image's WCS to find the rotation angle.
+Thus, if you have rotated the image before calling @option{--align}, you might 
get unexpected results (because the rotation is defined on the original WCS).
 
 @item -r FLT
 @itemx --rotate=FLT
-Rotate the input image by the given angle in degrees: @mymath{\theta} in
-@ref{Warping basics}. Note that commonly, the WCS structure of the image is
-set such that the RA is the inverse of the image horizontal axis which
-increases towards the right in the FITS standard and as viewed by SAO
-ds9. So the default center for rotation is on the right of the image. If
-you want to rotate about other points, you have to translate the warping
-center first (with @option{--translate}) then apply your rotation and then
-return the center back to the original position (with another call to
-@option{--translate}, see @ref{Merging multiple warpings}.
+Rotate the input image by the given angle in degrees: @mymath{\theta} in 
@ref{Warping basics}.
+Note that commonly, the WCS structure of the image is set such that the RA is 
the inverse of the image horizontal axis which increases towards the right in 
the FITS standard and as viewed by SAO ds9.
+So the default center for rotation is on the right of the image.
+If you want to rotate about other points, you have to translate the warping center first (with @option{--translate}), then apply your rotation, and then return the center back to the original position (with another call to @option{--translate}); see @ref{Merging multiple warpings}.
 
 @item -s FLT[,FLT]
 @itemx --scale=FLT[,FLT]
-Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in
-@ref{Warping basics}. If only one value is given, then both image axes will
-be scaled with the given value. When two values are given (separated by a
-comma), the first will be used to scale the first axis and the second will
-be used for the second axis. If you only need to scale one axis, use
-@option{1} for the axis you don't need to scale. The value(s) can also be
-written (on the command-line or in configuration files) as a fraction.
+Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in 
@ref{Warping basics}.
+If only one value is given, then both image axes will be scaled with the given 
value.
+When two values are given (separated by a comma), the first will be used to 
scale the first axis and the second will be used for the second axis.
+If you only need to scale one axis, use @option{1} for the axis you don't need 
to scale.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -f FLT[,FLT]
 @itemx --flip=FLT[,FLT]
-Flip the input image around the given axis(s). If only one value is given,
-then both image axes are flipped. When two values are given (separated by a
-comma), you can choose which axis to flip over. @option{--flip} only takes
-values @code{0} (for no flip), or @code{1} (for a flip). Hence, if you want
-to flip by the second axis only, use @option{--flip=0,1}.
+Flip the input image around the given axis(es).
+If only one value is given, then both image axes are flipped.
+When two values are given (separated by a comma), you can choose which axis to flip over.
+@option{--flip} only takes values @code{0} (for no flip), or @code{1} (for a 
flip).
+Hence, if you want to flip by the second axis only, use @option{--flip=0,1}.
 
 @item -e FLT[,FLT]
 @itemx --shear=FLT[,FLT]
-Shear the input image by the given value(s): @mymath{A} and @mymath{B} in
-@ref{Warping basics}. If only one value is given, then both image axes will
-be sheared with the given value. When two values are given (separated by a
-comma), the first will be used to shear the first axis and the second will
-be used for the second axis. If you only need to shear along one axis, use
-@option{0} for the axis that must be untouched. The value(s) can also be
-written (on the command-line or in configuration files) as a fraction.
+Shear the input image by the given value(s): @mymath{A} and @mymath{B} in 
@ref{Warping basics}.
+If only one value is given, then both image axes will be sheared with the 
given value.
+When two values are given (separated by a comma), the first will be used to 
shear the first axis and the second will be used for the second axis.
+If you only need to shear along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -t FLT[,FLT]
 @itemx --translate=FLT[,FLT]
-Translate (move the center of coordinates) the input image by the given
-value(s): @mymath{c} and @mymath{f} in @ref{Warping basics}. If only one
-value is given, then both image axes will be translated by the given
-value. When two values are given (separated by a comma), the first will be
-used to translate the first axis and the second will be used for the second
-axis. If you only need to translate along one axis, use @option{0} for the
-axis that must be untouched. The value(s) can also be written (on the
-command-line or in configuration files) as a fraction.
+Translate (move the center of coordinates) the input image by the given 
value(s): @mymath{c} and @mymath{f} in @ref{Warping basics}.
+If only one value is given, then both image axes will be translated by the 
given value.
+When two values are given (separated by a comma), the first will be used to 
translate the first axis and the second will be used for the second axis.
+If you only need to translate along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -p FLT[,FLT]
 @itemx --project=FLT[,FLT]
-Apply a projection to the input image by the given values(s): @mymath{g}
-and @mymath{h} in @ref{Warping basics}. If only one value is given, then
-projection will apply to both axes with the given value. When two values
-are given (separated by a comma), the first will be used to project the
-first axis and the second will be used for the second axis. If you only
-need to project along one axis, use @option{0} for the axis that must be
-untouched. The value(s) can also be written (on the command-line or in
-configuration files) as a fraction.
+Apply a projection to the input image by the given value(s): @mymath{g} and @mymath{h} in @ref{Warping basics}.
+If only one value is given, then projection will apply to both axes with the 
given value.
+When two values are given (separated by a comma), the first will be used to 
project the first axis and the second will be used for the second axis.
+If you only need to project along one axis, use @option{0} for the axis that 
must be untouched.
+The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
 
 @item -m STR
 @itemx --matrix=STR
-The warp/transformation matrix. All the elements in this matrix must be
-separated by comas(@key{,}) characters and as described above, you can also
-use fractions (a forward-slash between two numbers). The transformation
-matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 numbers)
-array. In the former case (if a 2 by 2 matrix is given), then it is put
-into a 3 by 3 matrix (see @ref{Warping basics}).
+The warp/transformation matrix.
+All the elements in this matrix must be separated by comma (@key{,}) characters and, as described above, you can also use fractions (a forward-slash between two numbers).
+The transformation matrix can be either a 2 by 2 (4 numbers), or a 3 by 3 (9 
numbers) array.
+In the former case (if a 2 by 2 matrix is given), then it is put into a 3 by 3 
matrix (see @ref{Warping basics}).
 
 @cindex NaN
-The determinant of the matrix has to be non-zero and it must not
-contain any non-number values (for example infinities or NaNs). The
-elements of the matrix have to be written row by row. So for the
-general Homography matrix of @ref{Warping basics}, it should be called
-with @command{--matrix=a,b,c,d,e,f,g,h,1}.
-
-The raw matrix takes precedence over all the modular warping options listed
-above, so if it is called with any number of modular warps, the latter are
-ignored.
+The determinant of the matrix has to be non-zero and it must not contain any 
non-number values (for example infinities or NaNs).
+The elements of the matrix have to be written row by row.
+So for the general Homography matrix of @ref{Warping basics}, it should be 
called with @command{--matrix=a,b,c,d,e,f,g,h,1}.
+
+The raw matrix takes precedence over all the modular warping options listed 
above, so if it is called with any number of modular warps, the latter are 
ignored.
 
 @item -c
@itemx --centeroncorner
-Put the center of coordinates on the corner of the first (bottom-left when
-viewed in SAO ds9) pixel. This option is applied after the final warping
-matrix has been finalized: either through modular warpings or the raw
-matrix. See the explanation above for coordinates in the FITS standard to
-better understand this option and when it should be used.
+Put the center of coordinates on the corner of the first (bottom-left when 
viewed in SAO ds9) pixel.
+This option is applied after the final warping matrix has been finalized: 
either through modular warpings or the raw matrix.
+See the explanation above for coordinates in the FITS standard to better 
understand this option and when it should be used.
 
 @item --hstartwcs=INT
-Specify the first header keyword number (line) that should be used to read
-the WCS information, see the full explanation in @ref{Invoking astcrop}.
+Specify the first header keyword number (line) that should be used to read the 
WCS information, see the full explanation in @ref{Invoking astcrop}.
 
 @item --hendwcs=INT
-Specify the last header keyword number (line) that should be used to read
-the WCS information, see the full explanation in @ref{Invoking astcrop}.
+Specify the last header keyword number (line) that should be used to read the 
WCS information, see the full explanation in @ref{Invoking astcrop}.
 
 @item -k
 @itemx --keepwcs
 @cindex WCSLIB
 @cindex World Coordinate System
-Do not correct the WCS information of the input image and save it untouched
-to the output image. By default the WCS (World Coordinate System)
-information of the input image is going to be corrected in the output image
-so the objects in the image are at the same WCS coordinates. But in some
-cases it might be useful to keep it unchanged (for example to correct
-alignments).
+Do not correct the WCS information of the input image and save it untouched to 
the output image.
+By default the WCS (World Coordinate System) information of the input image is 
going to be corrected in the output image so the objects in the image are at 
the same WCS coordinates.
+But in some cases it might be useful to keep it unchanged (for example to 
correct alignments).
 
 @item -C FLT
 @itemx --coveredfrac=FLT
-Depending on the warp, the output pixels that cover pixels on the edge of
-the input image, or blank pixels in the input image, are not going to be
-fully covered by input data. With this option, you can specify the
-acceptable covered fraction of such pixels (any value between 0 and 1). If
-you only want output pixels that are fully covered by the input image area
-(and are not blank), then you can set
-@option{--coveredfrac=1}. Alternatively, a value of @code{0} will keep
-output pixels that are even infinitesimally covered by the input(so the sum
-of the pixels in the input and output images will be the same).
+Depending on the warp, the output pixels that cover pixels on the edge of the 
input image, or blank pixels in the input image, are not going to be fully 
covered by input data.
+With this option, you can specify the acceptable covered fraction of such 
pixels (any value between 0 and 1).
+If you only want output pixels that are fully covered by the input image area 
(and are not blank), then you can set @option{--coveredfrac=1}.
+Alternatively, a value of @code{0} will keep output pixels that are even infinitesimally covered by the input (so the sum of the pixels in the input and output images will be the same).
 
 @end table
 


