[gnuastro-commits] master 9b723999: Book: copy-edits and typo corrections


From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 9b723999: Book: copy-edits and typo corrections
Date: Mon, 3 Oct 2022 17:27:11 -0400 (EDT)

branch: master
commit 9b723999736134c75b6fcd3fb84ddc064375b605
Author: Marjan Akbari <mrjakbari@gmail.com>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Book: copy-edits and typo corrections
    
    Until now there were several typos and minor grammatical errors in the text
    of the general tutorial.
    
    With this commit, they have been fixed.
---
 doc/gnuastro.texi | 1523 +++++++++++++++++++++++++++--------------------------
 1 file changed, 762 insertions(+), 761 deletions(-)

diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 467711c6..152f0ee6 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -214,7 +214,7 @@ And I believe that only a revival of interest in these 
riddles can save the scie
 
 @ifhtml
 To navigate easily in this web page, you can use the @code{Next}, 
@code{Previous}, @code{Up} and @code{Contents} links in the top and bottom of 
each page.
-@code{Next} and @code{Previous} will take you to the next or previous topic in 
the same level, for example from chapter 1 to chapter 2 or vice versa.
+@code{Next} and @code{Previous} will take you to the next or previous topic in 
the same level, for example, from chapter 1 to chapter 2 or vice versa.
 To go to the sections or subsections, you have to click on the menu entries that are there whenever a sub-component to a title is present.
 @end ifhtml
 
@@ -284,7 +284,7 @@ General program usage tutorial
 * Building custom programs with the library::  Easy way to build new programs.
 * Option management and configuration files::  Dealing with options and 
configuring them.
 * Warping to a new pixel grid::  Transforming/warping the dataset.
-* NoiseChisel and Multiextension FITS files::  Running NoiseChisel and having 
multiple HDUs.
+* NoiseChisel and Multi-Extension FITS files::  Running NoiseChisel and having 
multiple HDUs.
 * NoiseChisel optimization for detection::  Check NoiseChisel's operation and 
improve it.
 * NoiseChisel optimization for storage::  Dramatically decrease output's 
volume.
 * Segmentation and making a catalog::  Finding true peaks and creating a 
catalog.
@@ -505,17 +505,17 @@ Arithmetic
 
 Arithmetic operators
 
-* Basic mathematical operators::  For example +, -, /, log, pow, and etc.
+* Basic mathematical operators::  For example, +, -, /, log, pow, etc.
 * Trigonometric and hyperbolic operators::  sin, cos, atan, asinh, and etc
 * Constants::                   Physical and Mathematical constants.
 * Unit conversion operators::   Various unit conversions necessary.
-* Statistical operators::       Statistics of a single dataset (for example 
mean).
+* Statistical operators::       Statistics of a single dataset (for example, 
mean).
 * Stacking operators::          Coadding or combining multiple datasets into 
one.
 * Filtering operators::         Smoothing a dataset through mixing pixel with 
neighbors.
 * Interpolation operators::     Giving blank pixels a value.
 * Dimensionality changing operators::  Collapse or expand a dataset.
 * Conditional operators::       Select certain pixels within the dataset.
-* Mathematical morphology operators::  Work on binary images, for example 
erode.
+* Mathematical morphology operators::  Work on binary images, for example, 
erode.
 * Bitwise operators::           Work on bits within one pixel.
 * Numerical type conversion operators::  Convert the numeric datatype of a 
dataset.
 * Random number generators::    Random numbers can be used to add noise for 
example.
@@ -906,7 +906,7 @@ Gnuastro is written to comply fully with the GNU coding 
standards so it integrat
 This also enables astronomers to expect a fully familiar experience in the 
source code, building, installing and command-line user interaction that they 
have seen in all the other GNU software that they use.
 The official and always up to date version of this book (or manual) is freely 
available under @ref{GNU Free Doc. License} in various formats (PDF, HTML, 
plain text, info, and as its Texinfo source) at 
@url{http://www.gnu.org/software/gnuastro/manual/}.
 
-For users who are new to the GNU/Linux environment, unless otherwise specified 
most of the topics in @ref{Installation} and @ref{Common program behavior} are 
common to all GNU software, for example installation, managing command-line 
options or getting help (also see @ref{New to GNU/Linux?}).
+For users who are new to the GNU/Linux environment, unless otherwise specified 
most of the topics in @ref{Installation} and @ref{Common program behavior} are 
common to all GNU software, for example, installation, managing command-line 
options or getting help (also see @ref{New to GNU/Linux?}).
 So if you are new to this empowering environment, we encourage you to go 
through these chapters carefully.
 They can be a starting point from which you can continue to learn more from 
each program's own manual and fully benefit from and enjoy this wonderful 
environment.
 Gnuastro also comes with a large set of libraries, so you can write your own 
programs using Gnuastro's building blocks, see @ref{Review of library 
fundamentals} for an introduction.
@@ -985,7 +985,7 @@ $ sudo make install
 @c The last command is to enable Gnuastro's custom TAB completion in Bash.
 @c For more on this useful feature, see @ref{Shell TAB completion}).
 
-For each program there is an `Invoke ProgramName' sub-section in this book 
which explains how the programs should be run on the command-line (for example 
@ref{Invoking asttable}).
+For each program there is an `Invoke ProgramName' sub-section in this book 
which explains how the programs should be run on the command-line (for example, 
see @ref{Invoking asttable}).
 
 In @ref{Tutorials}, we have prepared some complete tutorials with common 
Gnuastro usage scenarios in astronomical research.
 They even contain links to download the necessary data, and thoroughly 
describe every step of the process (the science, statistics and optimal usage 
of the command-line).
@@ -1033,7 +1033,7 @@ This kind of subjective experience is prone to serious 
misunderstandings about t
 This attitude is further encouraged through non-free 
software@footnote{@url{https://www.gnu.org/philosophy/free-sw.html}}, poorly 
written (or non-existent) scientific software manuals, and non-reproducible 
papers@footnote{Where the authors omit many of the analysis/processing 
``details'' from the paper by arguing that they would make the paper too 
long/unreadable.
 However, software engineers have been dealing with such issues for a long time.
 There are thus software management solutions that allow us to supplement 
papers with all the details necessary to exactly reproduce the result.
-For example see Akhlaghi et al. (2021, 
@url{https://arxiv.org/abs/2006.03018,arXiv:2006.03018}).}.
+For example, see Akhlaghi et al. (2021, 
@url{https://arxiv.org/abs/2006.03018,arXiv:2006.03018}).}.
 This approach to scientific software and methods only helps in producing 
dogmas and an ``@emph{obscurantist faith in the expert's special skill, and in 
his personal knowledge and authority}''@footnote{Karl Popper. The logic of 
scientific discovery. 1959.
 Larger quote is given at the start of the PDF (for print) version of this 
book.}.
 
@@ -1046,7 +1046,7 @@ Choose the latter, and it could be the last real choice 
you get to make.
 @end quotation
 
 It is obviously impractical for any one human being to gain the intricate 
knowledge explained above for every step of an analysis.
-On the other hand, scientific data can be large and numerous, for example 
images produced by telescopes in astronomy.
+On the other hand, scientific data can be large and numerous, for example, 
images produced by telescopes in astronomy.
 This requires efficient algorithms.
 To make things worse, natural scientists have generally not been trained in 
the advanced software techniques, paradigms and architecture that are taught in 
computer science or engineering courses and thus used in most software.
 The GNU Astronomy Utilities are an effort to tackle this issue.
@@ -1200,8 +1200,8 @@ The concept behind it was created after several design 
iterations with Mohammad
 @cindex Program names
 Gnuastro is a package of independent programs and a collection of libraries, 
here we are mainly concerned with the programs.
 Each program has an official name which consists of one or two words, 
describing what they do.
-The latter are printed with no space, for example NoiseChisel or Crop.
-On the command-line, you can run them with their executable names which start 
with an @file{ast} and might be an abbreviation of the official name, for 
example @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
+The latter are printed with no space, for example, NoiseChisel or Crop.
+On the command-line, you can run them with their executable names which start 
with an @file{ast} and might be an abbreviation of the official name, for 
example, @file{astnoisechisel} or @file{astcrop}, see @ref{Executable names}.
 
 @pindex ProgramName
 @pindex @file{astprogname}
@@ -1222,7 +1222,7 @@ That list also contains the executable names and version 
numbers along with a on
 @cindex Mailing list: info-gnuastro
 Gnuastro can have two formats of version numbers, for official and unofficial 
releases.
 Official Gnuastro releases are announced on the @command{info-gnuastro} 
mailing list, they have a version control tag in Gnuastro's development 
history, and their version numbers are formatted like ``@file{A.B}''.
-@file{A} is a major version number, marking a significant planned achievement 
(for example see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor 
version number, see below for more on the distinction.
+@file{A} is a major version number, marking a significant planned achievement 
(for example, see @ref{GNU Astronomy Utilities 1.0}), while @file{B} is a minor 
version number, see below for more on the distinction.
 Note that the numbers are not decimals, so version 2.34 is much more recent 
than version 2.5, which is not equal to 2.50.
 
 Gnuastro also allows a unique version number for unofficial releases.
@@ -1242,7 +1242,7 @@ Therefore, the unofficial version number 
`@code{3.92.8-29c8}', corresponds to th
 The unofficial version number is sort-able (unlike the raw hash) and as shown 
above is descriptive of the state of the unofficial release.
 Of course an official release is preferred for publication (since its tarballs 
are easily available and it has gone through more tests, making it more 
stable), so if an official release is announced prior to your publication's 
final review, please consider updating to the official release.
 
-The major version number is set by a major goal which is defined by the 
developers and user community before hand, for example see @ref{GNU Astronomy 
Utilities 1.0}.
+The major version number is set by a major goal which is defined by the developers and user community beforehand, for example, see @ref{GNU Astronomy Utilities 1.0}.
 The incremental work done in minor releases are commonly small steps in 
achieving the major goal.
 Therefore, there is no limit on the number of minor releases and the 
difference between the (hypothetical) versions 2.927 and 3.0 can be a small 
(negligible to the user) improvement that finalizes the defined goals.
 
@@ -1266,7 +1266,7 @@ This will simultaneously improve performance and 
transparency.
 Shell scripting (or Makefiles) are also basic constructs that are easy to 
learn and readily available as part of the Unix-like operating systems.
 If there is no program to do a desired step, Gnuastro's libraries can be used 
to build specific programs.
 
-The main factor is that all observatories or projects can freely contribute to 
Gnuastro and all simultaneously benefit from it (since it does not belong to 
any particular one of them), much like how for-profit organizations (for 
example RedHat, or Intel and many others) are major contributors to free and 
open source software for their shared benefit.
+The main factor is that all observatories or projects can freely contribute to Gnuastro and all simultaneously benefit from it (since it does not belong to any particular one of them), much like how for-profit organizations (for example, RedHat, Intel, and many others) are major contributors to free and open source software for their shared benefit.
 Gnuastro's copyright has been fully awarded to GNU, so it does not belong to 
any particular astronomer or astronomical facility or project.
 
 
@@ -1317,7 +1317,7 @@ Recorded talk: 
@url{https://peertube.stream/w/ddeSSm33R1eFWKJVqpcthN} (first 20
 In short, the Linux kernel@footnote{In Unix-like operating systems, the kernel 
connects software and hardware worlds.} is built using the GNU C library 
(glibc) and GNU compiler collection (gcc).
 The Linux kernel software alone is just a means for other software to access 
the hardware resources, it is useless alone!
 A normal astronomer (or scientist) will never interact with the kernel 
directly!
-For example the command-line environment that you interact with is usually GNU 
Bash.
+For example, the command-line environment that you interact with is usually GNU Bash.
 It is GNU Bash that then talks to the kernel.
 
 To better clarify, let's use this analogy inspired from one of the links 
above@footnote{https://www.gnu.org/gnu/gnu-users-never-heard-of-gnu.html}: 
saying that you are ``running Linux'' is like saying you are ``driving your 
engine''.
@@ -1331,8 +1331,8 @@ For the Linux kernel, both the lower-level and 
higher-level tools are GNU.
 In other words, ``the whole system is basically GNU with Linux loaded''.
 
 You can replace the Linux kernel and still have the GNU shell and higher-level 
utilities.
-For example using the ``Windows Subsystem for Linux'', you can use almost all 
GNU tools without the original Linux kernel, but using the host Windows 
operating system, as in @url{https://ubuntu.com/wsl}.
-Alternatively, you can build a fully functional GNU-based working environment 
on a macOS or BSD-based operating system (using the host's kernel and C 
compiler), for example through projects like Maneage (see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. 2021}, and its Appendix 
C with all the GNU software tools that is exactly reproducible on a macOS also).
+For example, using the ``Windows Subsystem for Linux'', you can use almost all GNU tools without the original Linux kernel, but using the host Windows operating system, as in @url{https://ubuntu.com/wsl}.
+Alternatively, you can build a fully functional GNU-based working environment on a macOS or BSD-based operating system (using the host's kernel and C compiler), for example, through projects like Maneage (see @url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. 2021}, and its Appendix C, which lists all the GNU software tools and is exactly reproducible on macOS as well).
 
 Therefore to acknowledge GNU's instrumental role in the creation and usage of 
the Linux kernel and the operating systems that use it, we should call these 
operating systems ``GNU/Linux''.
 
@@ -1357,7 +1357,7 @@ Here we hope to convince you of the unique benefits of 
this interface which can
 Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based 
operating systems now have an advanced and useful GUI.
 Since the GUI was created long after the command-line, some wrongly consider 
the command line to be obsolete.
 Both interfaces are useful for different tasks.
-For example you cannot view an image, video, pdf document or web page on the 
command-line.
+For example, you cannot view an image, video, PDF document or web page on the command-line.
 On the other hand you cannot reproduce your results easily in the GUI.
 Therefore they should not be regarded as rivals but as complementary user 
interfaces, here we will outline how the CLI can be useful in scientific 
programs.
 
@@ -1377,7 +1377,7 @@ Unless the designers of a particular program decided to 
design such a system for
 @cindex GNU Bash
 @cindex Reproducible results
 @cindex CLI: repeating operations
-On the command-line, you can run any series of actions which can come from 
various CLI capable programs you have decided yourself in any possible 
permutation with one command@footnote{By writing a shell script and running it, 
for example see the tutorials in @ref{Tutorials}.}.
+On the command-line, you can run any series of actions which can come from 
various CLI capable programs you have decided yourself in any possible 
permutation with one command@footnote{By writing a shell script and running it, 
for example, see the tutorials in @ref{Tutorials}.}.
 This allows for much more creativity and exact reproducibility that is not 
possible to a GUI user.
 For technical and scientific operations, where the same operation (using 
various programs) has to be done on a large set of data files, this is 
crucially important.
 It also allows exact reproducibility which is a foundation principle for 
scientific results.
@@ -1464,7 +1464,7 @@ This will significantly help us in managing and resolving 
them sooner.
 @item Reproducible bug reports
 If we cannot exactly reproduce your bug, then it is very hard to resolve it.
 So please send us a Minimal working 
example@footnote{@url{http://en.wikipedia.org/wiki/Minimal_Working_Example}} 
along with the description.
-For example in running a program, please send us the full command-line text 
and the output with the @option{-P} option, see @ref{Operating mode options}.
+For example, when running a program, please send us the full command-line text and the output with the @option{-P} option, see @ref{Operating mode options}.
 If it is caused only for a certain input, also send us that input file.
 In case the input FITS is large, please use Crop to only crop the problematic 
section and make it as small as possible so it can easily be uploaded and 
downloaded and not waste the archive's storage, see @ref{Crop}.
 @end table
@@ -1807,8 +1807,8 @@ But they need to be as realistic as possible, so this 
tutorial is dedicated to t
 
 In these tutorials, we have intentionally avoided too many cross-references to make it easier to read.
 For more information about a particular program, you can visit the section 
with the same name as the program in this book.
-Each program section in the subsequent chapters starts by explaining the 
general concepts behind what it does, for example see @ref{Convolve}.
-If you only want practical information on running a program, for example its 
options/configuration, input(s) and output(s), please consult the subsection 
titled ``Invoking ProgramName'', for example see @ref{Invoking astnoisechisel}.
+Each program section in the subsequent chapters starts by explaining the 
general concepts behind what it does, for example, see @ref{Convolve}.
+If you only want practical information on running a program, for example, its 
options/configuration, input(s) and output(s), please consult the subsection 
titled ``Invoking ProgramName'', for example, see @ref{Invoking astnoisechisel}.
 For an explanation of the conventions we use in the example codes through the 
book, please see @ref{Conventions}.
 
 @menu
@@ -1861,7 +1861,7 @@ This will help simulate future situations when you are 
processing your own datas
 * Building custom programs with the library::  Easy way to build new programs.
 * Option management and configuration files::  Dealing with options and 
configuring them.
 * Warping to a new pixel grid::  Transforming/warping the dataset.
-* NoiseChisel and Multiextension FITS files::  Running NoiseChisel and having 
multiple HDUs.
+* NoiseChisel and Multi-Extension FITS files::  Running NoiseChisel and having 
multiple HDUs.
 * NoiseChisel optimization for detection::  Check NoiseChisel's operation and 
improve it.
 * NoiseChisel optimization for storage::  Dramatically decrease output's 
volume.
 * Segmentation and making a catalog::  Finding true peaks and creating a 
catalog.
@@ -1920,7 +1920,7 @@ You can immediately go to the next link in the page with 
the @key{<TAB>} key and
 Try the commands above again, but this time also use @key{<TAB>} to go to the 
links and press @key{<ENTER>} on them to go to the respective section of the 
book.
 Then follow a few more links and go deeper into the book.
 To return to the previous page, press @key{l} (small L).
-If you are searching for a specific phrase in the whole book (for example an 
option name), press @key{s} and type your search phrase and end it with an 
@key{<ENTER>}.
+If you are searching for a specific phrase in the whole book (for example, an 
option name), press @key{s} and type your search phrase and end it with an 
@key{<ENTER>}.
 
 You do not need to start from the top of the manual every time.
 For example, to get to @ref{Invoking astnoisechisel}, run the following 
command.
@@ -1955,7 +1955,7 @@ As a good scientist you need to feel comfortable to play 
with the features/optio
 On the other hand, our human memory is limited, so it is important to be able 
to easily access any part of this book fast and remember the option names, what 
they do and their acceptable values.
 
 If you just want the option names and a short description, calling the program 
with the @option{--help} option might also be a good solution like the first 
example below.
-If you know a few characters of the option name, you can feed the output to 
@command{grep} like the second or third example commands.
+If you know a few characters of the option name, you can feed the printed 
output to @command{grep} like the second or third example commands.
 
 @example
 $ astnoisechisel --help
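 # As a sketch of the grep filtering mentioned above (the search terms
 # here are only assumed examples):
 $ astnoisechisel --help | grep quant
 $ astnoisechisel --help | grep check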
@@ -1975,7 +1975,7 @@ $ cd gnuastro-tutorial
 @end example
 
 We will be using the near infra-red @url{http://www.stsci.edu/hst/wfc3, Wide 
Field Camera} dataset.
-If you already have them in another directory (for example @file{XDFDIR}, with 
the same FITS file names), you can set the @file{download} directory to be a 
symbolic link to @file{XDFDIR} with a command like this:
+If you already have them in another directory (for example, @file{XDFDIR}, 
with the same FITS file names), you can set the @file{download} directory to be 
a symbolic link to @file{XDFDIR} with a command like this:
 
 @example
 $ ln -s XDFDIR download
@@ -2030,20 +2030,20 @@ $ astscript-fits-view \
 @end example
 
 After running this command, you will see that the DS9 window fully covers the 
height of your monitor, it is showing the whole image, using a more clear 
color-map, and many more useful things.
-Infact, you see the DS9 command it used in your terminal@footnote{When 
comparing DS9's command-line options to Gnuastro's, you will notice how SAO DS9 
does not follow the GNU style of options where ``long'' and ``short'' options 
are preceded by @option{--} and @option{-} respectively (for example 
@option{--width} and @option{-w}, see @ref{Options}).}.
+In fact, you can see the DS9 command that was used in your terminal@footnote{When comparing DS9's command-line options to Gnuastro's, you will notice how SAO DS9 does not follow the GNU style of options where ``long'' and ``short'' options are preceded by @option{--} and @option{-} respectively (for example, @option{--width} and @option{-w}, see @ref{Options}).}.
 On GNU/Linux operating systems (like Ubuntu, Fedora and etc), you can also set 
your graphics user interface to use this script for opening FITS files when you 
click on them.
 For more, see the instructions in the checklist at the start of @ref{Invoking 
astscript-fits-view}.
 
 As you hover your mouse over the image, notice how the ``Value'' and 
positional fields on the top of the ds9 window get updated.
 The first thing you might notice is that when you hover the mouse over the 
regions with no data, they have a value of zero.
-The next thing might be that the dataset actually has two ``depth''s (see 
@ref{Quantifying measurement limits}).
+The next thing might be that the dataset has a shallower and a deeper component (see @ref{Quantifying measurement limits}).
 Recall that this is a combined/reduced image of many exposures, and the parts 
that have more exposures are deeper.
 In particular, the exposure time of the deep inner region is more than 4 times the exposure time of the outer (shallower) parts.
 
 To simplify the analysis in this tutorial, we will only be working on the deep 
field, so let's crop it out of the full dataset.
 Fortunately the XDF survey web page (above) contains the vertices of the deep 
flat WFC3-IR 
field@footnote{@url{https://archive.stsci.edu/prepds/xdf/#dataproducts}}.
 With Gnuastro's Crop program@footnote{To learn more about the crop program see 
@ref{Crop}.}, you can use those vertices to cutout this deep region from the 
larger image.
-But before that, to keep things organized, let's make a directory called 
@file{flat-ir} and keep the flat (single-depth) regions in that directory (with 
a `@file{xdf-}' suffix for a shorter and easier filename).
+But before that, to keep things organized, let's make a directory called 
@file{flat-ir} and keep the flat (single-depth) regions in that directory (with 
a `@file{xdf-}' prefix for a shorter and easier filename).
 
 @example
 $ mkdir flat-ir
@@ -2079,11 +2079,11 @@ However, do you remember that in the downloaded files, 
such regions had a value
 That is a big problem!
 Because zero is a number, and is thus meaningful, especially when you later want NoiseChisel to detect@footnote{As you will see below, unlike most other detection algorithms, NoiseChisel detects the objects from their faintest parts; it does not start with their high signal-to-noise ratio peaks.
 Since the Sky is already subtracted in many images and noise fluctuates around 
zero, zero is commonly higher than the initial threshold applied.
-Therefore not ignoring zero-valued pixels in this image, will cause them to 
part of the detections!} all the signal from the deep universe in this image.
+Therefore keeping zero-valued pixels in this image will cause them to be identified as part of the detections!} all the signal from the deep universe in this image.
 Generally, when you want to ignore some pixels in a dataset, and avoid 
higher-level ambiguities or complications, it is always best to give them blank 
values (not zero, or some other absurdly large or small number).
 Gnuastro has the Arithmetic program for such cases, and we will introduce it 
later in this tutorial.
 
-In the example above, the polygon vertices are in degrees, but you can also 
replace them with sexagesimal coordinates (for example using 
@code{03h32m44.9794} or @code{03:32:44.9794} instead of @code{53.187414} 
instead of the first RA and @code{-27d46m44.9472} or @code{-27:46:44.9472} 
instead of the first Dec).
+In the example above, the polygon vertices are in degrees, but you can also replace them with sexagesimal@footnote{@url{https://en.wikipedia.org/wiki/Sexagesimal}} coordinates (for example, using @code{03h32m44.9794} or @code{03:32:44.9794} instead of @code{53.187414} for the first RA, and @code{-27d46m44.9472} or @code{-27:46:44.9472} instead of the first Dec).
 To further simplify things, you can even define your polygon visually as a DS9 
``region'', save it as a ``region file'' and give that file to crop.
 But we need to continue, so if you are interested to learn more, see 
@ref{Crop}.
 
@@ -2130,7 +2130,7 @@ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet
 
 The second row is the coverage range along RA and Dec (compare with the 
outputs before using @option{--quiet}).
 We can thus simply subtract the second from the first column and multiply it 
with the difference of the fourth and third columns to calculate the image area.
-We Will also multiply each by 60 to have the area in arc-minutes squared.
+We will also multiply each by 60 to have the area in arc-minutes squared.
 
 @example
 astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
@@ -2138,7 +2138,7 @@ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
 @end example
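 
 @noindent
 As a minimal sketch of that calculation (assuming the second row of the
 @option{--quiet} output holds the RA minimum/maximum and Dec
 minimum/maximum, as described above), the values can be piped into AWK:
 
 @example
 $ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
           | awk 'NR==2 @{print ($2-$1)*60 * ($4-$3)*60@}'
 @end example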
 
 The returned value is @mymath{9.06711} arcmin@mymath{^2}.
-@strong{However, this method ignores the fact many of the image pixels are 
blank!}
+@strong{However, this method ignores the fact that many of the image pixels 
are blank!}
 In other words, the image does cover this area, but there is no data in more 
than half of the pixels.
 So let's calculate the area coverage over which we actually have data.
 
@@ -2161,7 +2161,7 @@ When @code{CDELT} keywords are not present, the @code{PC} 
matrix is assumed to c
 Note that not all FITS writers use the @code{CDELT} convention.
 So you might not find the @code{CDELT} keywords in the WCS meta data of some 
FITS files.
 However, all Gnuastro programs (which use the default FITS keyword writing 
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even 
if the input does not have it.
-If your dataset does not use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example Arithmetic) and the output will have 
the @code{CDELT} keyword.
+If your dataset does not use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example, Arithmetic) and the output will 
have the @code{CDELT} keyword.
 See Section 8 of the 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS 
standard} for more}.
 
 With the commands below, we will use @code{CDELT} (along with the number of 
non-blank pixels) to find the answer of our initial question: ``how much of the 
sky does this image cover?''.
@@ -2178,7 +2178,7 @@ Press the ``up'' key on your keyboard (possibly multiple 
times) to see your prev
 
 @cartouche
 @noindent
-@strong{Your locale does not use `.' as decimal separator:} on systems that do 
not use an English language environment, dates, numbers and etc can be printed 
in different formats (for example `0.5' can be written as `0,5': with a comma).
+@strong{Your locale does not use `.' as decimal separator:} on systems that do not use an English language environment, dates, numbers, etc. can be printed in different formats (for example, `0.5' can be written as `0,5': with a comma).
 With the @code{LC_NUMERIC} line at the start of the script below, we are 
ensuring a unified format in the output of @command{seq}.
 For more, please see @ref{Numeric locale}.
 @end cartouche
@@ -2307,7 +2307,7 @@ $ for z in 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0; do    
    \
 @noindent
 Fortunately, the shell has a useful tool/program to print a sequence of 
numbers that is nicely called @code{seq} (short for ``sequence'').
 You can use it instead of typing all the different redshifts in the loop above.
-For example the loop below will calculate and print the tangential coverage of 
this field across a larger range of redshifts (0.1 to 5) and with finer 
increments of 0.1.
+For example, the loop below will calculate and print the tangential coverage of this field across a larger range of redshifts (0.1 to 5) and with finer increments of 0.1.
 For more on the @code{LC_NUMERIC} command, see @ref{Numeric locale}.
 
 @example
@@ -2451,7 +2451,7 @@ Gnuastro's library and BuildProgram are created to make 
it easy for you to use t
 This gives you a high level of creativity, while also providing efficiency and 
robustness.
 Several other complete working examples (involving images and tables) of Gnuastro's libraries can be seen in @ref{Library demo programs}.
 
-But for this tutorial, let's stop discussing the libraries at this point in 
and get back to Gnuastro's already built programs which do not need any 
programming.
+But for this tutorial, let's stop discussing the libraries here and get back 
to Gnuastro's already built programs (which do not need C programming).
 But before continuing, let's clean up the files we do not need any more:
 
 @example
@@ -2462,7 +2462,7 @@ $ rm myprogram* z-vs-tandist*
 @node Option management and configuration files, Warping to a new pixel grid, 
Building custom programs with the library, General program usage tutorial
 @subsection Option management and configuration files
 In the previous section (@ref{Cosmological coverage and visualizing tables}), when you ran CosmicCalculator, you only specified the redshift with the @option{-z2} option.
-You did not specifying the cosmological parameters that are necessary for the 
calculations!
+You did not specify the cosmological parameters that are necessary for the 
calculations!
 Parameters like the Hubble constant (@mymath{H_0}), the matter density, etc.
 In spite of this, CosmicCalculator did its processing and printed the results.
 
@@ -2488,7 +2488,7 @@ $ astcosmiccal --help
 
 After running the command above, please scroll to the line where you ran this command and read through the output (it's the same format for all the programs).
 All options have a long format (starting with @code{--} and a multi-character 
name) and some have a short format (starting with @code{-} and a single 
character), for more see @ref{Options}.
-The options that expect a value have an @key{=} sign after their long version.
+Options that expect a value have an @key{=} sign after their long version.
 The format of their expected value is also shown as @code{FLT}, @code{INT} or 
@code{STR} for floating point numbers, integer numbers, and strings (filenames 
for example) respectively.
 
 You can see the values of all options that need one with the 
@option{--printparams} option (or its short format: @code{-P}).
@@ -2576,7 +2576,7 @@ Their file name (within @file{.gnuastro}) must also be 
the same as the program's
 So in the case of CosmicCalculator, the default configuration file in a given 
directory is @file{.gnuastro/astcosmiccal.conf}.
 
 Let's do this.
-We Will first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
+We will first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
 Finally, we will copy the custom configuration file there:
 
 @example
@@ -2596,7 +2596,7 @@ $ astcosmiccal -P       # The default cosmology is 
printed.
 
 To further simplify the process, you can use the @option{--setdirconf} option.
 If you are already in your desired working directory, calling this option with 
the others will automatically write the final values (along with descriptions) 
in @file{.gnuastro/astcosmiccal.conf}.
-For example try the commands below:
+For example, try the commands below:
 
 @example
 $ mkdir my-cosmology2
@@ -2613,7 +2613,7 @@ Only the directory and filename differ from the above, 
the rest of the process i
 Finally, there are also system-wide configuration files that can be used to 
define the option values for all users on a system.
 See @ref{Configuration file precedence} for a more detailed discussion.
 
-We Will stop the discussion on configuration files here, but you can always 
read about them in @ref{Configuration files}.
+We will stop the discussion on configuration files here, but you can always 
read about them in @ref{Configuration files}.
 Before continuing the tutorial, let's delete the two extra directories that we 
do not need any more:
 
 @example
@@ -2621,7 +2621,7 @@ $ rm -rf my-cosmology*
 @end example
 
 
-@node Warping to a new pixel grid, NoiseChisel and Multiextension FITS files, 
Option management and configuration files, General program usage tutorial
+@node Warping to a new pixel grid, NoiseChisel and Multi-Extension FITS files, 
Option management and configuration files, General program usage tutorial
 @subsection Warping to a new pixel grid
 We are now ready to start processing the downloaded images.
 The XDF datasets we are using here are already aligned to the same pixel grid.
@@ -2645,7 +2645,7 @@ $ astwarp xdf-f160w_rotated.fits --align
 @end example
 
 Warp can generally be used for many kinds of pixel grid manipulation 
(warping), not just rotations.
-For example the outputs of the commands below will have larger pixels 
respectively (new resolution being one quarter the original resolution), get 
shifted by 2.8 (by sub-pixel), get a shear of 2, and be tilted (projected).
+For example, the outputs of the commands below will have larger pixels respectively (new resolution being one quarter the original resolution), get shifted by 2.8 (by sub-pixel), get a shear of 2, and be tilted (projected).
 Run each of them and open the output file to see the effect; they will come in handy in the future.
 
 @example
@@ -2657,7 +2657,7 @@ $ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005
 
 @noindent
 If you need to do multiple warps, you can combine them in one call to Warp.
-For example to first rotate the image, then scale it, run this command:
+for example, to first rotate the image, then scale it, run this command:
 
 @example
 $ astwarp flat-ir/xdf-f160w.fits --rotate=20 --scale=0.25
@@ -2680,8 +2680,8 @@ $ rm *.fits
 @end example
 
 
-@node NoiseChisel and Multiextension FITS files, NoiseChisel optimization for 
detection, Warping to a new pixel grid, General program usage tutorial
-@subsection NoiseChisel and Multiextension FITS files
+@node NoiseChisel and Multi-Extension FITS files, NoiseChisel optimization for 
detection, Warping to a new pixel grid, General program usage tutorial
+@subsection NoiseChisel and Multi-Extension FITS files
 In the previous sections, we completed a review of the basics of Gnuastro's programs.
 We are now ready to do some more serious analysis on the downloaded images: 
extract the pixels containing signal from the image, find sub-structure of the 
extracted signal, do measurements over the extracted objects and analyze them 
(finding certain objects of interest in the image).
 
@@ -2765,9 +2765,9 @@ $ astscript-fits-view xdf-f160w_detected.fits
 
 A ``cube'' window opens along with DS9's main window.
 The buttons and horizontal scroll bar in this small new window can be used to 
navigate between the extensions.
-In this mode, all DS9's settings (for example zoom or color-bar) will be 
identical between the extensions.
+In this mode, all DS9's settings (for example, zoom or color-bar) will be 
identical between the extensions.
 Try zooming into one part and flipping through the extensions to see how the 
galaxies were detected along with the Sky and Sky standard deviation values for 
that region.
-Just have in mind that NoiseChisel's job is @emph{only} detection (separating 
signal from noise), We Will do segmentation on this result later to find the 
individual galaxies/peaks over the detected pixels.
+Just keep in mind that NoiseChisel's job is @emph{only} detection (separating signal from noise); we will do segmentation on this result later to find the individual galaxies/peaks over the detected pixels.
 
 The second extension of NoiseChisel's output (numbered 1, named 
@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
 The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary 
image with only two possible values for all pixels: 0 for noise and 1 for 
signal.
@@ -2786,9 +2786,9 @@ See @ref{HDU information and manipulation} for more.
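 
 For a quick look at the extensions without opening a viewer, you can
 also call Gnuastro's Fits program with no options; a minimal sketch:
 
 @example
 $ astfits xdf-f160w_detected.fits
 @end example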
 
 
 
-@node NoiseChisel optimization for detection, NoiseChisel optimization for 
storage, NoiseChisel and Multiextension FITS files, General program usage 
tutorial
+@node NoiseChisel optimization for detection, NoiseChisel optimization for 
storage, NoiseChisel and Multi-Extension FITS files, General program usage 
tutorial
 @subsection NoiseChisel optimization for detection
-In @ref{NoiseChisel and Multiextension FITS files}, we ran NoiseChisel and 
reviewed NoiseChisel's output format.
+In @ref{NoiseChisel and Multi-Extension FITS files}, we ran NoiseChisel and 
reviewed NoiseChisel's output format.
 Now that you have a better feeling for multi-extension FITS files, let's 
optimize NoiseChisel for this particular dataset.
 
 One good way to see if you have missed any signal (small galaxies, or the 
wings of brighter galaxies) is to mask all the detected pixels and inspect the 
noise pixels.
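 
 One way to do that masking is with Arithmetic's @code{where} operator;
 a minimal sketch, using shell variables for the two HDUs of
 NoiseChisel's output that were described in the previous section:
 
 @example
 $ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
 $ det="xdf-f160w_detected.fits -hDETECTIONS"
 $ astarithmetic $in $det nan where --output=mask-det.fits
 @end example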
@@ -2824,7 +2824,7 @@ Now you can set the ``Center'' coordinates of the region 
(@code{1650} and @code{
 Finally, when everything is ready, click on the ``Apply'' button and your 
region will go over your requested coordinates.
 You can zoom out (to see the whole image) and visually find it.} pixel 1650 
(X) and 1470 (Y).
 These types of long wiggly structures show that we have dug too deep into the 
noise, and are a signature of correlated noise.
-Correlated noise is created when we warp (for example rotate) individual 
exposures (that are each slightly offset compared to each other) into the same 
pixel grid before adding them into one deeper image.
+Correlated noise is created when we warp (for example, rotate) individual 
exposures (that are each slightly offset compared to each other) into the same 
pixel grid before adding them into one deeper image.
 During the warping, nearby pixels are mixed and the effect of this mixing on 
the noise (which is in every pixel) is called ``correlated noise''.
 Correlated noise is a form of convolution and it slightly smooths the image.
 
@@ -2866,7 +2866,7 @@ $ astfits xdf-f160w_detcheck.fits
 $ ds9 -mecube xdf-f160w_detcheck.fits -zscale -zoom to fit
 @end example
 
-In order to understand the parameters and their biases (especially as you are 
starting to use Gnuastro, or running it a new dataset), it is @emph{strongly} 
encouraged to play with the different parameters and use the respective check 
images to see which step is affected by your changes and how, for example see 
@ref{Detecting large extended targets}.
+In order to understand the parameters and their biases (especially as you are starting to use Gnuastro, or running it on a new dataset), you are @emph{strongly} encouraged to play with the different parameters and use the respective check images to see which step is affected by your changes and how, for example, see @ref{Detecting large extended targets}.
 
 @cindex FWHM
 Let's focus on one step: the @code{OPENED_AND_LABELED} extension shows the 
initial detection step of NoiseChisel.
@@ -2916,7 +2916,7 @@ Open the check image now and have a look, you can see how 
the pseudo-detections
 
 @cartouche
 @noindent
-@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on 
datasets with a new noise-pattern (for example going to a Radio astronomy 
image, or a shallow ground-based image), play with @code{--dthresh} until you 
get a maximal number of pseudo-detections: the total number of 
pseudo-detections is printed on the command-line when you run NoiseChisel, you 
do not even need to open a FITS viewer.
+@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on 
datasets with a new noise-pattern (for example, going to a Radio astronomy 
image, or a shallow ground-based image), play with @code{--dthresh} until you 
get a maximal number of pseudo-detections: the total number of 
pseudo-detections is printed on the command-line when you run NoiseChisel, you 
do not even need to open a FITS viewer.
 
 In this particular case, try @option{--dthresh=0.2} and you will see that the 
total printed number decreases to around 6700 (recall that with 
@option{--dthresh=0.1}, it was roughly 7100).
 So for this type of very deep HST images, we should set @option{--dthresh=0.1}.
@@ -2965,7 +2965,7 @@ The correlated noise is again visible in the 
signal-to-noise distribution of sky
 Do you see how skewed this distribution is?
 In an image with less correlated noise, this distribution would be much more 
symmetric.
 A small change in the quantile will translate into a big change in the S/N 
value.
-For example see the difference between the three 0.99, 0.95 and 0.90 quantiles 
with this command:
+For example, see the difference between the three 0.99, 0.95 and 0.90 quantiles with this command:
 
 @example
 $ aststatistics xdf-f160w_detsn.fits -hSKY_PSEUDODET_SN -c2      \
@@ -2987,7 +2987,7 @@ $ astarithmetic $in $det nan where --output=mask-det.fits
 @end example
 
 Overall it seems good, but if you play a little with the color-bar and look 
closer in the noise, you'll see a few very sharp, but faint, objects that have 
not been detected.
-For example the object around pixel (456, 1662).
+For example, the object around pixel (456, 1662).
 Despite its high valued pixels, this object was lost because erosion ignores 
the precise pixel values.
 Losing small/sharp objects like this only happens for under-sampled datasets 
like HST (where the pixel size is larger than the point spread function FWHM).
 So this will not happen on ground-based images.
@@ -3002,15 +3002,15 @@ $ astnoisechisel flat-ir/xdf-f160w.fits 
--kernel=kernel.fits     \
 @end example
 
 This seems to be fine and the object above is now detected.
-We Will stop the configuration of NoiseChisel here, but please feel free to 
keep looking into the data to see if you can improve it even more.
+We will stop editing the configuration of NoiseChisel here, but please feel 
free to keep looking into the data to see if you can improve it even more.
 
-Once you have found the proper customization for the type of images you will 
be using you do not need to change them any more.
+Once you have found the proper configuration for the type of images you will be using, you do not need to change it any more.
 The same configuration can be used for any dataset that has been similarly 
produced (and has a similar noise pattern).
 But entering all these options on every call to NoiseChisel is annoying and 
prone to bugs (mistakenly typing the wrong value for example).
 To simplify things, we will make a configuration file in a visible 
@file{config} directory.
 Then we will define the hidden @file{.gnuastro} directory (that all Gnuastro's 
programs will look into for configuration files) as a symbolic link to the 
@file{config} directory.
 Finally, we will write the finalized values of the options into NoiseChisel's 
standard configuration file within that directory.
-We Will also put the kernel in a separate directory to keep the top directory 
clean of any files we later need.
+We will also put the kernel in a separate directory, to keep the top directory clean of any files we may need later.
 
 @example
 $ mkdir kernel config
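 # A sketch of the remaining steps just described (the configuration
 # values here are assumptions based on the discussion above): move the
 # kernel, link `.gnuastro' to `config', and write the final option
 # values into NoiseChisel's standard configuration file.
 $ mv kernel.fits kernel/
 $ ln -s config .gnuastro
 $ astnoisechisel --kernel=kernel/kernel.fits --dthresh=0.1 -P \
                  > config/astnoisechisel.conf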
@@ -3036,7 +3036,7 @@ $ for f in f105w f125w f160w; do \
 @node NoiseChisel optimization for storage, Segmentation and making a catalog, 
NoiseChisel optimization for detection, General program usage tutorial
 @subsection NoiseChisel optimization for storage
 
-As we showed before (in @ref{NoiseChisel and Multiextension FITS files}), 
NoiseChisel's output is a multi-extension FITS file with several images the 
same size as the input.
+As we showed before (in @ref{NoiseChisel and Multi-Extension FITS files}), 
NoiseChisel's output is a multi-extension FITS file with several images the 
same size as the input.
 As the input datasets get larger this output can become hard to manage and 
waste a lot of storage space.
 Fortunately there is a solution to this problem (which is also useful for 
Segment's outputs).
 
@@ -3080,7 +3080,7 @@ We can get this wonderful level of compression because 
NoiseChisel's output is b
 Compression algorithms are highly optimized in such scenarios.
 
 You can open @file{nc-for-storage.fits.gz} directly in SAO DS9 or feed it to 
any of Gnuastro's programs without having to decompress it.
-Higher-level programs that take NoiseChisel's output (for example Segment or 
MakeCatalog) can also deal with this compressed image where the Sky and its 
Standard deviation are one pixel-per-tile.
+Higher-level programs that take NoiseChisel's output (for example, Segment or 
MakeCatalog) can also deal with this compressed image where the Sky and its 
Standard deviation are one pixel-per-tile.
 You just have to give the ``values'' image as a separate option, for more, see 
@ref{Segment} and @ref{MakeCatalog}.
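 
 For example, a minimal sketch of feeding the compressed file directly
 to a Gnuastro program (here, just listing its HDUs with the Fits
 program):
 
 @example
 $ astfits nc-for-storage.fits.gz
 @end example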
 
 Segment (the program we will introduce in the next section for identifying 
sub-structure), also has similar features to optimize its output for storage.
@@ -3107,7 +3107,7 @@ $ astsegment nc/xdf-f105w.fits -oseg/xdf-f105w.fits
 @end example
 
 Segment's operation is very much like NoiseChisel (in fact, prior to version 
0.6, it was part of NoiseChisel).
-For example the output is a multi-extension FITS file (previously discussed in 
@ref{NoiseChisel and Multiextension FITS files}), it has check images and uses 
the undetected regions as a reference (previously discussed in @ref{NoiseChisel 
optimization for detection}).
+For example, the output is a multi-extension FITS file (previously discussed in @ref{NoiseChisel and Multi-Extension FITS files}); it has check images and uses the undetected regions as a reference (previously discussed in @ref{NoiseChisel optimization for detection}).
 Please have a look at Segment's multi-extension output to get a good feeling 
of what it has done.
 Do not forget to flip through the extensions in the ``Cube'' window.
 
@@ -3139,6 +3139,7 @@ First of all, we need to know which measurement belongs 
to which object or clump
 We also want to measure (in this order) the Right Ascension (with 
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), 
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
 Furthermore, as mentioned above, we also want measurements on clumps, so we 
also need to call @option{--clumpscat}.
 The following command will make these measurements on Segment's F160W output 
and write them in a catalog for each object and clump in a FITS table.
+For more on the Zeropoint, see @ref{Brightness flux magnitude}.
 
 @example
 $ mkdir cat
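 # A sketch of the measurement command just described (the zero point
 # value is an assumption for the F160W filter; confirm it for your
 # own data):
 $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
                --zeropoint=25.94 --clumpscat --output=cat/xdf-f160w.fits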
@@ -3162,7 +3163,7 @@ $ astmkcatalog seg/xdf-f105w.fits --ids --ra --dec 
--magnitude --sn \
 However, the galaxy properties might differ between the filters (which is the 
whole purpose behind observing in different filters!).
 Also, the noise properties and depth of the datasets differ.
 You can see the effect of these factors in the resulting clump catalogs, with 
Gnuastro's Table program.
-We Will go deep into working with tables in the next section, but in summary: 
the @option{-i} option will print information about the columns and number of 
rows.
+We will go deep into working with tables in the next section, but in summary: 
the @option{-i} option will print information about the columns and number of 
rows.
 To see the column values, just remove the @option{-i} option.
 In the output of each command below, look at the @code{Number of rows:}, and 
note that they are different.
 
@@ -3172,7 +3173,7 @@ $ asttable cat/xdf-f125w.fits -hCLUMPS -i
 $ asttable cat/xdf-f160w.fits -hCLUMPS -i
 @end example
 
-Matching the catalogs is possible (for example with @ref{Match}).
+Matching the catalogs is possible (for example, with @ref{Match}).
 However, the measurements of each column are also done on different pixels: 
the clump labels can/will differ from one filter to another for one object.
 Please open them and focus on one object to see for yourself.
 This can bias the result, if you match catalogs.
@@ -3209,7 +3210,7 @@ $ asttable cat/xdf-f125w-on-f160w-lab.fits -hCLUMPS -i
 $ asttable cat/xdf-f160w.fits -hCLUMPS -i
 @end example
 
-Finally, MakeCatalog also does basic calculations on the full dataset 
(independent of each labeled region but related to whole data), for example 
pixel area or per-pixel surface brightness limit.
+Finally, MakeCatalog also does basic calculations on the full dataset (independent of each labeled region but related to the whole dataset), for example, pixel area or per-pixel surface brightness limit.
 They are stored as keywords in the FITS headers (or lines starting with 
@code{#} in plain text).
 This (and other ways to measure the limits of your dataset) are discussed in 
the next section: @ref{Measuring the dataset limits}.
 
@@ -3221,7 +3222,7 @@ Before measuring colors, or doing any other kind of 
analysis on the catalogs (an
 Without understanding the limitations of your dataset, you cannot make any 
physical interpretation of your results.
 The theory behind the calculations discussed here is thoroughly introduced in 
@ref{Quantifying measurement limits}.
 
-For example, with the command below, let's sort all the detected clumps in the 
image by magnitude and signal-to-noise ratio (S/N) columns after sorting:
+For example, with the command below, let's sort all the detected clumps in the image by magnitude (with @option{--sort=magnitude}) and print the magnitude and signal-to-noise ratio (S/N; with @option{-cmagnitude,sn}):
 
 @example
 $ asttable cat/xdf-f160w.fits -hclumps -cmagnitude,sn \
@@ -3289,7 +3290,7 @@ This is technically known as upper-limit measurements.
 For a full discussion, see @ref{Upper limit magnitude of each detection}).
 
 Since it is for each separate object, the upper-limit measurements should be 
requested as extra columns in MakeCatalog's output.
-For example with the command below, let's generate a new catalog of the F160W 
filter, but with two extra columns compared to the one in @file{cat/}: the 
upper-limit magnitude and the upper-limit multiple of sigma.
+For example, with the command below, let's generate a new catalog of the F160W filter, but with two extra columns compared to the one in @file{cat/}: the upper-limit magnitude and the upper-limit multiple of sigma.
 
 @example
 $ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
@@ -3309,8 +3310,8 @@ As you see, in almost all of the cases, the measured 
magnitude is sufficiently h
 Let's subtract the latter from the former to better see this difference in a 
third column:
 
 @example
-asttable xdf-f160w.fits -hclumps -cmagnitude,upperlimit_mag \
-         -c'arith upperlimit_mag magnitude -'
+$ asttable xdf-f160w.fits -hclumps -cmagnitude,upperlimit_mag \
+           -c'arith upperlimit_mag magnitude -'
 @end example
 
 For the rows with a positive third column (difference), we can say that the clump has a sufficiently higher brightness than the noisy background to be usable.
@@ -3359,7 +3360,7 @@ $ asttable xdf-f160w.fits -hclumps 
--range=upperlimit_sigma,4.8:5.2 \
 
 We see that the @mymath{5\sigma} detection limit is @mymath{\sim29.6}!
 This is extremely deep!
-For example in the Legacy 
Survey@footnote{@url{https://www.legacysurvey.org/dr9/description}}, the 
@mymath{5\sigma} detection limit for @emph{point sources} is approximately 24.5 
(5 magnitudes, or 100 times, shallower than this image).
+For example, in the Legacy Survey@footnote{@url{https://www.legacysurvey.org/dr9/description}}, the @mymath{5\sigma} detection limit for @emph{point sources} is approximately 24.5 (5 magnitudes, or 100 times, shallower than this image).
 
 As mentioned above, an important caveat in this simple calculation is that we 
should only be looking at point-like objects, not simply everything.
 This is because the shape or radial slope of the profile has an important 
effect on this measurement: at the same total magnitude, a sharper object will 
have a higher S/N.
@@ -3427,7 +3428,7 @@ $ rm f160w-hist.txt
 @end example
 
 Another important limiting parameter in a processed dataset is the surface 
brightness limit (@ref{Surface brightness limit of image}).
-The surface brightness limit of a dataset is an important measure for extended 
structures (for example when you want to look at the outskirts of galaxies).
+The surface brightness limit of a dataset is an important measure for extended 
structures (for example, when you want to look at the outskirts of galaxies).
 In the next tutorial, we have thoroughly described the derivation of the 
surface brightness limit of a dataset.
 So we will just show the final result here, and encourage you to follow up 
with that tutorial after finishing this tutorial (see @ref{Image surface 
brightness limit})
 
@@ -3472,7 +3473,7 @@ $ asttable cat/xdf-f160w.fits -h2          # Clumps 
catalog columns.
 
 @noindent
 As you see above, when given a specific table (file name and extension), Table 
will print the full contents of all the columns.
-To see the basic metadata about each column (for example name, units and 
comments), simply append a @option{--info} (or @option{-i}) to the command.
+To see the basic metadata about each column (for example, name, units and 
comments), simply append a @option{--info} (or @option{-i}) to the command.
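 
 For example, a minimal sketch on the clumps catalog used above:
 
 @example
 $ asttable cat/xdf-f160w.fits -h2 -i
 @end example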
 
 To print the contents of specific column(s), just give the column number(s)
(counting from @code{1}) or the column name(s) (if they have one) to the 
@option{--column} (or @option{-c}) option.
 For example, if you just want the magnitude and signal-to-noise ratio of the 
clumps (in the clumps catalog), you can get them with any of the following
commands:
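As a sketch of what such commands can look like (the lower-case column names here are assumed from the MakeCatalog call earlier in this tutorial; if in doubt, confirm them with @option{--info} first):

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --column=magnitude --column=sn
$ asttable cat/xdf-f160w.fits -hCLUMPS -cmagnitude,sn
@end example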
@@ -3498,7 +3499,7 @@ Column meta-data (including a name) are not just limited 
to FITS tables and can
 
 @noindent
 Table also has tools to limit the displayed rows.
-For example with the first command below only rows with a magnitude in the 
range of 29 to 30 will be shown.
+For example, with the first command below, only rows with a magnitude in the
range of 29 to 30 will be shown.
 With the second command, you can further limit the displayed rows to rows with 
an S/N larger than 10 (a range from 10 to infinity).
 You can further sort the output rows, or only show the top (or bottom) N rows;
for more, see @ref{Table}.
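As a sketch of these row-selection features (again assuming the @code{magnitude} and @code{sn} column names; @option{--sort} and @option{--head} respectively sort the rows and keep only the first N):

@example
$ asttable cat/xdf-f160w.fits -hCLUMPS --range=magnitude,29:30
$ asttable cat/xdf-f160w.fits -hCLUMPS --range=magnitude,29:30 \
           --range=sn,10:inf
$ asttable cat/xdf-f160w.fits -hCLUMPS --sort=sn --head=5
@end example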
 
@@ -3514,7 +3515,7 @@ Since @file{cat/xdf-f160w.fits} and 
@file{cat/xdf-f105w-on-f160w-lab.fits} have
 We can merge columns with the @option{--catcolumnfile} option like below.
 You give this option a file name (which is assumed to be a table that has the 
same number of rows as the main input), and all the table's columns will be 
concatenated/appended to the main table.
 Now, try it out with the commands below.
-We Will first look at the metadata of the first table (only the @code{CLUMPS} 
extension).
+We will first look at the metadata of the first table (only the @code{CLUMPS} 
extension).
 With the second command, we will concatenate the two tables and write them in
@file{two-in-one.fits}; finally, we will check the new catalog's metadata.
 
 @example
@@ -3538,12 +3539,12 @@ $ asttable three-in-one.fits -i
 @end example
 
 As you see, to avoid confusion in column names, Table has intentionally 
appended a @code{-1} to the column names of the first concatenated table if the 
column names are already present in the original table.
-For example we have the original @code{RA} column, and another one called 
@code{RA-1}).
+For example, we have the original @code{RA} column, and another one called
@code{RA-1}.
 Similarly, a @code{-2} has been added for the columns of the second
concatenated table.
 
-However, this example clearly shows a problem with this full concatenation: 
some columns are identical (for example @code{HOST_OBJ_ID} and 
@code{HOST_OBJ_ID-1}), or not needed (for example @code{RA-1} and @code{DEC-1} 
which are not necessary here).
+However, this example clearly shows a problem with this full concatenation: 
some columns are identical (for example, @code{HOST_OBJ_ID} and 
@code{HOST_OBJ_ID-1}), or not needed (for example, @code{RA-1} and @code{DEC-1} 
which are not necessary here).
 In such cases, you can use @option{--catcolumns} to only concatenate certain 
columns, not the whole table.
-For example this command:
+For example, this command:
 
 @example
 $ asttable cat/xdf-f160w.fits -hCLUMPS --output=two-in-one-2.fits \
@@ -3580,7 +3581,7 @@ It takes four values:
 3) the column unit and
 4) the column comments.
 Since the comments are usually human-friendly sentences and contain space 
characters, you should put them in double quotations like below.
-For example by adding three calls of this option to the previous command, we 
write the filter name in the magnitude column name and description.
+For example, by adding three calls of this option to the previous command, we
write the filter name in the magnitude column name and description.
 
 @example
 $ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one-3.fits \
@@ -3713,8 +3714,8 @@ $ aststatistics cat/mags-with-color.fits -cF125W-F160W
 @end example
 
 This tiny and cute ASCII histogram (and the general information printed above 
it) gives you a crude (but very useful and fast) feeling for the distribution.
-You can later use Gnuastro's Statistics program with the @option{--histogram} 
option to build a much more fine-grained histogram as a table to feed into your 
favorite plotting program for a much more accurate/appealing plot (for example 
with PGFPlots in @LaTeX{}).
-If you just want a specific measure, for example the mean, median and standard 
deviation, you can ask for them specifically, like below:
+You can later use Gnuastro's Statistics program with the @option{--histogram} 
option to build a much more fine-grained histogram as a table to feed into your 
favorite plotting program for a much more accurate/appealing plot (for example, 
with PGFPlots in @LaTeX{}).
+If you just want a specific measure, for example, the mean, median and 
standard deviation, you can ask for them specifically, like below:
 
 @example
 $ aststatistics cat/mags-with-color.fits -cF105W-F160W \
@@ -3738,7 +3739,7 @@ In a 2D histogram, the full range in both columns is 
divided into discrete 2D bi
 Since a 2D histogram is a pixelated space, we can simply save it as a FITS 
image and view it in a FITS viewer.
 Let's do this in the command below.
 As is common with color-magnitude plots, we will put the redder magnitude on 
the horizontal axis and the color on the vertical axis.
-We Will set both dimensions to have 100 bins (with @option{--numbins} for the 
horizontal and @option{--numbins2} for the vertical).
+We will set both dimensions to have 100 bins (with @option{--numbins} for the 
horizontal and @option{--numbins2} for the vertical).
 Also, to avoid strong outliers in any of the dimensions, we will manually set 
the range of each dimension with the @option{--greaterequal}, 
@option{--greaterequal2}, @option{--lessthan} and @option{--lessthan2} options.
 
 @example
@@ -3750,7 +3751,7 @@ $ aststatistics cat/mags-with-color.fits 
-cMAG-F160W,F105W-F160W \
 @end example
 
 @noindent
-You can now open this FITS file as a normal FITS image, for example with the 
command below.
+You can now open this FITS file as a normal FITS image, for example, with the 
command below.
 Try hovering/zooming over the pixels: not only will you see the number of
objects in the catalog that fall in each bin/pixel, but you will also see the
@code{F160W} magnitude and color of that pixel (in the same place you
usually see RA and Dec when hovering over an astronomical image).
 
 @example
@@ -3783,8 +3784,8 @@ $ astscript-fits-view cmd.fits --ds9scale=minmax \
 This is good for a fast progress update.
 But for your paper or more official report, you want to show something with 
higher quality.
 For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same 
font as your text, sharp grids and many other elegant/powerful features (like 
over-plotting interesting points, lines and so on).
-But to load the 2D histogram into PGFPlots first you need to convert the FITS 
image into a more standard format, for example PDF.
-We Will use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
+But to load the 2D histogram into PGFPlots, you first need to convert the FITS
image into a more standard format, for example, PDF.
+We will use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
 
 @example
 $ astconvertt cmd.fits --colormap=sls-inverse --borderwidth=0 -ocmd.pdf
@@ -3963,7 +3964,7 @@ $ astfits matched.fits
 @subsection Reddest clumps, cutouts and parallelization
 @cindex GNU AWK
 As a final step, let's go back to the original clumps-based color measurement 
we generated in @ref{Working with catalogs estimating colors}.
-We Will find the objects with the strongest color and make a cutout to inspect 
them visually and finally, we will see how they are located on the image.
+We will find the objects with the strongest color and make a cutout to inspect 
them visually; finally, we will see how they are located on the image.
 With the command below, we will select the reddest objects (those with a color 
larger than 1.5):
 
 @example
@@ -3978,7 +3979,7 @@ $ asttable cat/mags-with-color.fits 
--range=F105W-F160W,1.5,inf | wc -l
 @end example
 
 Let's crop the F160W image around each of these objects, but we first need a 
unique identifier for them.
-We Will define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
+We will define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
 Note that since we are making a plain text table, we will define the necessary 
(for the string-type first column) metadata manually (see @ref{Gnuastro text 
table format}).
 
 @example
@@ -4001,7 +4002,7 @@ $ astscript-ds9-region cat/reddest.txt -c2,3 --mode=wcs \
 
 We can now feed @file{cat/reddest.txt} into Gnuastro's Crop program to get 
separate postage stamps for each object.
 To keep things clean, we will make a directory called @file{crop-red} and ask 
Crop to save the crops in this directory.
-We Will also add a @file{-f160w.fits} suffix to the crops (to remind us which 
filter they came from).
+We will also add a @file{-f160w.fits} suffix to the crops (to remind us which 
filter they came from).
 The width of the crops will be 15 arc-seconds (or 15/3600 degrees, the unit
of the WCS).
 
 @example
@@ -4038,7 +4039,7 @@ You can now use your general graphic user interface image 
viewer to flip through
 @cindex @file{Makefile}
 The @code{for} loop above to convert the images will do the job in series: 
each file is converted only after the previous one is complete.
 But like the crops, each JPEG image is independent, so let's parallelize it.
-In other words, we want to write run more than one instance of the command at 
any moment.
+In other words, we want to run more than one instance of the command at any 
moment.
 To do that, we will use @url{https://en.wikipedia.org/wiki/Make_(software), 
Make}.
 Make is a very wonderful pipeline management system, and the most common and 
powerful implementation is @url{https://www.gnu.org/software/make, GNU Make}, 
which has a complete manual just like this one.
 We cannot go into the details of Make here; for a hands-on video tutorial, see
this @url{https://peertube.stream/w/iJitjS3r232Z8UPMxKo6jq, video introduction}.
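To give a feeling for the idea, here is a minimal sketch of such a Makefile (the display range given to ConvertType is an arbitrary choice for illustration, and recall that the command inside a Make rule must start with a TAB character):

@example
# JPEG copy of every cropped FITS file in crop-red/.
jpgs=$(subst .fits,.jpg,$(wildcard crop-red/*-f160w.fits))
all: $(jpgs)
crop-red/%.jpg: crop-red/%.fits
        astconvertt $< --output=$(subst .fits,.jpg,$<) --fluxhigh=0.005
@end example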
@@ -4063,7 +4064,7 @@ $ make -j12
 @noindent
 Did you notice how much faster this one was?
 When possible, it is always very helpful to do your analysis in parallel.
-You can build very complex workflows with Make, for example see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. (2021)} so it is worth 
spending some time to master.
+You can build very complex workflows with Make (for example, see
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. (2021)}), so it is worth
spending some time to master it.
 
 @node FITS images in a publication, Marking objects for publication, Reddest 
clumps cutouts and parallelization, General program usage tutorial
 @subsection FITS images in a publication
@@ -4081,7 +4082,7 @@ Once you have made the EPS, you can then convert it to 
PDF with the @command{eps
 
 Another solution is to use Gnuastro's ConvertType program.
 The main difference is that DS9 is a Graphical User Interface (GUI) program, so
it takes relatively long (about a second) to load, and it requires many 
dependencies.
-This will slow-down automatic conversion of many files, and will make your 
code hard to more to another operating system.
+This will slow down automatic conversion of many files, and will make your
code hard to move to another operating system.
 DS9 does have a command-line interface that you can use to automate the 
creation of each file; however, that interface is very peculiar, as are its
formats (like the ``region'' files).
 However, in ConvertType, there is no graphic interface, so it has very few 
dependencies, it is fast, and finally, it takes normal tables (in plain-text or 
FITS) as input.
 So in this concluding step of the analysis, let's build a nice 
publication-ready plot, showing the positions of the reddest objects in the 
image for our paper.
@@ -4195,7 +4196,7 @@ This is because of those negative pixels that were set to 
NaN.
 To better show these structures, we should warp the image to larger pixels.
 So let's warp it to a pixel grid where the new pixels are @mymath{4\times4} 
larger than the original pixels.
 But be careful that warping should be done on the original image, not on the 
surface brightness image.
-We should recalculate the surface brightness image after the warping is one.
+We should re-calculate the surface brightness image after the warping is done.
 This is because @mymath{\log(a+b)\ne\log(a)+\log(b)}.
 Recall that surface brightness calculation involves a logarithm, and warping 
involves addition of pixel values.
 
@@ -4262,7 +4263,7 @@ To add coordiantes on the edges of the figure in your 
paper, see @ref{Annotation
 
 To start with, let's put a red plus sign over the sub-sample of reddest clumps 
similar to @ref{Reddest clumps cutouts and parallelization}.
 First, we will need to make the table of marks.
-We Will choose those with a color stronger than 1.5 magnitudes and a 
signal-to-noise ratio (in F160W) larger than 5.
+We will choose those with a color stronger than 1.5 magnitudes and a 
signal-to-noise ratio (in F160W) larger than 5.
 We also only need the RA, Dec, color and magnitude (in F160W) columns.
 
 @example
@@ -4347,7 +4348,7 @@ $ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
 The power of this methodology is that each mark can be completely different!
 For example, let's show the objects with a color less than 2 magnitudes with a 
circle, and those with a stronger color with a plus (recall that the code for a 
circle was @code{1} and that of a plus was @code{2}).
 You only need to replace the first command above with the one below.
-After wards, run the rest of the commands in the last code-block.
+Afterwards, run the rest of the commands in the last code-block.
 
 @example
 $ asttable reddest-cat.fits -cF105W-F160W \
@@ -4410,7 +4411,7 @@ $ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
 @end example
 
 As one final example, let's write the magnitude of each object under it.
-Since the magnitude is already in the @file{marks.fits} that we produced 
above, adding it is very easy (just add @option{--marktext} option to 
ConvertType):
+Since the magnitude is already in the @file{marks.fits} that we produced 
above, it is very easy to add it (just add the @option{--marktext} option to
ConvertType):
 
 @example
 $ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
@@ -4460,8 +4461,8 @@ $ astconvertt sb.fits --output=ellipse.pdf --fluxlow=20 \
 
 @cindex Point (Vector graphics; PostScript)
 To conclude this section, let us highlight an important factor to consider in 
vector graphics.
-In ConvertType, things like line width or font size are defined in units of 
``point''s.
-In vector graphics standards, 72 ``point''s correspond to one 1 inch.
+In ConvertType, things like line width or font size are defined in units of 
@emph{points}.
+In vector graphics standards, 72 points correspond to one inch.
 Therefore, one way you can change these factors for all the objects is to 
assign a larger or smaller print size to the image.
 The print size is just a meta-data entry, and will not affect the file's 
volume in bytes!
 You can do this with the @option{--widthincm} option.
@@ -4476,8 +4477,8 @@ To benefit most effectively from this subsection, please 
go through the previous
 @cindex @command{history}
 @cindex Shell history
 Each step of the sub-sections above involved several commands on
the command-line.
-Therefore, if you want to reproduce the previous results (for example to only 
change one part, and see its effect), you'll have to go through all the 
sections above and read through them again.
-If you done the commands recently, you may also have them in the history of 
your shell (command-line environment).
+Therefore, if you want to reproduce the previous results (for example, to only 
change one part, and see its effect), you'll have to go through all the 
sections above and read through them again.
+If you have run the commands recently, you may also have them in the history
of your shell (command-line environment).
 You can see many of your previous commands on the shell (even if you have 
closed the terminal) with the @command{history} command, like this:
 
 @example
@@ -4544,7 +4545,7 @@ Through the @code{env} program, this shebang will look in 
your @code{PATH} and u
 But for simplicity in the rest of the tutorial, we will continue with the 
`@code{#!/bin/bash}' shebang.
 
 Using your favorite text editor, make a new empty file, let's call it 
@file{my-first-script.sh}.
-Write the GNU Bash shebang (above) as its first line
+Write the GNU Bash shebang (above) as its first line.
 After the shebang, copy the series of commands we ran above.
 Just note that the `@code{$}' sign at the start of every line above is the 
prompt of the interactive shell (you never actually typed it, remember?).
 Therefore, commands in a shell script should not start with a `@code{$}'.
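A minimal sketch of what the script file could then contain is below (the two analysis commands are only placeholders; use the commands you actually ran):

@example
#!/bin/bash
# Commands copied from the interactive session (without the '$' prompts).
astnoisechisel a.fits --output=nc.fits
aststatistics nc.fits -hDETECTIONS
@end example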
@@ -4567,8 +4568,8 @@ aststatistics a.fits
 The script contents are now ready, but to run it, you should activate the 
script file's @emph{executable flag}.
 In Unix-like operating systems, every file has three types of flags: 
@emph{read} (or @code{r}), @emph{write} (or @code{w}) and @emph{execute} (or 
@code{x}).
 To toggle a file's flags, you should use the @command{chmod} (for ``change 
mode'') command.
-To activate a flag, you put a `@code{+}' before the flag character (for 
example @code{+x}).
-To deactivate it, you put a `@code{-}' (for example @code{-x}).
+To activate a flag, you put a `@code{+}' before the flag character (for 
example, @code{+x}).
+To deactivate it, you put a `@code{-}' (for example, @code{-x}).
 In this case, you want to activate the script's executable flag, so you should 
run
 
 @example
@@ -4811,7 +4812,7 @@ $ astnoisechisel --cite
 @node Detecting large extended targets, Building the extended PSF, General 
program usage tutorial, Tutorials
 @section Detecting large extended targets
 
-The outer wings of large and extended objects can sink into the noise very 
gradually and can have a large variety of shapes (for example due to tidal 
interactions).
+The outer wings of large and extended objects can sink into the noise very 
gradually and can have a large variety of shapes (for example, due to tidal 
interactions).
 Therefore separating the outer boundaries of the galaxies from the noise can 
be particularly tricky.
 Besides causing an under-estimation in the total estimated brightness of the 
target, failure to detect such faint wings will also cause a bias in the noise 
measurements, thereby hampering the accuracy of any measurement on the dataset.
 Therefore even if they do not constitute a significant fraction of the 
target's light, or are not your primary target, these regions must not be 
ignored.
@@ -4827,8 +4828,8 @@ Basic features like access to this book on the 
command-line, the configuration f
 @cindex NGC5195
 @cindex SDSS, Sloan Digital Sky Survey
 @cindex Sloan Digital Sky Survey, SDSS
-We Will try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
-We Will use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
+We will try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
+We will use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
 Due to its more peculiar low surface brightness structure/features, we will 
focus on the dwarf companion galaxy of the group (or NGC 5195).
 
 
@@ -4932,7 +4933,7 @@ Let's see how NoiseChisel operates on it with its default 
parameters:
 $ astnoisechisel r.fits -h0
 @end example
 
-As described in @ref{NoiseChisel and Multiextension FITS files}, NoiseChisel's 
default output is a multi-extension FITS file.
+As described in @ref{NoiseChisel and Multi-Extension FITS files}, 
NoiseChisel's default output is a multi-extension FITS file.
 Open the output @file{r_detected.fits} file and have a look at the extensions:
the 0-th extension is only meta-data and contains NoiseChisel's configuration 
parameters.
 The rest are the Sky-subtracted input, the detection map, Sky values and Sky 
standard deviation.
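If you just want a fast listing of those extensions on the command-line (before opening the file in a viewer), Gnuastro's Fits program is enough; the viewer script used elsewhere in this book also works:

@example
$ astfits r_detected.fits              # List the HDUs.
$ astscript-fits-view r_detected.fits  # Open them in DS9.
@end example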
 
@@ -4963,7 +4964,7 @@ This important concept will be discussed more extensively 
in the next section (@
 However, skewness is only a proxy for signal when the signal has structure 
(varies per pixel).
 Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the constant signal's effect is just to shift the symmetric center 
of the noise distribution to the positive and there will not be any skewness 
(major difference between the mean and median).
 This positive@footnote{In processed images, where the Sky value can be 
over-estimated, this constant shift can be negative.} shift that preserves the 
symmetric distribution is the Sky value.
-When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example see Figure 11 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
+When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values; for example, see Figure 11 of
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
 To make this very large diffuse/flat signal detectable, you will therefore 
need a larger tile to contain a larger change in the values within it (and 
improve number statistics, for less scatter when measuring the mean and median).
 So let's play with the tessellation a little to see how it affects the result.
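For instance, a first experiment could be a sketch like the one below (the @code{100,100} tile size is only an illustrative value to contrast with the default; @option{--checkqthresh} keeps the tile-based measurements in a separate file so you can inspect them):

@example
$ astnoisechisel r.fits -h0 --tilesize=100,100 --checkqthresh
@end example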
@@ -5019,7 +5020,7 @@ You have two strategies for detecting the outskirts of 
the merging galaxies:
 2) Strengthen the outlier rejection parameters to discard more of the tiles 
with signal.
 Fortunately in this image we have a sufficiently large region on the right 
side of the image that the galaxy does not extend to.
 So we can use the more robust first solution.
-In situations where this does not happen (for example if the field of view in 
this image was shifted to the left to have more of M51 and less sky) you are 
limited to a combination of the two solutions or just to the second solution.
+In situations where this does not happen (for example, if the field of view in 
this image was shifted to the left to have more of M51 and less sky), you are
limited to a combination of the two solutions or just to the second solution.
 
 @cartouche
 @noindent
@@ -5406,7 +5407,7 @@ $ astarithmetic --quiet $nsigma $std x \
 @end example
 
 The customizable steps above are good for any type of mask.
-For example your field of view may contain a very deep part so you need to 
mask all the shallow parts @emph{as well as} the detections before these steps.
+For example, your field of view may contain a very deep part, so you need to
mask all the shallow parts @emph{as well as} the detections before these steps.
 But when your image is flat (like this), there is a much simpler method to 
obtain the same value through MakeCatalog (when the standard deviation image is 
made by NoiseChisel).
 NoiseChisel has already calculated the minimum (@code{MINSTD}), maximum 
(@code{MAXSTD}) and median (@code{MEDSTD}) standard deviation within the tiles 
during its processing and has stored them as FITS keywords within the 
@code{SKY_STD} HDU.
 You can see them by piping all the keywords in this HDU into @command{grep}.
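For instance, something like this (any keyword whose name contains @code{STD} will be shown):

@example
$ astfits r_detected.fits --hdu=SKY_STD | grep STD
@end example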
@@ -5433,7 +5434,7 @@ However, MakeCatalog requires at least one column, so we 
will only ask for the @
 The output catalog will therefore have a single row and a single column, with 
1 as its value@footnote{Recall that NoiseChisel's output is a binary image: 
0-valued pixels are noise and 1-valued pixels are signal.
 NoiseChisel does not identify sub-structure over the signal; this is the job
of Segment, see @ref{Extract clumps and objects}.}.
 @item
-If we do not ask for any noise-related column (for example the signal-to-noise 
ratio column with @option{--sn}, among other noise-related columns), 
MakeCatalog is not going to read the noise standard deviation image (again, to 
speed up its operation when it is redundant).
+If we do not ask for any noise-related column (for example, the 
signal-to-noise ratio column with @option{--sn}, among other noise-related 
columns), MakeCatalog is not going to read the noise standard deviation image 
(again, to speed up its operation when it is redundant).
 We are thus using the @option{--forcereadstd} option (short for ``force read 
standard deviation image'') here so it is ready for the surface brightness 
limit measurements that are written as keywords (a sketch of the full command
follows this list).
 @end itemize
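A sketch of such a MakeCatalog call is given below (the output name is an arbitrary choice; the surface brightness limit measurements will be written as keywords in its header):

@example
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
               --ids --forcereadstd
@end example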
 
@@ -5550,7 +5551,7 @@ In fact, if you run the MakeCatalog command again and 
look at the measured upper
 Please try exactly the same MakeCatalog command above a few times to see how 
it changes.
 
 This is because of the @emph{random} factor in the upper-limit measurements: 
every time you run it, different random points will be checked, resulting in a 
slightly different distribution.
-You can decrease the random scatter by increasing the number of random checks 
(for example setting @option{--upnum=100000}, compared to 1000 in the command 
above).
+You can decrease the random scatter by increasing the number of random checks 
(for example, setting @option{--upnum=100000}, compared to 1000 in the command 
above).
 But this will be slower and the results will not be exactly reproducible.
 The only way to ensure you get an identical result later is to fix the random 
number generator function and seed like the command below@footnote{You can use 
any integer for the seed. One recommendation is to run MakeCatalog without 
@option{--envseed} once and use the randomly generated seed that is printed on 
the terminal.}.
 This is a very important point regarding any statistical process involving 
random numbers; please see @ref{Generating random numbers}.
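As a sketch of fixing the generator through environment variables (the generator name and seed value below are arbitrary choices; @option{--envseed} tells the program to read them):

@example
$ export GSL_RNG_TYPE=ranlxs1
$ export GSL_RNG_SEED=1599251212
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
               --ids --forcereadstd --upnum=1000 --envseed
@end example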
@@ -5609,7 +5610,7 @@ $ astarithmetic r_detected.fits -hDETECTIONS set-det \
 You can find the label of the main galaxy visually (by opening the image and 
hovering your mouse over the M51 group's label).
 But to have a little more fun, let's do this automatically (which is necessary 
in a general scenario).
 The M51 group detection is by far the largest detection in this image; this
allows us to find its ID/label easily.
-We Will first run MakeCatalog to find the area of all the labels, then we will 
use Table to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
+We will first run MakeCatalog to find the area of all the labels, then we will 
use Table to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
 
 @example
 # Run MakeCatalog to find the area of each label.
@@ -5641,7 +5642,7 @@ $ astarithmetic labeled.fits $id eq -oonly-m51.fits
 Open the image and have a look.
 To separate the outer edges of the detections, we will need to ``erode'' the 
M51 group detection.
 So in the same Arithmetic command as above, we will erode three times (to have 
more pixels and thus less scatter), using a maximum connectivity of 2 
(8-connected neighbors).
-We Will then save the output in @file{eroded.fits}.
+We will then save the output in @file{eroded.fits}.
 
 @example
 $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
@@ -5650,7 +5651,7 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 
erode \
 
 @noindent
 In @file{labeled.fits}, we can now set all the 1-valued pixels of 
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the 
previous command.
-We Will need the pixels of the M51 group in @code{labeled.fits} two times: 
once to do the erosion, another time to find the outer pixel layer.
+We will need the pixels of the M51 group in @code{labeled.fits} two times: 
once to do the erosion, another time to find the outer pixel layer.
 To do this (and be efficient and more readable) we will use the @code{set-i} 
operator (to give this image the name `@code{i}').
 In this way we can use it any number of times afterwards, while only reading 
it from disk and finding M51's pixels once.
 
@@ -5690,7 +5691,7 @@ $ astarithmetic edge.fits -h1                  set-edge \
 We have thus detected the wings of the M51 group down to roughly 1/3rd of the 
noise level in this image, which is a very good achievement!
 But the per-pixel S/N is a relative measurement.
 Let's also measure the depth of our detection in absolute surface brightness 
units; or magnitudes per square arc-seconds (see @ref{Brightness flux 
magnitude}).
-We Will also ask for the S/N and magnitude of the full edge we have defined.
+We will also ask for the S/N and magnitude of the full edge we have defined.
 Fortunately doing this is very easy with Gnuastro's MakeCatalog:
 
 @example
@@ -5718,7 +5719,7 @@ If the M51 group in this image was larger/smaller than 
this (the field of view w
 In @ref{NoiseChisel optimization} we found a good detection map over the 
image, so pixels harboring signal have been differentiated from those that do 
not.
 For noise-related measurements like the surface brightness limit, this is fine.
 However, after finding the pixels with signal, you are most likely interested 
in knowing the sub-structure within them.
-For example how many star forming regions (those bright dots along the spiral 
arms) of M51 are within this image?
+For example, how many star-forming regions (those bright dots along the spiral
arms) of M51 are within this image?
 What are the colors of each of these star-forming regions?
 In the outermost wings of M51, which pixels belong to background galaxies and
foreground stars?
 And many more similar questions.
@@ -5832,7 +5833,7 @@ Using a larger kernel can also help in detecting the 
existing clumps to fainter
 You can make any kernel easily using the @option{--kernel} option in 
@ref{MakeProfiles}.
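For instance, a sketch of building a Gaussian kernel with a FWHM of 2 pixels (truncated at 5 times the FWHM; both values are only examples to play with) and feeding it to Segment:

@example
$ astmkprof --kernel=gaussian,2,5 --oversample=1 -okernel.fits
$ astsegment r_detected.fits --kernel=kernel.fits
@end example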
 But note that a larger kernel is also going to wash out many of the
sharp/small clumps close to the center of M51 and also some smaller peaks on 
the wings.
 Please continue playing with Segment's configuration to obtain a more complete 
result (while keeping reasonable purity).
-We Will finish the discussion on finding true clumps at this point.
+We will finish the discussion on finding true clumps at this point.
 
 The properties of the clumps within M51, or the background objects can then 
easily be measured using @ref{MakeCatalog}.
 To measure the properties of the background objects (detected as clumps over 
the diffuse region), you should not mask the diffuse region.
@@ -6193,7 +6194,7 @@ But in the first command below, let's delete all the 
temporary files we made abo
 
 Since the image is large (+300 MB), to avoid wasting storage, any temporary 
file that is no longer necessary for later processing is deleted after it is 
used.
 You can visually check each of them with DS9 before deleting them (or not 
delete them at all!).
-Generally, within a pipeline it is best to remove such large temporary files, 
because space runs out much faster than you think (for example once you get 
good results and want to use more fields).
+Generally, within a pipeline it is best to remove such large temporary files, 
because space runs out much faster than you think (for example, once you get 
good results and want to use more fields).
 
 @example
 $ rm *.fits
@@ -6590,7 +6591,7 @@ This is a @emph{very important} point: this clearly shows 
that the inner profile
 Click-and-hold your mouse to see the inner parts of the two profiles (in the 
range 0 to 80).
 You will see that for radii less than 40 pixels, the inner profile (blue) 
points lose their scatter (and thus have a good signal-to-noise ratio).
 @item
-Zoom-in to the plot and follow the profiles until smaller radii (for example 
10 pixels).
+Zoom in to the plot and follow the profiles to smaller radii (for example,
10 pixels).
 You see that for each radius, the inner (blue) points are consistently above 
the outer (red) points.
 This shows that the @mymath{\times100} factor we selected above was too much.
 @item
@@ -6815,7 +6816,7 @@ $ astscript-fits-view label/67510-seg.fits 
single-star/subtracted.fits \
            --ds9center=$center --ds9mode=wcs --ds9extra="-zoom 10"
 @end example
 
-In a large survey like J-PLUS, it is easy to use more and more bright stars 
from different pointings (ideally with similar FWHM and similar telescope 
properties@footnote{For example in J-PLUS, the baffle of the secondary mirror 
was adjusted in 2017 because it produced extra spikes in the PSF. So all images 
after that date have a PSF with 4 spikes (like this one), while those before it 
have many more spikes.}) to improve the S/N of the PSF.
+In a large survey like J-PLUS, it is easy to use more and more bright stars 
from different pointings (ideally with similar FWHM and similar telescope 
properties@footnote{For example, in J-PLUS, the baffle of the secondary mirror
was adjusted in 2017 because it produced extra spikes in the PSF. So all images 
after that date have a PSF with 4 spikes (like this one), while those before it 
have many more spikes.}) to improve the S/N of the PSF.
 As explained before, we designed the output files of this tutorial with the 
@code{67510} (which is this image's pointing label in J-PLUS) where necessary,
so you can see how easy it is to add more pointings to use in the creation of the
PSF.
 
 Let's now consider more than one star.
@@ -7011,7 +7012,7 @@ The general outline of the steps he wants to take are:
 @item
 Make some mock profiles in an over-sampled image.
 The initial mock image has to be over-sampled prior to convolution or other 
forms of transformation in the image.
-Through his experiences, Sufi knew that this is because the image of heavenly 
bodies is actually transformed by the atmosphere or other sources outside the 
atmosphere (for example gravitational lenses) prior to being sampled on an 
image.
+Through his experiences, Sufi knew that this is because the image of heavenly 
bodies is actually transformed by the atmosphere or other sources outside the 
atmosphere (for example, gravitational lenses) prior to being sampled on an 
image.
 Since that transformation occurs on a continuous grid, to best approximate it, 
he should do all the work on a finer pixel grid.
 In the end he can re-sample the result to the initially desired grid size.
 
@@ -7198,7 +7199,7 @@ As you see, the first line that you put in the file is no 
longer present!
 This happens because `@code{>}' always starts dumping content to a file from 
the start of the file.
 In effect, this means that any possibly pre-existing content is over-written 
by the new content!
 To append new lines (or dump new content at the end of existing content),
you can use `@code{>>}'.
-For example with the commands below, first we will write the first sentence 
(using `@code{>}'), then use `@code{>>}' to add the second and third sentences.
+For example, with the commands below, first we will write the first sentence
(using `@code{>}'), then use `@code{>>}' to add the second and third sentences.
 Finally, we will print the contents of @file{test.txt} to confirm that all 
three lines are preserved.
 
 @example
@@ -7222,7 +7223,7 @@ $ rm test.txt
 To put the catalog of profile data and their metadata (that was described 
above) into a file, Sufi uses the commands below.
 While Sufi was writing these commands, the student complained that ``I could 
have done this in a text editor''.
 Sufi reminded the student that it is indeed possible; but it requires manual 
intervention.
-The advantage of a solution like below is that it can be automated (for 
example adding more rows; for more profiles in the final image).
+The advantage of a solution like below is that it can be automated (for 
example, adding more rows; for more profiles in the final image).
 
 @example
 $ echo "# Column 1:  ID    [counter, u8] Identifier" > cat.txt
@@ -7311,7 +7312,7 @@ HDU (extension) information: 'cat_profiles.fits'.
 
 @noindent
 So Sufi explained why oversampling is important in modeling, especially for 
parts of the image where the flux change is significant over a pixel.
-Recall that when you oversample the model (for example by 5 times), for every 
desired pixel, you get 25 pixels (@mymath{5\times5}).
+Recall that when you oversample the model (for example, by 5 times), for every 
desired pixel, you get 25 pixels (@mymath{5\times5}).
 Sufi then explained that after convolving (next step below) we will 
down-sample the image to get our originally desired size/resolution.
 
 After seeing the image, the student complained that only the large elliptical 
model for the Andromeda nebula could be seen in the center.
@@ -7559,13 +7560,13 @@ The latest released version of Gnuastro source code is 
always available at the f
 
 @noindent
 @ref{Quick start} describes the commands necessary to configure, build, and 
install Gnuastro on your system.
-This chapter will be useful in cases where the simple procedure above is not 
sufficient, for example your system lacks a mandatory/optional dependency (in 
other words, you cannot pass the @command{$ ./configure} step), or you want 
greater customization, or you want to build and install Gnuastro from other 
random points in its history, or you want a higher level of control on the 
installation.
+This chapter will be useful in cases where the simple procedure above is not 
sufficient, for example, your system lacks a mandatory/optional dependency (in 
other words, you cannot pass the @command{$ ./configure} step), or you want 
greater customization, or you want to build and install Gnuastro from other 
random points in its history, or you want a higher level of control on the 
installation.
 Thus if you were happy with downloading the tarball and following @ref{Quick 
start}, then you can safely ignore this chapter and come back to it in the 
future if you need more customization.
 
 @ref{Dependencies} describes the mandatory, optional and bootstrapping 
dependencies of Gnuastro.
 Only the first group is required/mandatory when you are building Gnuastro
using a tarball (see @ref{Release tarball}); these are very basic and low-level
tools used in most astronomical software, so you might already have them
installed; if not, they are very easy to install as described for each.
 @ref{Downloading the source} discusses the two methods by which you can obtain the
source code: as a tarball (a significant snapshot in Gnuastro's history), or 
the full history@footnote{@ref{Bootstrapping dependencies} are required if you 
clone the full history.}.
-The latter allows you to build Gnuastro at any random point in its history 
(for example to get bug fixes or new features that are not released as a 
tarball yet).
+The latter allows you to build Gnuastro at any random point in its history 
(for example, to get bug fixes or new features that are not released as a 
tarball yet).
 
 The building and installation of Gnuastro is heavily customizable; to learn
more, see @ref{Build and install}.
 This section is essentially a thorough explanation of the steps in @ref{Quick 
start}.
@@ -7590,7 +7591,7 @@ A minimal set of dependencies are mandatory for building 
Gnuastro from the stand
 If they are not present you cannot pass Gnuastro's configuration step.
 The mandatory dependencies are therefore very basic (low-level) tools which 
are easy to obtain, build and install; see @ref{Mandatory dependencies} for a
full discussion.
 
-If you have the packages of @ref{Optional dependencies}, Gnuastro will have 
additional functionality (for example converting FITS images to JPEG or PDF).
+If you have the packages of @ref{Optional dependencies}, Gnuastro will have 
additional functionality (for example, converting FITS images to JPEG or PDF).
 If you are installing from a tarball as explained in @ref{Quick start}, you 
can stop reading after this section.
 If you are cloning the version controlled source (see @ref{Version controlled 
source}), an additional bootstrapping step is required before configuration;
its dependencies are explained in @ref{Bootstrapping dependencies}.
 
@@ -7624,7 +7625,7 @@ In this section we explain each program and any specific 
note that might be nece
 @subsubsection GNU Scientific Library
 
 @cindex GNU Scientific Library
-The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is 
a large collection of functions that are very useful in scientific 
applications, for example integration, random number generation, and Fast 
Fourier Transform among many others.
+The @url{http://www.gnu.org/software/gsl/, GNU Scientific Library}, or GSL, is 
a large collection of functions that are very useful in scientific 
applications, for example, integration, random number generation, and Fast 
Fourier Transform among many others.
 To install GSL from source, you can run the following commands after you have 
downloaded @url{http://ftpmirror.gnu.org/gsl/gsl-latest.tar.gz, 
@file{gsl-latest.tar.gz}}:
 
 @example
@@ -7656,7 +7657,7 @@ If CFITSIO was not configured with this option, any 
program which needs this cap
 To install CFITSIO from source, we strongly recommend that you have a look 
through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and 
understand the options you can pass to @command{$ ./configure} (there are not
too many).
 This is a very basic package for most astronomical software and it is best 
that you configure it nicely with your system.
 Once you download the source and unpack it, the following configure script 
should be enough for most purposes.
-Do Not forget to read chapter two of the manual though, for example the second 
option is only for 64bit systems.
+Do not forget to read chapter two of the manual though; for example, the
second option is only for 64bit systems.
 The manual also explains how to check if it has been installed correctly.
 
 CFITSIO comes with two executable files called @command{fpack} and 
@command{funpack}.
@@ -7825,7 +7826,7 @@ Therefore its headers (and libraries) are not needed.
 @cindex Python3
 Python is a high-level programming language and Numpy is the most commonly 
used library within Python to add multi-dimensional arrays and matrices.
 If version 3 of Python is available with a corresponding Numpy Library, 
Gnuastro's library will be built with some Python-related helper functions.
-Python wrappers for Gnuastro's library (for example `pyGnuastro') can use 
these functions when being built from source.
+Python wrappers for Gnuastro's library (for example, `pyGnuastro') can use 
these functions when being built from source.
 For more on Gnuastro's Python helper functions, see @ref{Python interface}.
 
 @cindex PyPI
@@ -7938,7 +7939,7 @@ Just do not forget that it has to be in the same 
directory as Gnulib (described
 @cindex GNU Texinfo
 GNU Texinfo is the tool that formats this manual into the various output 
formats.
 To bootstrap Gnuastro you need all of Texinfo's command-line programs.
-However, some operating systems package them separately, for example in 
Fedora, @command{makeinfo} is packaged in the @command{texinfo-tex} package.
+However, some operating systems package them separately, for example, in 
Fedora, @command{makeinfo} is packaged in the @command{texinfo-tex} package.
 
 To check that you have a working GNU Texinfo in your system, you can try this 
command:
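For instance (assuming @command{makeinfo}, mentioned in the Fedora note above, is the program you want to check):

@example
$ makeinfo --version
@end example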
 
@@ -8058,7 +8059,7 @@ It is the same for all the dependencies of Gnuastro.
 
 @item
 For each package, Gnuastro might perform better with (or require) certain
configuration options that your distribution's package managers did not add for 
you.
-If present, these configuration options are explained during the installation 
of each in the sections below (for example in @ref{CFITSIO}).
+If present, these configuration options are explained during the installation 
of each in the sections below (for example, in @ref{CFITSIO}).
 When the proper configuration has not been set, the programs should complain 
and inform you.
 
 @item
@@ -8086,9 +8087,9 @@ GNU/Linux
 
distributions@footnote{@url{https://en.wikipedia.org/wiki/List_of_Linux_distributions#Debian-based}}.
 It thus has a very extended user community and a robust internal structure and 
standards.
 All of it is free software and based on the work of volunteers around the 
world.
-Many distributions are thus derived from it, for example Ubuntu and Linux Mint.
+Many distributions are thus derived from it, for example, Ubuntu and Linux 
Mint.
 This arguably makes Debian-based OSs the largest, and most used, class of 
GNU/Linux distributions.
-All of them use Debian's Advanced Packaging Tool (APT, for example 
@command{apt-get}) for managing packages.
+All of them use Debian's Advanced Packaging Tool (APT, for example, 
@command{apt-get}) for managing packages.
 
 @table @asis
 @item Mandatory dependencies
@@ -8358,7 +8359,7 @@ Gnuastro's official/stable tarball is released with two 
formats: Gzip (with suff
 The pre-release tarballs (after version 0.3) are released only as an Lzip 
tarball.
 Gzip is a very well-known and widely used compression program created by GNU 
and available in most systems.
 However, Lzip provides a better compression ratio and more robust archival 
capacity.
-For example Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip 
respectively, see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip web page} 
for more.
+For example, Gnuastro 0.3's tarball was 2.9MB and 4.3MB with Lzip and Gzip
respectively, see the @url{http://www.nongnu.org/lzip/lzip.html, Lzip web page} 
for more.
 Lzip might not be pre-installed in your operating system; if so, installing it
from your operating system's package manager or from source is very easy and 
fast (it is a very small program).
 
 The GNU FTP server is mirrored (has backups) in various locations on the globe 
(@url{http://www.gnu.org/order/ftp.html}).
@@ -8372,7 +8373,7 @@ Also note that if you want to download immediately after 
and announcement (see @
 
 @cindex Git
 @cindex Version control
-The publicly distributed Gnuastro tar-ball (for example 
@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is only a 
snapshot of the source code at one significant instant of Gnuastro's history 
(specified by the version number, see @ref{Version numbering}), ready to be 
configured and built.
+The publicly distributed Gnuastro tar-ball (for example, 
@file{gnuastro-X.X.tar.gz}) does not contain the revision history, it is only a 
snapshot of the source code at one significant instant of Gnuastro's history 
(specified by the version number, see @ref{Version numbering}), ready to be 
configured and built.
 To be able to develop successfully, the revision history of the code can be 
very useful to track when something was added or changed; also, some updates
that are not yet officially released might be in it.
 
 We use Git for the version control of Gnuastro.
@@ -8392,7 +8393,7 @@ $ git clone git://git.sv.gnu.org/gnuastro.git
 @noindent
 The @file{$TOPGNUASTRO/gnuastro} directory will contain hand-written (version 
controlled) source code for Gnuastro's programs, libraries, this book and the 
tests.
 All are divided into sub-directories with standard and very descriptive names.
-The version controlled files in the top cloned directory are either mainly in 
capital letters (for example @file{THANKS} and @file{README}) or mainly written 
in small-caps (for example @file{configure.ac} and @file{Makefile.am}).
+The version controlled files in the top cloned directory are either mainly in 
capital letters (for example, @file{THANKS} and @file{README}) or mainly 
written in small-caps (for example, @file{configure.ac} and @file{Makefile.am}).
 The former are non-programming, standard writing for human readers containing 
high-level information about the whole package.
 The latter are instructions to customize the GNU build system for Gnuastro.
 For more on Gnuastro's source code structure, please see @ref{Developing}.
@@ -8422,7 +8423,7 @@ Very soon after you have cloned it, Gnuastro's main 
@file{master} branch will be
 @cindex Automatically created build files
 @noindent
 The version controlled source code lacks the source files that we have not 
written or are automatically built.
-These automatically generated files are included in the distributed tar ball 
for each distribution (for example @file{gnuastro-X.X.tar.gz}, see @ref{Version 
numbering}) and make it easy to immediately configure, build, and install 
Gnuastro.
+These automatically generated files are included in the distributed tar ball 
for each distribution (for example, @file{gnuastro-X.X.tar.gz}, see 
@ref{Version numbering}) and make it easy to immediately configure, build, and 
install Gnuastro.
 However, from the perspective of version control, they are just bloatware and
sources of confusion (since they are not changed by Gnuastro developers).
 
 The process of automatically building and importing necessary files into the 
cloned directory is known as @emph{bootstrapping}.
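In practice, bootstrapping boils down to one command in the top source directory; as a sketch, assuming you have also cloned Gnulib into @file{$TOPGNUASTRO/gnulib}:

@example
$ cd $TOPGNUASTRO/gnuastro
$ ./bootstrap --copy --gnulib-srcdir=$TOPGNUASTRO/gnulib
@end example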
@@ -8562,7 +8563,7 @@ To see exactly what has been changed in the source code 
along with the commit me
 If you want to make changes in the code, have a look at @ref{Developing} to 
get started easily.
 Be sure to commit your changes in a separate branch (keep your @code{master} 
branch to follow the official repository) and re-run @command{autoreconf -f} 
after the commit.
 If you intend to send your work to us, you can safely use your commit since it 
will be ultimately recorded in Gnuastro's official history.
-If not, please upload your separate branch to a public hosting service, for 
example @url{https://codeberg.org, Codeberg}, and link to it in your 
report/paper.
+If not, please upload your separate branch to a public hosting service, for 
example, @url{https://codeberg.org, Codeberg}, and link to it in your 
report/paper.
 Alternatively, run @command{make distcheck} and upload the output 
@file{gnuastro-X.X.X.XXXX.tar.gz} to a publicly accessible web page so your 
results can be considered scientific (reproducible) later.
 
 
@@ -8676,7 +8677,7 @@ So this option is only useful when a segmentation fault 
is found during @command
 
 @item --enable-progname
 Only build and install @file{progname} along with any other program that is 
enabled in this fashion.
-@file{progname} is the name of the executable without the @file{ast}, for 
example @file{crop} for Crop (with the executable name of @file{astcrop}).
+@file{progname} is the name of the executable without the @file{ast}, for 
example, @file{crop} for Crop (with the executable name of @file{astcrop}).
 
 Note that by default all the programs will be installed.
 This option (and the @option{--disable-progname} options) are only relevant 
when you do not want to install all the programs.
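For instance, a configure call like the sketch below would build and install only Crop and Table:

@example
$ ./configure --enable-crop --enable-table
@end example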
@@ -8740,7 +8741,7 @@ libtiff is an optional dependency, with this option, 
Gnuastro will ignore any po
 @cindex PyPI
 @cindex Python
 Don't build the Python interface within Gnuastro's dynamic library.
-This interface can be used for easy communication with Python wrappers (for 
example the pyGnuastro package).
+This interface can be used for easy communication with Python wrappers (for 
example, the pyGnuastro package).
 However, upon installing the pyGnuastro package from PyPI, the correct
configuration of the Gnuastro Library (with the Python interface) is already
packaged with it.
 The Python interface is only necessary if you want to build pyGnuastro from 
source.
 For more on the interface functions, see @ref{Python interface}.
@@ -8748,7 +8749,7 @@ For more on the interface functions, see @ref{Python 
interface}.
 @end vtable
 
 The tests of some programs might depend on the outputs of the tests of other 
programs.
-For example MakeProfiles is one the first programs to be tested when you run 
@command{$ make check}.
+For example, MakeProfiles is one of the first programs to be tested when you run
@command{$ make check}.
 MakeProfiles' test outputs (FITS images) are inputs to many other programs 
(which in turn provide inputs for other programs).
 Therefore, if you do not install MakeProfiles, for example, the tests for many
of the other programs will be skipped.
 To avoid this, in one run, you can build all the programs and run the tests,
but not install them.
@@ -8808,7 +8809,7 @@ A variable defined like this will be known as long as 
this shell or terminal is
 Other terminals will have no idea it existed.
 The main advantage of shell variables is that if they are exported@footnote{By 
running @command{$ export myvariable=a_test_value} instead of the simpler case 
in the text}, subsequent programs that are run within that shell can access 
their value.
 So by changing their value, you can change the ``environment'' of a program 
which uses them.
-The shell variables which are accessed by programs are therefore known as 
``environment variables''@footnote{You can use shell variables for other 
actions too, for example to temporarily keep some names or run loops on some 
files.}.
+The shell variables which are accessed by programs are therefore known as 
``environment variables''@footnote{You can use shell variables for other 
actions too, for example, to temporarily keep some names or run loops on some 
files.}.
 You can see the full list of exported variables that your shell recognizes by 
running:
 
 @example
@@ -8866,7 +8867,7 @@ There is a special startup file for every significant 
starting step:
 
 @cindex GNU Bash
 @item @file{/etc/profile} and everything in @file{/etc/profile.d/}
-These startup scripts are called when your whole system starts (for example 
after you turn on your computer).
+These startup scripts are called when your whole system starts (for example, 
after you turn on your computer).
 Therefore you need administrator or root privileges to access or modify them.
 
 @item @file{~/.bash_profile}
@@ -8905,7 +8906,7 @@ $ ./configure --prefix ~/.local
 @cindex Default library search directory
 You can install everything (including libraries like GSL, CFITSIO, or WCSLIB 
which are Gnuastro's mandatory dependencies, see @ref{Mandatory dependencies}) 
locally by configuring them as above.
 However, recall that @command{PATH} is only for executable files, not 
libraries and that libraries can also depend on other libraries.
-For example WCSLIB depends on CFITSIO and Gnuastro needs both.
+For example, WCSLIB depends on CFITSIO, and Gnuastro needs both.
 Therefore, when you install a library in a non-recognized directory, you
have to guide the program that depends on it to look into the necessary
library and header file directories.
 To do that, you have to define the @command{LDFLAGS} and @command{CPPFLAGS} 
environment variables respectively.
 This can be done while calling @file{./configure} as shown below:
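A sketch of such a call is below (with @file{/YOUR-HOME-DIR} as a placeholder for your actual home directory):

@example
$ ./configure LDFLAGS=-L/YOUR-HOME-DIR/.local/lib \
              CPPFLAGS=-I/YOUR-HOME-DIR/.local/include \
              --prefix ~/.local
@end example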
@@ -8956,7 +8957,7 @@ export 
LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
 @end example
 
 @noindent
-This is good when a library, for example CFITSIO, is already present on the 
system, but the system-wide install was not configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you do not have administrator or root access to update it on the whole 
system/server.
+This is good when a library, for example, CFITSIO, is already present on the 
system, but the system-wide install was not configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you do not have administrator or root access to update it on the whole 
system/server.
 If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first 
(like above), the linker will first find the CFITSIO you installed for yourself 
and link with it.
 It thus will never reach the system-wide installation.
 
@@ -8967,7 +8968,7 @@ So if you choose to search in your customized directory 
first, please @emph{be s
 @noindent
 @strong{Summary:} When you are using a server which does not give you 
administrator/root access AND you would like to give priority to your own built 
programs and libraries, not the version that is (possibly already) present on 
the server, add these lines to your startup file.
 See above for which startup file is best for your case and for a detailed 
explanation on each.
-Do Not forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for 
example `@file{/home/your-id}'):
+Do not forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for example, `@file{/home/your-id}'):
 
 @example
 export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
@@ -8988,7 +8989,7 @@ Everything else will be the same as a standard build and 
install, see @ref{Quick
 
 @cindex Executable names
 @cindex Names of executables
-At first sight, the names of the executables for each program might seem to be 
uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
+At first sight, the names of the executables for each program might seem to be 
uncommonly long, for example, @command{astnoisechisel} or @command{astcrop}.
 We could have chosen terse (and cryptic) names like most programs do.
 We chose this complete naming convention (something like the commands in 
@TeX{}) so you do not have to spend too much time remembering what the name of 
a specific program was.
 Such complete names also enable you to easily search for the programs.
@@ -9018,7 +9019,7 @@ A similar reasoning can be given for option names in 
scripts: it is good practic
 
 @cindex Symbolic link
 The simplest solution is making a symbolic link to the actual executable.
-For example let's assume you want to type @file{ic} to run Crop instead of 
@file{astcrop}.
+For example, let's assume you want to type @file{ic} to run Crop instead of @file{astcrop}.
 Assuming you installed Gnuastro executables in @file{/usr/local/bin} (default) 
you can do this simply by running the following command as root:
 
 @example
@@ -9142,7 +9143,7 @@ It thus does not have all the common option behaviors or 
configuration files for
 does not accept an @key{=} sign between the options and their values.
 It also needs at least one character between the option and its value.
 Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while 
@option{-n4}, @option{-n=4}, or @option{--numthreads=4} are not.
-Finally multiple short option names cannot be merged: for example you can say 
@option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not 
acceptable.
+Finally, multiple short option names cannot be merged: for example, you can say @option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not acceptable.
 @end cartouche
 
 @cartouche
@@ -9181,7 +9182,7 @@ After running GNU Autoconf, the version will be updated 
and you need to do a cle
 @itemx --debug
 @cindex Valgrind
 @cindex GNU Debugger (GDB)
-Build with debugging flags (for example to use in GNU Debugger, also known as 
GDB, or Valgrind), disable optimization and also the building of shared 
libraries.
+Build with debugging flags (for example, to use in GNU Debugger, also known as 
GDB, or Valgrind), disable optimization and also the building of shared 
libraries.
 Similar to running the configure script as below:
 
 @example
@@ -9205,7 +9206,7 @@ As the name suggests (Make has an identical option), the 
number given to this op
 @item -C
 @itemx --check
 After finishing the build, also run @command{make check}.
-By default, @command{make check} is not run because the developer usually has 
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
+By default, @command{make check} is not run because the developer usually has 
their own checks to work on (for example, defined in 
@file{tests/during-dev.sh}).
 
 @item -i
 @itemx --install
@@ -9219,7 +9220,7 @@ This can be useful for archiving, or sending to 
colleagues who do not use Git fo
 @item -u STR
 @item --upload STR
 Activate the @option{--dist} (@option{-D}) option, then use secure copy 
(@command{scp}, part of the SSH tools) to copy the tarball and PDF to the 
@file{src} and @file{pdf} sub-directories of the specified server and its 
directory (value to this option).
-For example @command{--upload my-server:dir}, will copy the tarball in the 
@file{dir/src}, and the PDF manual in @file{dir/pdf} of @code{my-server} server.
+For example, @command{--upload my-server:dir} will copy the tarball into @file{dir/src}, and the PDF manual into @file{dir/pdf}, on the @code{my-server} server.
 It will then make a symbolic link in the top server directory to the tarball 
that is called @file{gnuastro-latest.tar.lz}.
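
For instance (a sketch, assuming this script is run as @command{./developer-build} from Gnuastro's top source directory, and that @code{my-server} is already defined in your SSH configuration):

@example
$ ./developer-build -u my-server:webdir
@end example

@noindent
This would place the tarball in @file{webdir/src} and the PDF in @file{webdir/pdf} on @code{my-server}.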
 
 @item -p
@@ -9407,9 +9408,9 @@ See @ref{Installation directory} and @ref{Linking} to 
learn more on the @code{PA
 
 @cindex GPL Ghostscript
 @item
-@command{$ make check}: @emph{The tests relying on external programs (for 
example @file{fitstopdf.sh} fail.)} This is probably due to the fact that the 
version number of the external programs is too old for the tests we have 
preformed.
+@command{$ make check}: @emph{The tests relying on external programs (for example, @file{fitstopdf.sh}) fail.} This is probably because the version number of the external programs is too old for the tests we have performed.
 Please update the program to a more recent version.
-For example to create a PDF image, you will need GPL Ghostscript, but older 
versions do not work, we have successfully tested it on version 9.15.
+For example, to create a PDF image, you will need GPL Ghostscript, but older versions do not work; we have successfully tested it on version 9.15.
 Older versions might cause a failure in the test result.
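
To see which Ghostscript version is already on your system (@command{gs} is Ghostscript's standard executable name), you can run:

@example
$ gs --version
@end example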
 
 @item
@@ -9490,10 +9491,10 @@ Gnuastro's programs have multiple ways to help you 
refresh your memory in multip
 See @ref{Getting help} for more on benefiting from this very convenient feature.
 
 Many of the programs use the multi-threaded character of modern CPUs; in @ref{Multi-threaded operations} we will discuss how you can configure this behavior, along with some tips on making the best use of them.
-In @ref{Numeric data types}, we will review the various types to store numbers 
in your datasets: setting the proper type for the usage context@footnote{For 
example if the values in your dataset can only be integers between 0 or 65000, 
store them in a unsigned 16-bit type, not 64-bit floating point type (which is 
the default in most systems).
+In @ref{Numeric data types}, we will review the various types to store numbers in your datasets: setting the proper type for the usage context@footnote{For example, if the values in your dataset can only be integers between 0 and 65000, store them in an unsigned 16-bit type, not a 64-bit floating point type (which is the default in most systems).
 It takes four times less space and is much faster to process.} can greatly reduce the file size and also improve the speed of reading, writing or processing them.
 
-We Will then look into the recognized table formats in @ref{Tables} and how 
large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
+We will then look into the recognized table formats in @ref{Tables}, and how large datasets are broken into tiles (or a mesh grid) in @ref{Tessellation}.
 Finally, we will take a look at the behavior regarding output files: 
@ref{Automatic output} describes how the programs set a default name for their 
output when you do not give one explicitly (using @option{--output}).
 When the output is a FITS file, all the programs also store some very useful 
information in the header that is discussed in @ref{Output FITS files}.
 
@@ -9557,7 +9558,7 @@ Through options you can configure how the program runs 
(interprets the data you
 @vindex --usage
 @cindex Mandatory arguments
 Arguments can be mandatory or optional and, unlike options, they do not have any identifiers.
-Hence, when there multiple arguments, their order might also matter (for 
example in @command{cp} which is used for copying one file to another location).
+Hence, when there are multiple arguments, their order might also matter (for example, in @command{cp}, which is used for copying one file to another location).
 The outputs of @option{--usage} and @option{--help} show which arguments are optional and which are mandatory, see @ref{--usage}.
 
 As their name suggests, @emph{options} can be considered to be optional and 
most of the time, you do not have to worry about what order you specify them in.
@@ -9599,7 +9600,7 @@ Everything particular about how a program treats 
arguments, is explained under t
 @cindex FITS filename suffixes
 Generally, if there is a standard file name suffix for a particular format, that filename extension is checked to identify the format.
 In astronomy (and thus Gnuastro), FITS is the preferred format for inputs and 
outputs, so the focus here and throughout this book is on FITS.
-However, other formats are also accepted in special cases, for example 
@ref{ConvertType} also accepts JPEG or TIFF inputs, and writes JPEG, EPS or PDF 
files.
+However, other formats are also accepted in special cases; for example, @ref{ConvertType} also accepts JPEG or TIFF inputs, and writes JPEG, EPS or PDF files.
 The recognized suffixes for these formats are listed there.
 
 The list below shows the recognized suffixes for FITS data files in Gnuastro's 
programs.
@@ -9645,8 +9646,8 @@ CFITSIO has its own error reporting techniques, if your 
input file(s) cannot be
 @cindex Options, short (@option{-}) and long (@option{--})
 Command-line options allow configuring the behavior of a program in all 
GNU/Linux applications for each particular execution on a particular input data.
 A single option can be called in two ways: @emph{long} or @emph{short}.
-All options in Gnuastro accept the long format which has two hyphens an can 
have many characters (for example @option{--hdu}).
-Short options only have one hyphen (@key{-}) followed by one character (for 
example @option{-h}).
+All options in Gnuastro accept the long format, which has two hyphens and can have many characters (for example, @option{--hdu}).
+Short options only have one hyphen (@key{-}) followed by one character (for 
example, @option{-h}).
 You can see some examples in the list of options in @ref{Common options} or 
those for each program's ``Invoking ProgramName'' section.
 Both formats are shown for those which support both.
 First the short is shown, then the long.
@@ -9662,7 +9663,7 @@ Some options need to be given a value if they are called 
and some do not.
 You can think of the latter type of options as on/off options.
 These two types of options can be distinguished using the output of the 
@option{--help} and @option{--usage} options, which are common to all GNU 
software, see @ref{Getting help}.
 In Gnuastro we use the following strings to specify when the option needs a 
value and what format that value should be in.
-More specific tests will be done in the program and if the values are out of 
range (for example negative when the program only wants a positive value), an 
error will be reported.
+More specific tests will be done in the program and if the values are out of 
range (for example, negative when the program only wants a positive value), an 
error will be reported.
 
 @vtable @option
 
@@ -9676,12 +9677,12 @@ If they are for fractions, they will have to be less 
than or equal to unity.
 
 @item STR
 The value is read as a string of characters.
-For example column names in a table, or HDU names in a multi-extension FITS 
file.
+For example, column names in a table, or HDU names in a multi-extension FITS file.
 Other examples include human-readable settings by some programs like the 
@option{--domain} option of the Convolve program that can be either 
@code{spatial} or @code{frequency} (to specify the type of convolution, see 
@ref{Convolve}).
 
 @item FITS @r{or} FITS/TXT
 The value should be a file (most commonly FITS).
-In many cases, other formats may also be accepted (for example input tables 
can be FITS or plain-text, see @ref{Recognized table formats}).
+In many cases, other formats may also be accepted (for example, input tables 
can be FITS or plain-text, see @ref{Recognized table formats}).
 
 @end vtable
 
@@ -9690,9 +9691,9 @@ In many cases, other formats may also be accepted (for 
example input tables can
 @cindex Option values
 To specify a value in the short format, simply put the value after the option.
 Note that since the short options are only one character long, you do not have 
to type anything between the option and its value.
-For the long option you either need white space or an @option{=} sign, for 
example @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are 
all equivalent.
+For the long option you either need white space or an @option{=} sign, for 
example, @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are 
all equivalent.
 
-The short format of on/off options (those that do not need values) can be 
concatenated for example these two hypothetical sequences of options are 
equivalent: @option{-a -b -c4} and @option{-abc4}.
+The short format of on/off options (those that do not need values) can be concatenated; for example, these two hypothetical sequences of options are equivalent: @option{-a -b -c4} and @option{-abc4}.
 As an example, consider the following command to run Crop:
 @example
 $ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
@@ -9721,7 +9722,7 @@ If you are satisfied with the change, you can remove the 
original option for hum
 If the change was not satisfactory, you can remove the one you just added and 
not worry about forgetting the original value.
 Without this capability, you would have to memorize or save the original value somewhere else, run the command and then change the value again, which is not at all convenient and can potentially cause lots of bugs.
 
-On the other hand, some options can be called multiple times in one run of a 
program and can thus take multiple values (for example see the 
@option{--column} option in @ref{Invoking asttable}.
+On the other hand, some options can be called multiple times in one run of a program and can thus take multiple values (for example, see the @option{--column} option in @ref{Invoking asttable}).
 In these cases, the order of stored values is the same order that you 
specified on the command-line.
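
For instance (a sketch with hypothetical column names), calling Table as below will output the three columns in exactly this order:

@example
$ asttable catalog.fits --column=RA --column=DEC --column=MAG
@end example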
 
 @cindex Configuration files
@@ -9735,7 +9736,7 @@ These configuration files are fully described in 
@ref{Configuration files}.
 @noindent
 @cindex Tilde expansion as option values
 @strong{CAUTION:} In specifying a file address, if you want to use the shell's 
tilde expansion (@command{~}) to specify your home directory, leave at least 
one space between the option name and your value.
-For example use @command{-o ~/test}, @command{--output ~/test} or 
@command{--output= ~/test}.
+For example, use @command{-o ~/test}, @command{--output ~/test} or @command{--output= ~/test}.
 Calling them with @command{-o~/test} or @command{--output=~/test} will disable 
shell expansion.
 @end cartouche
 @cartouche
@@ -9781,10 +9782,10 @@ Number of micro-seconds to wait for writing/typing in 
the @emph{first line} of s
 This is only relevant for programs that also accept input from the standard 
input, @emph{and} you want to manually write/type the contents on the terminal.
 When the standard input is already connected to a pipe (output of another 
program), there will not be any waiting (hence no timeout, thus making this 
option redundant).
 
-If the first line-break (for example with the @key{ENTER} key) is not provided 
before the timeout, the program will abort with an error that no input was 
given.
+If the first line-break (for example, with the @key{ENTER} key) is not 
provided before the timeout, the program will abort with an error that no input 
was given.
 Note that this time interval is @emph{only} for the first line that you type.
 Once the first line is given, the program will assume that more data will come and accept the rest of your inputs without any time limit.
-You need to specify the ending of the standard input, for example by pressing 
@key{CTRL-D} after a new line.
+You need to specify the ending of the standard input, for example, by pressing 
@key{CTRL-D} after a new line.
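
As a minimal sketch (using Table, one of the programs that can read a table from the standard input), piping the input as below provides the first line immediately, so the timeout never comes into play:

@example
$ echo "1 2.5" | asttable
@end example

@noindent
Running @command{asttable} with no input file or pipe would instead wait for you to type the first row within the timeout.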
 
 Note that any input you write/type into a program on the command-line with standard input will be discarded (lost) once the program is finished.
 It is only recoverable manually from your command-line (where you actually typed) as long as the terminal is open.
@@ -9859,7 +9860,7 @@ For more on the different formalisms, please see Section 
8.1 of the FITS standar
 In short, in the @code{PCi_j} formalism, we only keep the linear rotation 
matrix in these keywords and put the scaling factor (or the pixel scale in 
astronomical imaging) in the @code{CDELTi} keywords.
 In the @code{CDi_j} formalism, we blend the scaling and rotation into a single matrix and keep that matrix in these FITS keywords.
 By default, Gnuastro uses the @code{PCi_j} formalism, because it greatly helps 
in human readability of the raw keywords and is also the default mode of WCSLIB.
-However, in some circumstances it may be necessary to have the keywords in the 
CD format; for example when you need to feed the outputs into other software 
that do not follow the full FITS standard and only recognize the @code{CDi_j} 
formalism.
+However, in some circumstances it may be necessary to have the keywords in the 
CD format; for example, when you need to feed the outputs into other software 
that do not follow the full FITS standard and only recognize the @code{CDi_j} 
formalism.
 
 @table @command
 @item txt
@@ -9893,7 +9894,7 @@ To ensure this default behavior, the pre-defined value to 
this option is an extr
 Please note that using a non-volatile file (in the HDD/SSD) instead of RAM can significantly increase the program's running time, especially on HDDs (where read/write is slower).
 Also, note that the number of memory-mapped files that your kernel can support 
is limited.
 So when this option is necessary, it is best to give it values larger than 1 
megabyte (@option{--minmapsize=1000000}).
-You can then decrease it for a specific program's invocation on a large input 
after you see memory issues arise (for example an error, or the program not 
aborting and fully consuming your memory).
+You can then decrease it for a specific program's invocation on a large input 
after you see memory issues arise (for example, an error, or the program not 
aborting and fully consuming your memory).
 If you see randomly named files remaining in this directory when the program finishes normally, please send us a bug report so we can address the problem, see @ref{Report a bug}.
 
 @cartouche
@@ -9913,7 +9914,7 @@ for more.
 The size of regular tiles for tessellation, see @ref{Tessellation}.
 For each dimension an integer length (in units of data-elements or pixels) is 
necessary.
 If the number of input dimensions is different from the number of values given 
to this option, the program will stop with an error.
-Values must be separated by commas (@key{,}) and can also be fractions (for 
example @code{4/2}).
+Values must be separated by commas (@key{,}) and can also be fractions (for 
example, @code{4/2}).
 If they are fractions, the result must be an integer, otherwise an error will 
be printed.
 
 @item -M INT[,INT[,...]]
@@ -9939,7 +9940,7 @@ Note that the tile IDs start from 0.
 See @ref{Tessellation} for more on Tiling an image in Gnuastro.
 
 @item --oneelempertile
-When showing the tile values (for example with @option{--checktiles}, or when 
the program's output is tessellated) only use one element for each tile.
+When showing the tile values (for example, with @option{--checktiles}, or when 
the program's output is tessellated) only use one element for each tile.
 This can be useful when only the relative values given to each tile compared 
to the rest are important or need to be checked.
 Since the tiles usually have a large number of pixels within them, the output will be much smaller, and so easier to read, write, store, or send.
 
@@ -9948,7 +9949,7 @@ But with this option, all displayed values are going to 
have the (same) size of
 Hence, in such cases, the image proportions are going to be slightly different 
with this option.
 
 If your input image is not exactly divisible by the tile size and you want one 
value per tile for some higher-level processing, all is not lost though.
-You can see how many pixels were within each tile (for example to weight the 
values or discard some for later processing) with Gnuastro's Statistics (see 
@ref{Statistics}) as shown below.
+You can see how many pixels were within each tile (for example, to weight the 
values or discard some for later processing) with Gnuastro's Statistics (see 
@ref{Statistics}) as shown below.
 The output FITS file is going to have two extensions, one with the median 
calculated on each tile and one with the number of elements that each tile 
covers.
 You can then use the @code{where} operator in @ref{Arithmetic} to set the 
values of all tiles that do not have the regular area to a blank value.
 
@@ -10050,7 +10051,7 @@ As an example, you can give your full command-line 
options and even the input an
 If everything is OK, you can just run the same command (easily retrieved from the shell history with the up arrow key) and simply remove the last two characters that showed this option.
 
 No program will actually start its processing when this option is called.
-The otherwise mandatory arguments for each program (for example input image or 
catalog files) are no longer required when you call this option.
+The otherwise mandatory arguments for each program (for example, input image 
or catalog files) are no longer required when you call this option.
 
 @item --config=STR
 Parse @option{STR} as a configuration file name, immediately when this option 
is confronted (see @ref{Configuration files}).
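
For instance (a sketch with a hypothetical file name), the call below loads the options in @file{project.conf} before parsing the rest of the command-line:

@example
$ astnoisechisel image.fits --config=project.conf
@end example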
@@ -10187,15 +10188,15 @@ For file arguments this is the default behavior 
already and you have probably us
 
 Besides this simple/default mode, Bash also enables a high level of 
customization features for its completion.
 These features have been extensively used in Gnuastro to improve your work 
efficiency@footnote{To learn how Gnuastro implements TAB completion in Bash, 
see @ref{Bash programmable completion}.}.
-For example if you are running @code{asttable} (which only accepts files 
containing a table), and you press @key{[TAB]}, it will only suggest files 
containing tables.
+For example, if you are running @code{asttable} (which only accepts files containing a table), and you press @key{[TAB]}, it will only suggest files containing tables.
 As another example, if an option needs image HDUs within a FITS file, pressing 
@key{[TAB]} will only suggest the image HDUs (and not other possibly existing 
HDUs that contain tables, or just metadata).
 Just note that the file name has to be already given on the command-line 
before reaching such options (that look into the contents of a file).
 
 But TAB completion is not limited to file types or contents.
-Arguments/Options that take certain fixed string values will directly suggest 
those strings with TAB, and completely ignore the file structure (for example 
spectral line names in @ref{Invoking astcosmiccal})!
+Arguments/Options that take certain fixed string values will directly suggest 
those strings with TAB, and completely ignore the file structure (for example, 
spectral line names in @ref{Invoking astcosmiccal})!
 As another example, the @option{--numthreads} option (to specify the number of threads to use by the program) will find the number of available threads on the system, and suggest the possible numbers with a TAB!
 
-To activate Gnuastro's custom TAB completion in Bash, you need to put the 
following line in one of your Bash startup files (for example @file{~/.bashrc}).
+To activate Gnuastro's custom TAB completion in Bash, you need to put the 
following line in one of your Bash startup files (for example, 
@file{~/.bashrc}).
 If you installed Gnuastro using the steps of @ref{Quick start}, you should 
have already done this (the command just after @command{sudo make install}).
 For a list of (and discussion on) Bash startup files and installation 
directories see @ref{Installation directory}.
 Of course, if Gnuastro was installed in a custom location, replace the `@file{/usr/local}' part of the line below with the value that was given to @option{--prefix} during Gnuastro's configuration@footnote{In case you do not know the installation directory of Gnuastro on your system, you can find out with this command: @code{which astfits | sed -e"s|/bin/astfits||"}}.
@@ -10420,7 +10421,7 @@ If your system-wide default input extension is 0 (the 
first), then when you want
 With this progressive state of default values to check, you can set different 
default values for the different directories that you would like to run 
Gnuastro in for your different purposes, so you will not have to worry about 
this issue any more.
 
 The same can be said about the @file{gnuastro.conf} files: by specifying a 
behavior in this single file, all Gnuastro programs in the respective 
directory, user, or system-wide steps will behave similarly.
-For example to keep the input's directory when no specific output is given 
(see @ref{Automatic output}), or to not delete an existing file if it has the 
same name as a given output (see @ref{Input output options}).
+For example, to keep the input's directory when no specific output is given (see @ref{Automatic output}), or to not delete an existing file if it has the same name as a given output (see @ref{Input output options}).
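
Such a @file{gnuastro.conf} could look like the sketch below (assuming the two behaviors above correspond to the @option{--keepinputdir} and @option{--dontdelete} options; in configuration files, long option names are written without the leading dashes):

@example
# Common behavior for all Gnuastro programs run from here.
keepinputdir   1
dontdelete     1
@end example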
 
 
 @node Current directory and User wide, System wide, Configuration file 
precedence, Configuration files
@@ -10519,7 +10520,7 @@ In the subsections below each of these methods are 
reviewed.
 If you give this option, the program will not run.
 It will only print a very concise message showing the options and arguments.
 Everything within square brackets (@option{[]}) is optional.
-For example here are the first and last two lines of Crop's @option{--usage} 
is shown:
+For example, here are the first and last two lines of Crop's @option{--usage} output:
 
 @example
 $ astcrop --usage
@@ -10534,7 +10535,7 @@ There are no explanations on the options, just their 
short and long names shown
 After the program name, the short format of all the options that do not 
require a value (on/off options) is displayed.
 Those that do require a value then follow in separate brackets, each 
displaying the format of the input they want, see @ref{Options}.
 Since all options are optional, they are shown in square brackets, but 
arguments can also be optional.
-For example in this example, a catalog name is optional and is only required 
in some modes.
+For example, in the output above, a catalog name is optional and is only required in some modes.
 This is a standard method of displaying optional arguments for all GNU 
software.
 
 @node --help, Man pages, --usage, Getting help
@@ -10549,7 +10550,7 @@ Since the options are shown in detail afterwards, the 
first line of the @option{
 
 In the @option{--help} output of all programs in Gnuastro, the options for 
each program are classified based on context.
 The first two contexts are always options to do with the input and output 
respectively.
-For example input image extensions or supplementary input files for the inputs.
+For example, input image extensions or supplementary input files for the inputs.
 The last class of options is also fixed in all of Gnuastro; it shows operating mode options.
 Most of these options are already explained in @ref{Operating mode options}.
 
@@ -10599,7 +10600,7 @@ $ astnoisechisel --help > filename.txt
 @cindex Command-line searching text
 In case you have a special keyword you are looking for in the help, you do not 
have to go through the full list.
 GNU Grep is made for this job.
-For example if you only want the list of options whose @option{--help} output 
contains the word ``axis'' in Crop, you can run the following command:
+For example, if you only want the list of options whose @option{--help} output contains the word ``axis'' in Crop, you can run the following command:
 
 @example
 $ astcrop --help | grep axis
@@ -10610,7 +10611,7 @@ $ astcrop --help | grep axis
 @cindex Customize @option{--help} output
 @cindex @option{--help} output customization
 If the output of this option does not fit nicely within the confines of your terminal, GNU does enable you to customize its output through the environment variable @code{ARGP_HELP_FMT}; you can set various parameters which specify the formatting of the help messages.
-For example if your terminals are wider than 70 spaces (say 100) and you feel 
there is too much empty space between the long options and the short 
explanation, you can change these formats by giving values to this environment 
variable before running the program with the @option{--help} output.
+For example, if your terminals are wider than 70 spaces (say 100) and you feel there is too much empty space between the long options and the short explanation, you can change these formats by giving values to this environment variable before running the program with the @option{--help} output.
 You can define this environment variable in this manner:
 @example
 $ export ARGP_HELP_FMT=rmargin=100,opt-doc-col=20
@@ -10688,7 +10689,7 @@ $ info astprogramname
 
 @noindent
 you will be taken to the section titled ``Invoking ProgramName'' which 
explains the inputs and outputs along with the command-line options for that 
program.
-Finally, if you run Info with the official program name, for example Crop or 
NoiseChisel:
+Finally, if you run Info with the official program name, for example, Crop or 
NoiseChisel:
 
 @example
 $ info ProgramName
@@ -10760,7 +10761,7 @@ Thus @option{--numthreads} is the only common option in 
Gnuastro's programs with
 Spinning off threads is not necessarily the most efficient way to run an 
application.
 Creating a new thread is not a cheap operation for the operating system.
 It is most useful when the input data are fixed and you want the same 
operation to be done on parts of it.
-For example one input image to Crop and multiple crops from various parts of 
it.
+For example, one input image to Crop and multiple crops from various parts of it.
 In this fashion, the image is loaded into memory once, all the crops are 
divided between the number of threads internally and each thread cuts out those 
parts which are assigned to it from the same image.
 On the other hand, if you have multiple images and you want to crop the same 
region(s) out of all of them, it is much more efficient to set 
@option{--numthreads=1} (so no threads spin off) and run Crop multiple times 
simultaneously, see @ref{How to run simultaneous operations}.
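
As a rough sketch (with hypothetical file names, coordinates and width), such a simultaneous run could look like this in the shell, where the trailing @command{&} sends each Crop job to the background:

@example
$ astcrop image-a.fits --numthreads=1 --mode=wcs \
          --center=53.16,-27.78 --width=0.02 &
$ astcrop image-b.fits --numthreads=1 --mode=wcs \
          --center=53.16,-27.78 --width=0.02 &
$ wait
@end example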
 
@@ -10801,7 +10802,7 @@ If you want to run the same operation on 
@strong{multiple} data sets, it is best
 There are two@footnote{A third way would be to open multiple terminal emulator 
windows in your GUI, type the commands separately on each and press @key{Enter} 
once on each terminal, but this is far too frustrating, tedious and prone to 
errors.
 It's therefore not a realistic solution when tens, hundreds or thousands of 
operations (your research targets, multiplied by the operations you do on each) 
are to be done.} approaches to simultaneously execute a program: using GNU 
Parallel or Make (GNU Make is the most common implementation).
 The first is very useful when you only want to do one job multiple times and 
want to get back to your work without actually keeping the command you ran.
-The second is usually for more important operations, with lots of dependencies 
between the different products (for example a full scientific research).
+The second is usually for more important operations, with lots of dependencies 
between the different products (for example, a full scientific research).
 
 @table @asis
 
@@ -10839,7 +10840,7 @@ As you are able to get promising results, you improve 
the method and use it on a
 In the process, you will confront many issues that have to be corrected (bugs 
in software development jargon).
 Make is a wonderful tool to manage this style of development.
 
-Besides the raw data analysis pipeline, Make has been used to for producing 
reproducible papers, for example see 
@url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} 
of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
+Besides the raw data analysis pipeline, Make has been used for producing reproducible papers; for example, see @url{https://gitlab.com/makhlaghi/NoiseChisel-paper, the reproduction pipeline} of the paper introducing @ref{NoiseChisel} (one of Gnuastro's programs).
 In fact, the NoiseChisel paper's Make-based workflow was the foundation of a parallel project called @url{http://maneage.org, Maneage} (@emph{Man}aging data lin@emph{eage}), described more fully in Akhlaghi et al. @url{https://arxiv.org/abs/2006.03018, 2021}.
 Therefore, it is a very useful tool for complex scientific workflows.
 
@@ -10848,7 +10849,7 @@ GNU 
Make@footnote{@url{https://www.gnu.org/software/make/}} is the most common i
 Make is very basic and simple, and thus the manual is short (the most 
important parts are in the first roughly 100 pages) and easy to read/understand.
 
 Make comes with a @option{--jobs} (@option{-j}) option which allows you to 
specify the maximum number of jobs that can be done simultaneously.
-For example if you have 8 threads available to your operating system.
+For example, if you have 8 threads available to your operating system,
 you can run:
 
 @example
@@ -10889,7 +10890,7 @@ In the `signed' convention, one bit is reserved for the 
sign (stating that the i
 The `unsigned' integers use that bit in the actual number and thus contain 
only positive numbers (starting from zero).
 
 Therefore, with the same number of bits, both signed and unsigned integers can represent the same number of values, but the positive limit of the @code{unsigned} types is double that of their @code{signed} counterparts with the same width (at the expense of not having negative numbers).
-When the context of your work does not involve negative numbers (for example 
counting, where negative is not defined), it is best to use the @code{unsigned} 
types.
+When the context of your work does not involve negative numbers (for example, 
counting, where negative is not defined), it is best to use the @code{unsigned} 
types.
 For the full numerical range of all integer types, see below.
 
 Another standard of converting a given number of bits to numbers is the floating point standard; this standard can @emph{approximately} store any real number with a given precision.
@@ -10900,14 +10901,14 @@ If you are interested to learn more about it, you can 
start with the @url{https:
 
 Practically, you can use Gnuastro's Arithmetic program to convert/change the 
type of an image/datacube (see @ref{Arithmetic}), or Gnuastro Table program to 
convert a table column's data type (see @ref{Column arithmetic}).
 Conversion of a dataset's type is necessary in some contexts.
-For example the program/library, that you intend to feed the data into, only 
accepts floating point values, but you have an integer image/column.
+For example, the program/library that you intend to feed the data into only accepts floating point values, but you have an integer image/column.
 Another situation where conversion can be helpful is when you know that your data only has values that fit within @code{int8} or @code{uint16}, but is currently formatted in the @code{float64} type.
 
 The important thing to consider is that operations involving wider, floating 
point, or signed types can be significantly slower than smaller-width, integer, 
or unsigned types respectively.
 Note that besides speed, a wider type also requires much more storage space 
(by 4 or 8 times).
 Therefore, when you confront such situations that can be optimized and want to 
store/archive/transfer the data, it is best to use the most efficient type.
-For example if your dataset (image or table column) only has positive integers 
less than 65535, store it as an unsigned 16-bit integer for faster processing, 
faster transfer, and less storage space.
+For example, if your dataset (image or table column) only has positive integers less than 65535, store it as an unsigned 16-bit integer for faster processing, faster transfer, and less storage space.
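
With the Arithmetic program mentioned above (a sketch, assuming a hypothetical @file{image.fits} that satisfies this condition), such a conversion can be a single command:

@example
$ astarithmetic image.fits uint16 --output=image-u16.fits
@end example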
 
 The short and long names for the recognized numeric data types in Gnuastro are 
listed below.
 Both short and long names can be used when you want to specify a type.
@@ -10967,12 +10968,12 @@ Given the heavy noise in astronomical data, this is 
usually more than sufficient
 64-bit (double-precision) floating point types.
 The maximum (minimum is its negative) possible value is @mymath{\sim10^{308}}.
 Double-precision floating points can accurately represent a floating point 
number @mymath{\sim15.9} significant decimals.
-This is usually good for processing (mixing) the data internally, for example 
a sum of single precision data (and later storing the result as @code{float32}).
+This is usually good for processing (mixing) the data internally, for example, 
a sum of single precision data (and later storing the result as @code{float32}).
 @end table
 
 @cartouche
 @noindent
-@strong{Some file formats do not recognize all types.} For example the FITS 
standard (see @ref{Fits}) does not define @code{uint64} in binary tables or 
images.
+@strong{Some file formats do not recognize all types.} For example, the FITS standard (see @ref{Fits}) does not define @code{uint64} in binary tables or images.
 When a type is not acceptable for output into a given file format, the 
respective Gnuastro program or library will let you know and abort.
 On the command-line, you can convert the numerical type of an image, or table 
column into another type with @ref{Arithmetic} or @ref{Table} respectively.
 If you are writing your own program, you can use the 
@code{gal_data_copy_to_new_type()} function in Gnuastro's library, see 
@ref{Copying datasets}.
@@ -11017,7 +11018,7 @@ As soon as an intermediate dataset is no longer 
necessary for the next internal
 This allows Gnuastro programs to minimize their usage of your system's RAM 
over the full running time.
 
 The situation gets complicated when the datasets are large (compared to your 
available RAM when the program is run).
-For example if a dataset is half the size of your system's available RAM, and 
the program's internal analysis needs three or more intermediately processed 
copies of it at one moment in its analysis.
+For example, a dataset may be half the size of your system's available RAM, while the program's internal analysis needs three or more intermediately processed copies of it at one moment in its analysis.
 There will not be enough RAM to keep those higher-level intermediate data.
 In such cases, programs that do not do any memory management will crash.
 But fortunately, Gnuastro's programs do have a memory management plan for such situations.
@@ -11053,7 +11054,7 @@ astarithmetic: ./gnuastro_mmap/Fu7Dhs: deleted
 To disable these messages, you can run the program with @code{--quietmmap}, or 
set the @code{quietmmap} variable in the allocating library function to be 
non-zero.
 
 An important component of these messages is the name of the memory-mapped file.
-Knowing that the file has been deleted is important for the user if the 
program crashes for any reason: internally (for example a parameter is given 
wrongly) or externally (for example you mistakenly kill the running job).
+Knowing that the file has been deleted is important for the user if the 
program crashes for any reason: internally (for example, a parameter is given 
wrongly) or externally (for example, you mistakenly kill the running job).
 In the event of a crash, the memory-mapped files will not be deleted; you have to delete them manually, because they are usually large and (with successive crashes) may soon fill your storage.
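
After such a crash, the leftover files can be inspected and removed manually (following the default directory name shown in the message above):

@example
$ ls gnuastro_mmap/
$ rm gnuastro_mmap/*
@end example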
 
 This brings us to managing the memory-mapped files in your non-volatile memory.
@@ -11088,16 +11089,16 @@ The programs will delete a memory-mapped file when it 
is no longer needed, but t
 So if your project involves many Gnuastro programs (possibly called in 
parallel) and you want your memory-mapped files to be in a different location, 
you just have to make the symbolic link above once at the start, and all the 
programs will use it if necessary.
 
 Another memory-management scenario that may happen is this: you do not want a 
Gnuastro program to allocate internal datasets in the RAM at all.
-For example the speed of your Gnuastro-related project does not matter at that 
moment, and you have higher-priority jobs that are being run at the same time 
which need to have RAM available.
+For example, the speed of your Gnuastro-related project does not matter at that moment, and you have higher-priority jobs that are being run at the same time which need to have RAM available.
 In such cases, you can use the @option{--minmapsize} option that is available 
in all Gnuastro programs (see @ref{Processing options}).
 Any intermediate dataset that has a size larger than the value of this option 
will be memory-mapped, even if there is space available in your RAM.
-For example if you want any dataset larger than 100 megabytes to be 
memory-mapped, use @option{--minmapsize=100000000} (8 zeros!).
+For example, if you want any dataset larger than 100 megabytes to be memory-mapped, use @option{--minmapsize=100000000} (8 zeros!).
 
 @cindex Linux kernel
 @cindex Kernel, Linux
 You should not set the value of @option{--minmapsize} to be too small; otherwise, even small intermediate values (that are usually very numerous) in the program will be memory-mapped.
 However, the kernel can only host a limited number of memory-mapped files at every moment (by all running programs combined).
-For example in the default@footnote{If you need to host more memory-mapped 
files at one moment, you need to build your own customized Linux kernel.} Linux 
kernel on GNU/Linux operating systems this limit is roughly 64000.
+For example, in the default@footnote{If you need to host more memory-mapped files at one moment, you need to build your own customized Linux kernel.} Linux kernel on GNU/Linux operating systems, this limit is roughly 64000.
 If the total number of memory-mapped files exceeds this number, all the 
programs using them will crash.
 Gnuastro's programs will warn you if your given value is too small and may 
cause a problem later.
 
@@ -11124,7 +11125,7 @@ One such property can be the object's magnitude, which 
is the sum of pixels with
 Many such properties can be derived from the raw pixel values and their 
position, see @ref{Invoking astmkcatalog} for a long list.
 
 As a summary, for each labeled region (or, galaxy) we have one @emph{row} and 
for each measured property we have one @emph{column}.
-This high-level structure is usually the first step for higher-level analysis, 
for example finding the stellar mass or photometric redshift from magnitudes in 
multiple colors.
+This high-level structure is usually the first step for higher-level analysis, 
for example, finding the stellar mass or photometric redshift from magnitudes 
in multiple colors.
 Thus, tables are not just outputs of programs; in fact, it is much more common for tables to be inputs of programs.
 For example, to make a mock galaxy image, you need to feed the properties of each galaxy into @ref{MakeProfiles} for it to do the inverse of the process above and make a simulated image from a catalog, see @ref{Sufi simulates a detection}.
 In other cases, you can feed a table into @ref{Crop} and it will crop out 
regions centered on the positions within the table, see @ref{Reddest clumps 
cutouts and parallelization}.
@@ -11165,7 +11166,7 @@ It is fully described in @ref{Gnuastro text table 
format}.
 @cindex ASCII table, FITS
 @item FITS ASCII tables
 The FITS ASCII table extension is fully in ASCII encoding and thus easily 
readable on any text editor (assuming it is the only extension in the FITS 
file).
-If the FITS file also contains binary extensions (for example an image or 
binary table extensions), then there will be many hard to print characters.
+If the FITS file also contains binary extensions (for example, an image or 
binary table extensions), then there will be many hard to print characters.
 The FITS ASCII format does not have new line characters to separate rows.
 In the FITS ASCII table standard, each row is defined as a fixed number of 
characters (value to the @code{NAXIS1} keyword), so to visually inspect it 
properly, you would have to adjust your text editor's width to this value.
 All columns start at given character positions and have a fixed width (number 
of characters).
@@ -11177,7 +11178,7 @@ One problem with the binary format on the other hand is 
that it is not portable
 But since ASCII characters are defined on a byte and are well recognized, they 
are better for portability on those various systems.
 Gnuastro's plain text table format described below is much more portable and 
easier to read/write/interpret by humans manually.
 
-Generally, as the name implies, this format is useful for when your table 
mainly contains ASCII columns (for example file names, or descriptions).
+Generally, as the name implies, this format is useful when your table mainly contains ASCII columns (for example, file names or descriptions).
 They can be useful when you need to include columns with structured ASCII 
information along with other extensions in one FITS file.
 In such cases, you can also consider header keywords (see @ref{Fits}).
 
@@ -11193,7 +11194,7 @@ But if you keep this same number (to the approximate 
precision possible) as a 4-
 When catalogs contain thousands/millions of rows in tens/hundreds of columns, 
this can lead to significant improvements in memory/band-width usage.
 Moreover, since the CPU does its operations in the binary formats, reading the 
table in and writing it out is also much faster than an ASCII table.
 
-When you are dealing with integer numbers, the compression ratio can be even 
better, for example if you know all of the values in a column are positive and 
less than @code{255}, you can use the @code{unsigned char} type which only 
takes one byte! If they are between @code{-128} and @code{127}, then you can 
use the (signed) @code{char} type.
+When you are dealing with integer numbers, the compression ratio can be even 
better, for example, if you know all of the values in a column are positive and 
less than @code{255}, you can use the @code{unsigned char} type which only 
takes one byte! If they are between @code{-128} and @code{127}, then you can 
use the (signed) @code{char} type.
 So if you are thoughtful about the limits of your integer columns, you can greatly reduce the size of your file and also improve the speed at which it is read/written.
 This can be very useful when sharing your results with collaborators or 
publishing them.
 To decrease the file size even more you can name your output as ending in 
@file{.fits.gz} so it is also compressed after creation.
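
For instance (a sketch with hypothetical file names), writing a compressed binary table with the Table program is just a matter of the output suffix:

@example
$ asttable catalog.fits --output=catalog.fits.gz
@end example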
@@ -11218,7 +11219,7 @@ The delimiters (or characters separating the columns) 
are white space characters
 The only further requirement is that all rows/lines must have the same number 
of columns.
 
 The columns do not have to be exactly under each other and the rows can be 
arbitrarily long with different lengths.
-For example the following contents in a file would be interpreted as a table 
with 4 columns and 2 rows, with each element interpreted as a @code{double} 
type (see @ref{Numeric data types}).
+For example, the following contents in a file would be interpreted as a table with 4 columns and 2 rows, with each element interpreted as a @code{double} type (see @ref{Numeric data types}).
 
 @example
 1     2.234948   128   39.8923e8
@@ -11232,12 +11233,12 @@ This can become frustrating and prone to bad errors 
(getting the columns wrong)
 It is also bad for sending to a colleague, because they will find it hard to 
remember/use the columns properly.
 
 To solve these problems in Gnuastro's programs/libraries you are not limited 
to using the column's number, see @ref{Selecting table columns}.
-If the columns have names, units, or comments you can also select your columns 
based on searches/matches in these fields, for example see @ref{Table}.
+If the columns have names, units, or comments, you can also select your columns based on searches/matches in these fields; for example, see @ref{Table}.
 Also, in this manner, you cannot guide the program reading the table on how to 
read the numbers.
 As an example, the first and third columns above can be read as integer types: 
the first column might be an ID and the third can be the number of pixels an 
object occupies in an image.
 So there is no need to read these two columns as a @code{double} type (which takes more memory, and is slower).
 
-In the bare-minimum example above, you also cannot use strings of characters, 
for example the names of filters, or some other identifier that includes 
non-numerical characters.
+In the bare-minimum example above, you also cannot use strings of characters, 
for example, the names of filters, or some other identifier that includes 
non-numerical characters.
 In the absence of any information, only numbers can be read robustly.
 Assuming we read columns with non-numerical characters as strings, there would still be the problem that the strings might contain space (or any delimiter) characters for some rows.
 So, each `word' in the string will be interpreted as a column and the program 
will abort with an error that the rows do not have the same number of columns.
@@ -11259,11 +11260,11 @@ A column information comment is assumed to have the 
following format:
 @noindent
 Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted 
as the column name (so it can contain anything except the `@key{[}' character).
 Anything between the `@key{]}' and the end of the line is defined as a comment.
-Within the brackets, anything before the first `@key{,}' is the units 
(physical units, for example km/s, or erg/s), anything before the second 
`@key{,}' is the short type identifier (see below, and @ref{Numeric data 
types}).
+Within the brackets, anything before the first `@key{,}' is the units 
(physical units, for example, km/s, or erg/s), anything before the second 
`@key{,}' is the short type identifier (see below, and @ref{Numeric data 
types}).
 If the type identifier is not recognized, the default 64-bit floating point 
type will be used.
 
 Finally (still within the brackets), any non-white characters after the second 
`@key{,}' are interpreted as the blank value for that column (see @ref{Blank 
pixels}).
-The blank value can either be in the same type as the column (for example 
@code{-99} for a signed integer column), or any string (for example @code{NaN} 
in that same column).
+The blank value can either be in the same type as the column (for example, 
@code{-99} for a signed integer column), or any string (for example, @code{NaN} 
in that same column).
 In both cases, the values will be stored in memory as Gnuastro's fixed blank 
values for each type.
 For floating point types, Gnuastro's internal blank value is IEEE NaN 
(Not-a-Number).
 For signed integers, it is the smallest possible value and for unsigned integers, it is the largest possible value.
@@ -11271,7 +11272,7 @@ For signed integers, it is the smallest possible value 
and for unsigned integers
 When a formatting problem occurs, or when the column was already given 
meta-data in a previous comment, or when the column number is larger than the 
actual number of columns in the table (the non-commented or empty lines), then 
the comment information line will be ignored.
 
 When a comment information line can be used, the leading and trailing white 
space characters will be stripped from all of the elements.
-For example in this line:
+For example, in this line:
 
 @example
 # Column 5:  column name   [km/s,    f32,-99] Redshift as speed
@@ -11315,7 +11316,7 @@ Therefore, the only time you have to pay attention to 
the positioning and spaces
 
 The only limitation in this format is that trailing and leading white space 
characters will be removed from the columns that are read.
 In most cases, this is the desired behavior, but if trailing and leading 
white-spaces are critically important to your analysis, define your own 
starting and ending characters and remove them after the table has been read.
-For example in the sample table below, the two `@key{|}' characters (which are 
arbitrary) will remain in the value of the second column and you can remove 
them manually later.
+For example, in the sample table below, the two `@key{|}' characters (which are arbitrary) will remain in the value of the second column and you can remove them manually later.
 If only one of the leading or trailing white spaces is important for your 
work, you can only use one of the `@key{|}'s.
 
 @example
@@ -11330,7 +11331,7 @@ If only one of the leading or trailing white spaces is 
important for your work,
 Note that the FITS binary table standard does not define the @code{unsigned 
int} and @code{unsigned long} types, so if you want to convert your tables to 
FITS binary tables, use other types.
 Also, note that in the FITS ASCII table, there is only one integer type 
(@code{long}).
 So if you convert a Gnuastro plain text table to a FITS ASCII table with the 
@ref{Table} program, the type information for integers will be lost.
-Conversely if integer types are important for you, you have to manually set 
them when reading a FITS ASCII table (for example with the Table program when 
reading/converting into a file, or with the @file{gnuastro/table.h} library 
functions when reading into memory).
+Conversely, if integer types are important for you, you have to manually set them when reading a FITS ASCII table (for example, with the Table program when reading/converting into a file, or with the @file{gnuastro/table.h} library functions when reading into memory).
 
 
 @node Selecting table columns,  , Gnuastro text table format, Tables
@@ -11338,7 +11339,7 @@ Conversely if integer types are important for you, you 
have to manually set them
 
 At the lowest level, the only defining aspect of a column in a table is its 
number, or position.
 But selecting columns purely by number is not very convenient and, especially when the tables are large, it can be very frustrating and prone to errors.
-Hence, table file formats (for example see @ref{Recognized table formats}) 
have ways to store additional information about the columns (meta-data).
+Hence, table file formats (for example, see @ref{Recognized table formats}) 
have ways to store additional information about the columns (meta-data).
 Some of the most common pieces of information about each column are its 
@emph{name}, the @emph{units} of data in it, and a @emph{comment} for 
longer/informal description of the column's data.
 
 To facilitate research with Gnuastro, you can select columns by matching, or 
searching in these three fields, besides the low-level column number.
@@ -11348,7 +11349,7 @@ To view the full list of information on the columns in 
the table, you can use th
 $ asttable --information table-file
 @end example
 
-Gnuastro's programs need the columns for different purposes, for example in 
Crop, you specify the columns containing the central coordinates of the crop 
centers with the @option{--coordcol} option (see @ref{Crop options}).
+Gnuastro's programs need the columns for different purposes; for example, in Crop, you specify the columns containing the central coordinates of the crop centers with the @option{--coordcol} option (see @ref{Crop options}).
 On the other hand, in MakeProfiles, to specify the column containing the 
profile position angles, you must use the @option{--pcol} option (see 
@ref{MakeProfiles catalog}).
 Thus, there can be no unified common option name to select columns for all 
programs (different columns have different purposes).
 However, when the program expects a column for a specific context, the option 
names end in the @option{col} suffix like the examples above.
@@ -11363,7 +11364,7 @@ The matching will be done following this convention:
 
 @itemize
 @item
-If the value is enclosed in two slashes (for example @command{-x/RA_/}, or 
@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a 
regular expression with the same convention as GNU AWK.
+If the value is enclosed in two slashes (for example, @command{-x/RA_/}, or 
@option{--coordcol=/RA_/}, see @ref{Crop options}), then it is assumed to be a 
regular expression with the same convention as GNU AWK.
 GNU AWK has a very well written 
@url{https://www.gnu.org/software/gawk/manual/html_node/Regexp.html, chapter} 
describing regular expressions, so we will not continue discussing them here.
 Regular expressions are a very powerful tool in matching text and useful in 
many contexts.
 We thus strongly encourage reviewing this chapter for greatly improving the 
quality of your work in many cases, not just for searching column meta-data in 
Gnuastro.
@@ -11383,29 +11384,29 @@ In this case, the order of the selected columns (with 
one call) will be the same
 @node Tessellation, Automatic output, Tables, Common program behavior
 @section Tessellation
 
-It is sometimes necessary to classify the elements in a dataset (for example 
pixels in an image) into a grid of individual, non-overlapping tiles.
-For example when background sky gradients are present in an image, you can 
define a tile grid over the image.
+It is sometimes necessary to classify the elements in a dataset (for example, 
pixels in an image) into a grid of individual, non-overlapping tiles.
+For example, when background sky gradients are present in an image, you can define a tile grid over the image.
 When the tile sizes are set properly, the background's variation over each 
tile will be negligible, allowing you to measure (and subtract) it.
-In other cases (for example spatial domain convolution in Gnuastro, see 
@ref{Convolve}), it might simply be for speed of processing: each tile can be 
processed independently on a separate CPU thread.
+In other cases (for example, spatial domain convolution in Gnuastro, see 
@ref{Convolve}), it might simply be for speed of processing: each tile can be 
processed independently on a separate CPU thread.
 In the arts and mathematics, this process is formally known as 
@url{https://en.wikipedia.org/wiki/Tessellation, tessellation}.
 
 The size of the regular tiles (in units of data-elements, or pixels in an 
image) can be defined with the @option{--tilesize} option.
 It takes multiple numbers (separated by a comma) which will be the length 
along the respective dimension (in FORTRAN/FITS dimension order).
 Divisions are also acceptable, but must result in an integer.
-For example @option{--tilesize=30,40} can be used for an image (a 2D dataset).
+For example, @option{--tilesize=30,40} can be used for an image (a 2D dataset).
 The regular tile size along the first FITS axis (horizontal when viewed in SAO 
DS9) will be 30 pixels and along the second it will be 40 pixels.
 Ideally, @option{--tilesize} should be selected such that all tiles in the 
image have exactly the same size.
 In other words, that the dataset length in each dimension is divisible by the 
tile size in that dimension.
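+
+As a minimal sketch (assuming a hypothetical 2D image @file{image.fits}, given to a program that uses Gnuastro's tessellation, like NoiseChisel), the grid described above would be requested like this:
+
+@example
+$ astnoisechisel image.fits -h1 --tilesize=30,40
+@end example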
 
 However, this is not always possible: the dataset can be any size and every 
pixel in it is valuable.
-In such cases, Gnuastro will look at the significance of the remainder length, 
if it is not significant (for example one or two pixels), then it will just 
increase the size of the first tile in the respective dimension and allow the 
rest of the tiles to have the required size.
-When the remainder is significant (for example one pixel less than the size 
along that dimension), the remainder will be added to one regular tile's size 
and the large tile will be cut in half and put in the two ends of the 
grid/tessellation.
+In such cases, Gnuastro will look at the significance of the remainder length; if it is not significant (for example, one or two pixels), then it will just increase the size of the first tile in the respective dimension and allow the rest of the tiles to have the required size.
+When the remainder is significant (for example, one pixel less than the size 
along that dimension), the remainder will be added to one regular tile's size 
and the large tile will be cut in half and put in the two ends of the 
grid/tessellation.
 In this way, all the tiles in the central regions of the dataset will have the 
regular tile sizes and the tiles on the edge will be slightly larger/smaller 
depending on the remainder significance.
 The fraction which defines the remainder significance along all dimensions can 
be set through @option{--remainderfrac}.
 
 The best tile size is directly related to the spatial properties of the 
property you want to study (for example, gradient on the image).
 In practice we assume that the gradient is not present over each tile.
-So if there is a strong gradient (for example in long wavelength ground based 
images) or the image is of a crowded area where there is not too much blank 
area, you have to choose a smaller tile size.
+So if there is a strong gradient (for example, in long wavelength ground based 
images) or the image is of a crowded area where there is not too much blank 
area, you have to choose a smaller tile size.
 A larger mesh will give more pixels and so the scatter in the results will be 
less (better statistics).
 
 @cindex CCD
@@ -11417,15 +11418,15 @@ A larger mesh will give more pixels and so the 
scatter in the results will be le
 For raw image processing, a single tessellation/grid is not sufficient.
 Raw images are the unprocessed outputs of the camera detectors.
 Modern detectors usually have multiple readout channels each with its own 
amplifier.
-For example the Hubble Space Telescope Advanced Camera for Surveys (ACS) has 
four amplifiers over its full detector area dividing the square field of view 
to four smaller squares.
-Ground based image detectors are not exempt, for example each CCD of Subaru 
Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, 
but they have the same height of the CCD and divide the width by four parts.
+For example, the Hubble Space Telescope Advanced Camera for Surveys (ACS) has four amplifiers over its full detector area, dividing the square field of view into four smaller squares.
+Ground based image detectors are not exempt; for example, each CCD of Subaru Telescope's Hyper Suprime-Cam camera (which has 104 CCDs) has four amplifiers, but they have the same height as the CCD and divide the width into four parts.
 
 @cindex Channel
 The bias current on each amplifier is different, and initial bias subtraction 
is not perfect.
 So even after subtracting the measured bias current, you can usually still 
identify the boundaries of different amplifiers by eye.
 See Figure 11(a) in Akhlaghi and Ichikawa (2015) for an example.
 This results in the final reduced data having non-uniform amplifier-shaped regions with higher or lower background flux values.
-Such systematic biases will then propagate to all subsequent measurements we 
do on the data (for example photometry and subsequent stellar mass and star 
formation rate measurements in the case of galaxies).
+Such systematic biases will then propagate to all subsequent measurements we 
do on the data (for example, photometry and subsequent stellar mass and star 
formation rate measurements in the case of galaxies).
 
 Therefore an accurate analysis requires a two layer tessellation: the top 
layer contains larger tiles, each covering one amplifier channel.
 For clarity we will call these larger tiles ``channels''.
@@ -11453,8 +11454,8 @@ For the full list of processing-related common options 
(including tessellation o
 @cindex Setting output file names automatically
 All the programs in Gnuastro are designed such that specifying an output file 
or directory (based on the program context) is optional.
 When no output name is explicitly given (with @option{--output}, see 
@ref{Input output options}), the programs will automatically set an output name 
based on the input name(s) and what the program does.
-For example when you are using ConvertType to save FITS image named 
@file{dataset.fits} to a JPEG image and do not specify a name for it, the JPEG 
output file will be name @file{dataset.jpg}.
-When the input is from the standard input (for example a pipe, see 
@ref{Standard input}), and @option{--output} is not given, the output name will 
be the program's name (for example @file{converttype.jpg}).
+For example, when you are using ConvertType to save a FITS image named @file{dataset.fits} to a JPEG image and do not specify a name for it, the JPEG output file will be named @file{dataset.jpg}.
+When the input is from the standard input (for example, a pipe, see 
@ref{Standard input}), and @option{--output} is not given, the output name will 
be the program's name (for example, @file{converttype.jpg}).
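+
+For instance (a minimal sketch, where @file{dataset.fits} is a hypothetical single-channel image), giving only the suffix to @option{--output} lets the automatic output rules above produce @file{dataset.jpg}:
+
+@example
+$ astconvertt dataset.fits -h1 --output=jpg
+@end example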
 
 @vindex --keepinputdir
 Another very important part of the automatic output generation is that all the 
directory information of the input file name is stripped off of it.
@@ -11500,7 +11501,7 @@ For more on this standard, please see @ref{Fits}.
 
 As a community convention described in @ref{Fits}, the first extension of all 
FITS files produced by Gnuastro's programs only contains the meta-data that is 
intended for the file's extension(s).
 For a Gnuastro program, this generic meta-data (that is stored as FITS keyword 
records) is its configuration when it produced this dataset: file name(s) of 
input(s) and option names, values and comments.
-Note that when the configuration is too trivial (only input filename, for 
example the program @ref{Table}) no meta-data is written in this extension.
+Note that when the configuration is too trivial (only input filename, for 
example, the program @ref{Table}) no meta-data is written in this extension.
 
 FITS keywords have the following limitations in regards to generic option 
names and values which are described below:
 
@@ -11596,7 +11597,7 @@ END
 @cindex Decimal separator
 @cindex Language of command-line
 If your @url{https://en.wikipedia.org/wiki/Locale_(computer_software), system 
locale} is not English, it may happen that the `.' is not used as the decimal 
separator of basic command-line tools for input or output.
-For example in Spanish and some other languages the decimal separator (symbol 
used to separate the integer and fractional part of a number), is a comma.
+For example, in Spanish and some other languages the decimal separator (the symbol used to separate the integer and fractional parts of a number) is a comma.
 Therefore in such systems, some programs may print @mymath{0.5} as `@code{0,5}' (instead of `@code{0.5}').
 This mainly happens in some core operating system tools like @command{awk} or @command{seq} that depend on the locale.
 This can cause problems for other programs (like those in Gnuastro that expect 
a `@key{.}' as the decimal separator).
@@ -11625,7 +11626,7 @@ For a more complete explanation on each variable, see 
@url{https://www.baeldung.
 $ locale
 @end example
 
-To avoid these kinds of locale-specific problems (for example another program 
not being able to read `@code{0,5}' as half of unity), you can change the 
locale by giving the value of @code{C} to the @code{LC_NUMERIC} environment 
variable (or the lower-level/generic @code{LC_ALL}).
+To avoid these kinds of locale-specific problems (for example, another program 
not being able to read `@code{0,5}' as half of unity), you can change the 
locale by giving the value of @code{C} to the @code{LC_NUMERIC} environment 
variable (or the lower-level/generic @code{LC_ALL}).
 You will notice that @code{C} is not a human-language and country identifier like @code{en_US}; it is the programming locale, which is well recognized by programmers in all countries and is available on all Unix-like operating systems (others may not be pre-defined and may need installation).
 You can set the @code{LC_NUMERIC} only for a single command (the first one 
below: simply defining the variable in the same line), or all commands within 
the running session (the second command below, or ``exporting'' it to all 
subsequent commands):
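+
+Those two forms might look like the sketch below (using @command{seq}, which was noted above to be locale-dependent):
+
+@example
+$ LC_NUMERIC=C seq 0.5 0.5 1.5
+$ export LC_NUMERIC=C
+@end example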
 
@@ -11663,12 +11664,12 @@ To process, archive and transmit the data, you need a 
container to store it firs
 From the start of the computer age, different formats have been defined to 
store data, optimized for particular applications.
 One format/container can never be useful for all applications: the storage 
defines the application and vice-versa.
 In astronomy, the Flexible Image Transport System (FITS) standard has become 
the most common format of data storage and transmission.
-It has many useful features, for example multiple sub-containers (also known 
as extensions or header data units, HDUs) within one file, or support for 
tables as well as images.
+It has many useful features, for example, multiple sub-containers (also known 
as extensions or header data units, HDUs) within one file, or support for 
tables as well as images.
 Each HDU can store an independent dataset and its corresponding meta-data.
 Therefore, Gnuastro has one program (see @ref{Fits}) specifically designed to 
manipulate FITS HDUs and the meta-data (header keywords) in each HDU.
 
 Your astronomical research does not just involve data analysis (where the FITS 
format is very useful).
-For example you want to demonstrate your raw and processed FITS images or 
spectra as figures within slides, reports, or papers.
+For example, you may want to demonstrate your raw and processed FITS images or spectra as figures within slides, reports, or papers.
 The FITS format is not defined for such applications.
 Thus, Gnuastro also comes with the ConvertType program (see @ref{ConvertType}) 
which can be used to convert a FITS image to and from (where possible) other 
formats like plain text and JPEG (which allow two way conversion), along with 
EPS and PDF (which can only be created from FITS, not the other way round).
 
@@ -11706,11 +11707,11 @@ The FITS 2.0 and 3.0 standards were approved in 2000 
and 2008 respectively, and
 Also see the @url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 
standard paper} for a nice introduction and history along with the full 
standard.
 
 @cindex Meta-data
-Many common image formats, for example a JPEG, only have one image/dataset per 
file, however one great advantage of the FITS standard is that it allows you to 
keep multiple datasets (images or tables along with their separate meta-data) 
in one file.
+Many common image formats, for example, a JPEG, only have one image/dataset per file; however, one great advantage of the FITS standard is that it allows you to keep multiple datasets (images or tables along with their separate meta-data) in one file.
 In the FITS standard, each data + metadata is known as an extension, or more 
formally a header data unit or HDU.
 The HDUs in a file can be completely independent: you can have multiple images 
of different dimensions/sizes or tables as separate extensions in one file.
 However, while the standard does not impose any constraints on the relation 
between the datasets, it is strongly encouraged to group data that are 
contextually related with each other in one file.
-For example an image and the table/catalog of objects and their measured 
properties in that image.
+For example, an image and the table/catalog of objects and their measured properties in that image.
 Other examples can be images of one patch of sky in different colors 
(filters), or one raw telescope image along with its calibration data (tables 
or images).
 
 As discussed above, the extensions in a FITS file can be completely 
independent.
@@ -11720,13 +11721,13 @@ Subsequent extensions may contain data along with 
their own separate meta-data.
 All of Gnuastro's programs also follow this convention: the main output 
dataset(s) are placed in the second (or later) extension(s).
 The first extension contains no data; the program's configuration (input file name, along with all its option values) is stored as its meta-data, see @ref{Output FITS files}.
 
-The meta-data contain information about the data, for example which region of 
the sky an image corresponds to, the units of the data, what telescope, camera, 
and filter the data were taken with, it observation date, or the software that 
produced it and its configuration.
+The meta-data contain information about the data, for example, which region of the sky an image corresponds to, the units of the data, what telescope, camera, and filter the data were taken with, its observation date, or the software that produced it and its configuration.
 Without the meta-data, the raw dataset is practically just a collection of 
numbers and really hard to understand, or connect with the real world (other 
datasets).
 It is thus strongly encouraged to supplement your data (at any level of 
processing) with as much meta-data about your processing/science as possible.
 
 The meta-data of a FITS file is in ASCII format, which can be easily viewed or 
edited with a text editor or on the command-line.
 Each meta-data element (known as a keyword generally) is composed of a name, 
value, units and comments (the last two are optional).
-For example below you can see three FITS meta-data keywords for specifying the 
world coordinate system (WCS, or its location in the sky) of a dataset:
+For example, below you can see three FITS meta-data keywords for specifying the world coordinate system (WCS, or its location in the sky) of a dataset:
 
 @example
 LATPOLE =           -27.805089 / [deg] Native latitude of celestial pole
@@ -11735,11 +11736,11 @@ EQUINOX =               2000.0 / [yr] Equinox of 
equatorial coordinates
 @end example
 
 However, there are some limitations which discourage viewing/editing the 
keywords with text editors.
-For example there is a fixed length of 80 characters for each keyword (its 
name, value, units and comments) and there are no new-line characters, so on a 
text editor all the keywords are seen in one line.
+For example, there is a fixed length of 80 characters for each keyword (its name, value, units and comments) and there are no new-line characters, so on a text editor all the keywords are seen in one line.
 Also, the meta-data keywords are immediately followed by the data, which are commonly in binary format and will show up as strange looking characters on a text editor, significantly slowing it down.
 
 Gnuastro's Fits program was designed to allow easy manipulation of FITS 
extensions and meta-data keywords on the command-line while conforming fully 
with the FITS standard.
-For example you can copy or cut (copy and remove) HDUs/extensions from one 
FITS file to another, or completely delete them.
+For example, you can copy or cut (copy and remove) HDUs/extensions from one FITS file to another, or completely delete them.
 It also has features to delete, add, or edit meta-data keywords within one HDU.
 
 @menu
@@ -11796,7 +11797,7 @@ You can use this to get a general idea of the contents 
of the FITS file and what
 Here is one example of information about a FITS file with four extensions: the first extension has no data; it is a purely meta-data HDU (commonly used to keep meta-data about the whole file, or grouping of extensions, see @ref{Fits}).
 The second extension is an image with name @code{IMAGE} and single precision 
floating point type (@code{float32}, see @ref{Numeric data types}), it has 4287 
pixels along its first (horizontal) axis and 4286 pixels along its second 
(vertical) axis.
 The third extension is also an image with name @code{MASK}.
-It is in 2-byte integer format (@code{int16}) which is commonly used to keep 
information about pixels (for example to identify which ones were saturated, or 
which ones had cosmic rays and so on), note how it has the same size as the 
@code{IMAGE} extension.
+It is in 2-byte integer format (@code{int16}) which is commonly used to keep information about pixels (for example, to identify which ones were saturated, or which ones had cosmic rays and so on); note how it has the same size as the @code{IMAGE} extension.
 The fourth extension is a binary table called @code{CATALOG} which has 12371 rows and 5 columns (it probably contains information about the sources in the image).
 
 @example
@@ -11850,7 +11851,7 @@ Thus they cannot be called together in one command 
(each has its own independent
 @itemx --numhdus
 Print the number of extensions/HDUs in the given file.
 Note that this option must be called alone and will only print a single number.
-It is thus useful in scripts, for example when you need to do check the number 
of extensions in a FITS file.
+It is thus useful in scripts, for example, when you need to check the number of extensions in a FITS file.
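+
+For instance (a minimal sketch, with a hypothetical @file{image.fits}), a shell variable can be filled with the number of extensions like this:
+
+@example
+$ nhdu=$(astfits image.fits --numhdus)
+@end example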
 
 For a complete list of basic meta-data on the extensions in a FITS file, do 
not use any of the options in this section or in @ref{Keyword inspection and 
manipulation}.
 For more, see @ref{Invoking astfits}.
@@ -11907,10 +11908,10 @@ Print the rectangular area (or 3D cube) covered by 
the given image/datacube HDU
 The covered area is reported in two ways:
 1) the center and full width in each dimension,
 2) the minimum and maximum sky coordinates in each dimension.
-This is option is thus useful when you want to get a general feeling of a new 
image/dataset, or prepare the inputs to query external databases in the region 
of the image (for example with @ref{Query}).
+This option is thus useful when you want to get a general feeling of a new image/dataset, or prepare the inputs to query external databases in the region of the image (for example, with @ref{Query}).
 
 If run without the @option{--quiet} option, the values are given with a 
human-friendly description.
-For example here is the output of this option on an image taken near the star 
Castor:
+For example, here is the output of this option on an image taken near the star Castor:
 
 @example
 $ astfits castor.fits --skycoverage
@@ -11978,12 +11979,12 @@ The first (zero-th) HDU cannot be removed with this 
option.
 Consider using @option{--copy} or @option{--cut} in combination with @option{--primaryimghdu} to not have an empty zero-th HDU.
 From CFITSIO: ``In the case of deleting the primary array (the first HDU in 
the file) then [it] will be replaced by a null primary array containing the 
minimum set of required keywords and no data.''.
 So in practice, any existing data (array) and meta-data in the first extension 
will be removed, but the number of extensions in the file will not change.
-This is because of the unique position the first FITS extension has in the 
FITS standard (for example it cannot be used to store tables).
+This is because of the unique position the first FITS extension has in the 
FITS standard (for example, it cannot be used to store tables).
 
 @item --primaryimghdu
 Copy or cut an image HDU to the zero-th HDU/extension of a file that does not yet exist.
 This option is thus irrelevant if the output file already exists or the 
copied/cut extension is a FITS table.
-For example with the commands below, first we make sure that @file{out.fits} 
does not exist, then we copy the first extension of @file{in.fits} to the 
zero-th extension of @file{out.fits}.
+For example, with the commands below, first we make sure that @file{out.fits} does not exist, then we copy the first extension of @file{in.fits} to the zero-th extension of @file{out.fits}.
 
 @example
 $ rm -f out.fits
@@ -12046,9 +12047,9 @@ image-c.fits 2      387    336
 @end example
 
 Another advantage of a table output is that you can directly write the table 
to a file.
-For example if you add @option{--output=fileinfo.fits}, the information above 
will be printed into a FITS table.
+For example, if you add @option{--output=fileinfo.fits}, the information above will be printed into a FITS table.
 You can also pipe it into @ref{Table} to select files based on certain 
properties, to sort them based on another property, or any other operation that 
can be done with Table (including @ref{Column arithmetic}).
-For example with the command below, you can select all the files that have a 
size larger than 500 pixels in both dimensions.
+For example, with the command below, you can select all the files that have a size larger than 500 pixels in both dimensions.
 
 @example
 $ astfits *.fits --keyvalue=NAXIS,NAXIS1,NAXIS2 \
@@ -12079,7 +12080,7 @@ You can read this option as 
column-information-in-standard-output.
 
 Below we will discuss the options that can be used to manipulate keywords.
 To see the full list of keywords in a FITS HDU, you can use the 
@option{--printallkeys} option.
-If any of the keyword modification options below are requested (for example 
@option{--update}), the headers of the input file/HDU will be changed first, 
then printed.
+If any of the keyword modification options below are requested (for example, 
@option{--update}), the headers of the input file/HDU will be changed first, 
then printed.
 Keyword modification is done within the input file.
 Therefore, if you want to keep the original FITS file or HDU intact, it is easiest to create a copy of the file/HDU first and then run Fits on that (for copying a HDU to another file, see @ref{HDU information and manipulation}).
 In the FITS standard, keywords are always uppercase.
@@ -12092,7 +12093,7 @@ Therefore, if a @code{CHECKSUM} is present in the HDU, 
after all the keyword mod
 @end cartouche
 
 Most of the options can accept multiple instances in one command.
-For example you can add multiple keywords to delete by calling 
@option{--delete} multiple times, since repeated keywords are allowed, you can 
even delete the same keyword multiple times.
+For example, you can add multiple keywords to delete by calling @option{--delete} multiple times; since repeated keywords are allowed, you can even delete the same keyword multiple times.
 The action of such options will start from the top most keyword.
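+
+A minimal sketch (assuming HDU 1 of a hypothetical @file{image.fits} contains keywords called @code{KEY1} and @code{KEY2}):
+
+@example
+$ astfits image.fits -h1 --delete=KEY1 --delete=KEY2
+@end example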
 
 The precedence of operations is described below.
@@ -12131,7 +12132,7 @@ If @option{--quitonerror} is called, then Fits will 
immediately stop upon the fi
 @cindex GNU Grep
 If you want to inspect only a certain set of header keywords, it is easiest to 
pipe the output of the Fits program to GNU Grep.
 Grep is a very powerful and advanced tool to search strings which is precisely 
made for such situations.
-For example if you only want to check the size of an image FITS HDU, you can 
run:
+For example, if you only want to check the size of an image FITS HDU, you can run:
 
 @example
 $ astfits input.fits | grep NAXIS
@@ -12140,8 +12141,8 @@ $ astfits input.fits | grep NAXIS
 @cartouche
 @noindent
 @strong{FITS STANDARD KEYWORDS:}
-Some header keywords are necessary for later operations on a FITS file, for 
example BITPIX or NAXIS, see the FITS standard for their full list.
-If you modify (for example remove or rename) such keywords, the FITS file 
extension might not be usable any more.
+Some header keywords are necessary for later operations on a FITS file, for example, BITPIX or NAXIS; see the FITS standard for their full list.
+If you modify (for example, remove or rename) such keywords, the FITS file 
extension might not be usable any more.
 Also be careful with the world coordinate system keywords; if you modify or change their values, any future world coordinate system (like RA and Dec) measurements on the image will also change.
 @end cartouche
 
@@ -12160,7 +12161,7 @@ To stop as soon as an error occurs, run with 
@option{--quitonerror}.
 
 @item -r STR,STR
 @itemx --rename=STR,STR
-Rename a keyword to a new value (for example @option{--rename=OLDNAME,NEWNAME}.
+Rename a keyword to a new value (for example, @option{--rename=OLDNAME,NEWNAME}).
 @option{STR} contains both the existing and new names, which should be 
separated by either a comma (@key{,}) or a space character.
 Note that if you use a space character, you have to put the value to this 
option within double quotation marks (@key{"}) so the space character is not 
interpreted as an option separator.
 Multiple instances of @option{--rename} can be given in one command.
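+
+A minimal sketch (assuming HDU 1 of a hypothetical @file{image.fits} contains a keyword called @code{OLDNAME}):
+
+@example
+$ astfits image.fits -h1 --rename=OLDNAME,NEWNAME
+@end example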
@@ -12210,7 +12211,7 @@ The special names (first string) below will cause a 
special behavior:
 Write a ``title'' to the list of keywords.
 A title consists of one blank line and another which is blank for several 
spaces and starts with a slash (@key{/}).
 The second string given to this option is the ``title'' or string printed 
after the slash.
-For example with the command below you can add a ``title'' of `My keywords' 
after the existing keywords and add the subsequent @code{K1} and @code{K2} 
keywords under it (note that keyword names are not case sensitive).
+For example, with the command below, you can add a ``title'' of `My keywords' after the existing keywords and add the subsequent @code{K1} and @code{K2} keywords under it (note that keyword names are not case sensitive).
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords" \
@@ -12232,7 +12233,7 @@ The reason you need to use @key{/} as the keyword name 
for setting a title is th
 
 The title(s) is(are) written into the FITS with the same order that 
@option{--write} is called.
 Therefore in one run of the Fits program, you can specify many different 
titles (with their own keywords under them).
-For example the command below that builds on the previous example and adds 
another group of keywords named @code{A1} and @code{A2}.
+For example, the command below builds on the previous example and adds another group of keywords named @code{A1} and @code{A2}.
 
 @example
 $ astfits test.fits -h1 --write=/,"My keywords"   \
@@ -12415,14 +12416,14 @@ By default if an error occurs, Fits will warn the 
user of the faulty keyword and
 @cindex Epoch, Unix time
 Interpret the value of the given keyword in the FITS date format (most 
generally: @code{YYYY-MM-DDThh:mm:ss.ddd...}) and return the corresponding Unix 
epoch time (number of seconds that have passed since 00:00:00 Thursday, January 
1st, 1970).
 The @code{Thh:mm:ss.ddd...} section (specifying the time of day), and also the 
@code{.ddd...} (specifying the fraction of a second) are optional.
-The value to this option must be the FITS keyword name that contains the 
requested date, for example @option{--datetosec=DATE-OBS}.
+The value to this option must be the FITS keyword name that contains the 
requested date, for example, @option{--datetosec=DATE-OBS}.
 
 @cindex GNU C Library
 This option can also interpret the older FITS date format 
(@code{DD/MM/YYThh:mm:ss.ddd...}) where only two characters are given to the 
year.
 In this case (following the GNU C Library), this option will make the following assumption: values 69 to 99 correspond to the years 1969 to 1999, and values 0 to 68 to the years 2000 to 2068.
 
-This is a very useful option for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
-The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
+This is a very useful option for operations on the FITS date values, for 
example, sorting FITS files by their dates, or finding the time difference 
between two FITS files.
+The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example, the number of days in different 
months, or leap years, etc).
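+
+For instance (a sketch, assuming HDU 1 of a hypothetical @file{image.fits} contains a @code{DATE-OBS} keyword), the corresponding Unix epoch time can be printed with:
+
+@example
+$ astfits image.fits -h1 --datetosec=DATE-OBS
+@end example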
 
 @item --wcscoordsys=STR
 @cindex Galactic coordinate system
@@ -12435,7 +12436,7 @@ The advantage of working with the Unix epoch time is 
that you do not have to wor
 @cindex Coordinate system: Supergalactic
 Convert the coordinate system of the image's world coordinate system (WCS) to 
the given coordinate system (@code{STR}) and write it into the file given to 
@option{--output} (or an automatically named file if no @option{--output} has 
been given).
 
-For example with the command below, @file{img-eq.fits} will have an identical 
dataset (pixel values) as @file{image.fits}.
+For example, with the command below, @file{img-eq.fits} will have an identical dataset (pixel values) as @file{image.fits}.
 However, the WCS coordinate system of @file{img-eq.fits} will be the 
equatorial coordinate system in the Julian calendar epoch 2000 (which is the 
most common epoch used today).
 Fits will automatically extract the current coordinate system of 
@file{image.fits} and as long as it is one of the recognized coordinate systems 
listed below, it will do the conversion.
 
@@ -12468,7 +12469,7 @@ For more on their difference and links for further 
reading about epochs in astro
 @cindex Distortion, WCS
 @cindex SIP WCS distortion
 @cindex TPV WCS distortion
-If the argument has a WCS distortion, the output (file given with the 
@option{--output} option) will have the distortion given to this option (for 
example @code{SIP}, @code{TPV}).
+If the argument has a WCS distortion, the output (file given with the 
@option{--output} option) will have the distortion given to this option (for 
example, @code{SIP}, @code{TPV}).
 The output will be a new file (with a copy of the image, and the new WCS), so if it already exists, the file will be deleted (unless you use the @code{--dontdelete} option, see @ref{Input output options}).
 
 With this option, the FITS program will read the minimal set of keywords from the input HDU and the HDU data; it will then write them into the file given to the @option{--output} option, but with a newly created set of WCS-related keywords corresponding to the desired distortion standard.
@@ -12480,7 +12481,7 @@ Note that all possible conversions between all 
standards are not yet supported.
 If the requested conversion is not supported, an informative error message 
will be printed.
 If this happens, please let us know and we will try our best to add the 
respective conversions.
 
-For example with the command below, you can be sure that if @file{in.fits} has 
a distortion in its WCS, the distortion of @file{out.fits} will be in the SIP 
standard.
+For example, with the command below, you can be sure that if @file{in.fits} has a distortion in its WCS, the distortion of @file{out.fits} will be in the SIP standard.
 
 @example
 $ astfits in.fits --wcsdistortion=SIP --output=out.fits
@@ -12528,7 +12529,7 @@ Basically, other than EPS/PDF, you can use any of the 
recognized formats as diff
 
 Before explaining the options and arguments (in @ref{Invoking astconvertt}), 
we will start with a short discussion on the difference between raster and 
vector graphics in @ref{Raster and Vector graphics}.
 In ConvertType, vector graphics are used to add markers over your originally 
rasterized data, producing high quality images, ready to be used in your 
exciting papers.
-We Will continue with a description of the recognized files types in 
@ref{Recognized file formats}, followed a short introduction to digital color 
in @ref{Color}.
+We will continue with a description of the recognized file types in @ref{Recognized file formats}, followed by a short introduction to digital color in @ref{Color}.
 A tutorial on how to add markers over an image is then given in @ref{Marking 
objects for publication} and we conclude with a @LaTeX{} based solution to add 
coordinates over an image.
 
 @menu
@@ -12545,19 +12546,19 @@ A tutorial on how to add markers over an image is 
then given in @ref{Marking obj
 
 @cindex Raster graphics
 @cindex Graphics (raster)
-Images that are produced by a hardware (for example the camera in your phone, 
or the camera connected to a telescope) provide pixelated data.
+Images that are produced by hardware (for example, the camera in your phone, or the camera connected to a telescope) provide pixelated data.
 Such data are therefore stored in a @url{https://en.wikipedia.org/wiki/Raster_graphics, Raster graphics} format which has discrete, independent, equally spaced data elements.
-For example this is the format used FITS (see @ref{Fits}), JPEG, TIFF, PNG and 
other image formats.
+For example, this is the format used by FITS (see @ref{Fits}), JPEG, TIFF, PNG and other image formats.
 
 @cindex Vector graphics
 @cindex Graphics (vector)
-On the other hand, when something is generated by the computer (for example a 
diagram, plot or even adding a cross over a camera image to highlight something 
there), there is no ``observation'' or connection with nature!
+On the other hand, when something is generated by the computer (for example, a 
diagram, plot or even adding a cross over a camera image to highlight something 
there), there is no ``observation'' or connection with nature!
 Everything is abstract!
 For such things, it is much easier to draw a mathematical line (with infinite 
resolution).
 Therefore, no matter how much you zoom-in, it will never get pixelated.
 This is the realm of @url{https://en.wikipedia.org/wiki/Vector_graphics, 
Vector graphics}.
 If you open the Gnuastro manual in @url{https://www.gnu.org/software/gnuastro/manual/gnuastro.pdf, PDF format}, you can see such graphics,
-For example in @ref{Circles and the complex plane} or @ref{Distance on a 2D 
curved space}.
+for example, in @ref{Circles and the complex plane} or @ref{Distance on a 2D 
curved space}.
 The most common vector graphics format is PDF for document sharing or SVG for 
web-based applications.
 
 The pixels of a raster image can be shown as vector-based squares with 
different shades, so vector graphics can generally also support raster graphics.
@@ -12612,7 +12613,7 @@ However, unlike FITS, it can only store images, it has 
no constructs for tables.
 Also unlike FITS, each `directory' of a TIFF file can have a multi-channel 
(e.g., RGB) image.
 Another (inconvenient) difference with the FITS standard is that keyword names 
are stored as numbers, not human-readable text.
 
-However, outside of astronomy, because of its support of different numeric 
data types, many fields use TIFF images for accurate (for example 16-bit 
integer or floating point for example) imaging data.
+However, outside of astronomy, because of its support of different numeric data types, many fields use TIFF images for accurate (for example, 16-bit integer or floating point) imaging data.
 
 Currently ConvertType can only read TIFF images; if you are interested in writing TIFF images, please get in touch with us.
@@ -12624,7 +12625,7 @@ writing TIFF images, please get in touch with us.
 @cindex Encapsulated PostScript
 The Encapsulated PostScript (EPS) format is essentially a one page PostScript 
file which has a specified size.
 Postscript is used to store a full document like this whole Gnuastro book.
-PostScript therefore also includes non-image data, for example lines and texts.
+PostScript therefore also includes non-image data, for example, lines and text.
 It is a fully functional programming language to describe a document.
 A PostScript file is a plain text file that can be edited like any program 
source with any plain-text editor.
 Therefore in ConvertType, EPS is only an output format and cannot be used as 
input.
@@ -12717,7 +12718,7 @@ To print to the standard output, set the output name to 
`@file{stdout}'.
 @cindex Channel (color)
 Color is generally defined after mixing various data ``channels''.
 The values for each channel usually come from a filter that is placed in the optical path.
-Filters, only allow a certrain window of the spectrum to pass (for example the 
SDSS @emph{r} filter only alows light from about 5500 to 7000 Angstroms).
+Filters only allow a certain window of the spectrum to pass (for example, the SDSS @emph{r} filter only allows light from about 5500 to 7000 Angstroms).
 In digital monitors or common digital cameras, a different set of filters are 
used: Red, Green and Blue (commonly known as RGB) that are more optimized to 
the eye's perception.
 On the other hand, when printing on paper, standard printers use the cyan, 
magenta, yellow and key (CMYK, key=black) color space.
 
@@ -12755,7 +12756,7 @@ The reason we say ``false-color'' (or pseudo color) is 
that generally, the three
 So the observed color on your monitor does not correspond to the physical ``color'' that you would have seen if you looked at the object by eye.
 Nevertheless, it is good (and sometimes necessary) for visualization (of 
special features).
 
-In ConvertType, you can do this by giving each separate single-channel dataset 
(for example in the FITS image format) as an argument (in the proper order), 
then asking for the output in a format that supports multi-channel datasets 
(for example see the command below, or @ref{ConvertType input and output}).
+In ConvertType, you can do this by giving each separate single-channel dataset 
(for example, in the FITS image format) as an argument (in the proper order), 
then asking for the output in a format that supports multi-channel datasets 
(for example, see the command below, or @ref{ConvertType input and output}).
 
 @example
 $ astconvertt r.fits g.fits b.fits --output=color.jpg
@@ -12795,7 +12796,7 @@ The increased usage of the monitor's 3-channel color 
space is indeed better, but
 Since grayscale is a commonly used mapping of single-valued datasets, we will 
continue with a closer look at how it is stored.
 One way to represent a gray-scale image in different color spaces is to use 
the same proportions of the primary colors in each pixel.
 This is the common way most FITS image viewers work: for each pixel, they fill 
all the channels with the single value.
-While this is necessary for displaying a dataset, there are downsides when 
storing/saving this type of grayscale visualization (for example in a paper).
+While this is necessary for displaying a dataset, there are downsides when 
storing/saving this type of grayscale visualization (for example, in a paper).
 
 @itemize
 
@@ -12825,7 +12826,7 @@ Therefore a Grayscale image and a CMYK image that has 
only the K-channel filled
 @cindex Colors (web)
 When creating vector graphics, ConvertType recognizes the 
@url{https://en.wikipedia.org/wiki/Web_colors#Extended_colors, extended web 
colors} that are the result of merging the colors in the HTML 4.01, CSS 2.0, 
SVG 1.0 and CSS3 standards.
 They are all shown with their standard name in @ref{colornames}.
-The names are not case sensitive so you can use them in any form (for example 
@code{turquoise} is the same as @code{Turquoise} or @code{TURQUOISE}).
+The names are not case sensitive so you can use them in any form (for example, 
@code{turquoise} is the same as @code{Turquoise} or @code{TURQUOISE}).
 
 @cindex 24-bit terminal
 @cindex True color terminal
@@ -12917,7 +12918,7 @@ Examples include:
 @item
 Coordinates (Right Ascension and Declination) on the edges of the image, so 
viewers of your paper or presentation slides can get a physical feeling of the 
field's sky coverage.
 @item
-Thick line that has a fixed tangential size (for example in kilo parsecs) at 
the redshift/distance of interest.
+Thick line that has a fixed tangential size (for example, in kilo parsecs) at 
the redshift/distance of interest.
 @item
 Contours over the image to show radio/X-ray emission, over an optical image 
for example.
 @item
@@ -12988,7 +12989,7 @@ Can the surface brightness limits be changed to better 
show the LSB structure?
 If so, you are free to change the limits above.
 
 We now have the printable PDF representation of the image, but as discussed 
above, it is not enough for a paper.
-We Will add 1) a thick line showing the size of 20 kpc (kilo parsecs) at the 
redshift of the central galaxy, 2) coordinates and 3) a color bar, showing the 
surface brightness level of each grayscale level.
+We will add 1) a thick line showing the size of 20 kpc (kilo parsecs) at the 
redshift of the central galaxy, 2) coordinates and 3) a color bar, showing the 
surface brightness level of each grayscale level.
 
 To get the first job done, we first need to know the redshift of the central 
galaxy.
 To do this, we can use Gnuastro's Query program to look into all the objects 
in NED within this image (only asking for the RA, Dec and redshift columns).
@@ -13062,7 +13063,7 @@ $ printf '\\newcommand@{\\maCropDecMax@}'"@{$v@}\n" >> 
$macros
 @end example
 
 Finally, we also need to pass some other numbers to PGFPlots: 1) the major 
tick distance (in the coordinate axes that will be printed on the edge of the 
image).
-We Will assume 7 ticks for this image.
+We will assume 7 ticks for this image.
 2) The minimum and maximum surface brightness values that we gave to 
ConvertType when making the PDF; PGFPlots will define its color-bar based on 
these two values.
 
 @example
@@ -13376,7 +13377,7 @@ After reading the input file(s) the number of color 
channels in all the inputs w
 For more on pixel color channels, see @ref{Pixel colors}.
 Depending on the format of the input(s), the number of input files can differ.
 
-For example if you plan to build an RGB PDF and your three channels are in the 
first HDU of @file{r.fits}, @file{g.fits} and @file{b.fits}, then you can 
simply call MakeProfiles like this:
+For example, if you plan to build an RGB PDF and your three channels are in the first HDU of @file{r.fits}, @file{g.fits} and @file{b.fits}, then you can simply call ConvertType like this:
 
 @example
 $ astconvertt r.fits g.fits b.fits -g1 --output=rgb.pdf
@@ -13397,7 +13398,7 @@ $ astconvertt image.jpg --output=rgb.pdf
 @end example
 
 @noindent
-If multiple channels are given as input, and the output format does not 
support multiple color channels (for example FITS), ConvertType will put the 
channels in different HDUs, like the example below.
+If multiple channels are given as input, and the output format does not 
support multiple color channels (for example, FITS), ConvertType will put the 
channels in different HDUs, like the example below.
 After running the @command{astfits} command, if your JPEG file was not 
grayscale (single channel), you will see multiple HDUs in @file{channels.fits}.
 
 @example
@@ -13416,7 +13417,7 @@ Finally, if there are four color channels they will be 
cyan, magenta, yellow and
 The value to @option{--output} (or @option{-o}) can be either a full file name 
or just the suffix of the desired output format.
 In the former case (full name), it will be directly used for the output's file 
name.
 In the latter case, the name of the output file will be set based on the 
automatic output guidelines, see @ref{Automatic output}.
-Note that the suffix name can optionally start with a @file{.} (dot), so for 
example @option{--output=.jpg} and @option{--output=jpg} are equivalent.
+Note that the suffix name can optionally start with a @file{.} (dot), so for 
example, @option{--output=.jpg} and @option{--output=jpg} are equivalent.
 See @ref{Recognized file formats}.
 
 The relevant options for input/output formats are described below:
@@ -13427,7 +13428,7 @@ The relevant options for input/output formats are 
described below:
 Input HDU name or counter (counting from 0) for each input FITS file.
 If the same HDU should be used from all the FITS files, you can use the 
@option{--globalhdu} option described below.
 In ConvertType, it is possible to call the HDU option multiple times for the 
different input FITS or TIFF files in the same order that they are called on 
the command-line.
-Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may 
contain multiple color channels (for example when the image is in RGB).
+Note that in the TIFF standard, one `directory' (similar to a FITS HDU) may 
contain multiple color channels (for example, when the image is in RGB).
 
 Except for the fact that multiple calls are possible, this option is identical 
to the common @option{--hdu} in @ref{Input output options}.
 The number of calls to this option cannot be less than the number of input 
FITS or TIFF files, but if there are more, the extra HDUs will be ignored, note 
that they will be read in the order described in @ref{Configuration file 
precedence}.
@@ -13455,7 +13456,7 @@ By default the ASCII85 encoding is used which provides 
a much better compression
 When converted to PDF (or included in @TeX{} or @LaTeX{} which is finally 
saved as a PDF file), an efficient binary encoding is used which is far more 
efficient than both of them.
 The choice of EPS encoding will thus have no effect on the final PDF.
 
-So if you want to transfer your EPS files (for example if you want to submit 
your paper to arXiv or journals in PostScript), their storage might become 
important if you have large images or lots of small ones.
+So if you want to transfer your EPS files (for example, if you want to submit 
your paper to arXiv or journals in PostScript), their storage might become 
important if you have large images or lots of small ones.
 By default ASCII85 encoding is used which offers a much better compression 
ratio (nearly 40 percent) compared to Hexadecimal encoding.
 
 @item -u INT
@@ -13482,7 +13483,7 @@ Astronomical data usually have a very large dynamic 
range (difference between ma
 The color map to visualize a single channel.
 The first value given to this option is the name of the color map, which is 
shown below.
 Some color maps can be configured.
-In this case, the configuration parameters are optionally given as numbers 
following the name of the color map for example see @option{hsv}.
+In this case, the configuration parameters are optionally given as numbers following the name of the color map; for example, see @option{hsv}.
 The table below contains the usable names of the color maps that are currently 
supported:
 
 @table @option
@@ -13499,7 +13500,7 @@ The full dataset range will be scaled to 0 and 
@mymath{2^8-1=255} to be stored i
 @cindex HSV: Hue Saturation Value
 Hue, Saturation, 
Value@footnote{@url{https://en.wikipedia.org/wiki/HSL_and_HSV}} color map.
 If no values are given after the name (@option{--colormap=hsv}), the dataset 
will be scaled to 0 and 360 for hue covering the full spectrum of colors.
-However, you can limit the range of hue (to show only a special color range) 
by explicitly requesting them after the name (for example 
@option{--colormap=hsv,20,240}).
+However, you can limit the range of hue (to show only a special color range) 
by explicitly requesting them after the name (for example, 
@option{--colormap=hsv,20,240}).
 
 The mapping of a single-channel dataset to HSV is done through the Hue and 
Value elements: Lower dataset elements have lower ``value'' @emph{and} lower 
``hue''.
 This creates darker colors for fainter parts, while also respecting the range 
of colors.
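+
+For instance (a sketch, where @file{image.fits} is a hypothetical single-channel input), the limited-hue example above could be written as:
+
+@example
+$ astconvertt image.fits -h1 --colormap=hsv,20,240 \
+              --output=hsv.jpg
+@end example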
@@ -13529,7 +13530,7 @@ While SLS is good for visualizing on the monitor, 
SLS-inverse is good for printi
 When there are three input channels and the output is in the FITS format, 
interpret the three input channels as red, green and blue channels (RGB) and 
convert them to the hue, saturation, value (HSV) color space.
 
 The currently supported output formats of ConvertType do not have native 
support for HSV.
-Therefore this option is only supported when the output is in FITS format and 
each of the hue, saturation and value arrays can be saved as one FITS extension 
in the output for further analysis (for example to select a certain color).
+Therefore this option is only supported when the output is in FITS format and 
each of the hue, saturation and value arrays can be saved as one FITS extension 
in the output for further analysis (for example, to select a certain color).
 
 @item -c STR
 @itemx --change=STR
@@ -13542,7 +13543,7 @@ The labels in the images will be changed in the same 
order given.
 By default, the pixel values will first be converted and then truncated (see @option{--fluxlow} and @option{--fluxhigh}).
 
 You can use any number for the values irrespective of your final output, your 
given values are stored and used in the double precision floating point format.
-So for example if your input image has labels from 1 to 20000 and you only 
want to display those with labels 957 and 11342 then you can run ConvertType 
with these options:
+So, for example, if your input image has labels from 1 to 20000 and you only want to display those with labels 957 and 11342, then you can run ConvertType with these options:
 
 @example
 $ astconvertt --change=957:50000,11342:50001 --fluxlow=5e4 \
@@ -13605,7 +13606,7 @@ Note that this behavior is ideal for gray-scale images, 
if you want a color imag
 @node Drawing with vector graphics,  , Pixel visualization, Invoking 
astconvertt
 @subsubsection Drawing with vector graphics
 
-With the options described in this section, you can draw marks over your 
to-be-published images (for example in PDF).
+With the options described in this section, you can draw marks over your 
to-be-published images (for example, in PDF).
 Each mark can be highly customized, so different marks can have different shapes, colors, line widths, text, text sizes, etc.
 The properties of the marks should be stored in a table that is given to the 
@option{--marks} option described below.
 A fully working demo on adding marks is provided in @ref{Marking objects for 
publication}.
@@ -13637,7 +13638,7 @@ See the description at the start of this section for 
more.
 Unfortunately in the document structuring convention of the PostScript 
language, the ``bounding box'' has to be in units of PostScript points with no 
fractions allowed.
 So the border values only have to be specified in integers.
 To have a final border that is thinner than one PostScript point in your 
document, you can ask for a larger width in ConvertType and then scale down the 
output EPS or PDF file in your document preparation program.
-For example by setting @command{width} in your @command{includegraphics} 
command in @TeX{} or @LaTeX{} to be larger than the value to 
@option{--widthincm}.
+For example, by setting the @command{width} in your @command{includegraphics} command in @TeX{} or @LaTeX{} to be smaller than the value given to @option{--widthincm}.
 Since it is vector graphics, the changes of size have no effect on the quality 
of your output (pixels do not get different values).
 
 @item --bordercolor=STR
@@ -13667,7 +13668,7 @@ $ astconvertt image.fits --output=image.pdf \
               --markcoords=RA,DEC
 @end example
 
-You can highly customize each mark with different columns in @file{marks.fits} 
using the @option{--mark*} options below (for example using different colors, 
different shapes, different sizes, text and etc on each mark).
+You can highly customize each mark with different columns in @file{marks.fits} using the @option{--mark*} options below (for example, using different colors, different shapes, different sizes, text, etc. on each mark).
 
 @item --markshdu=STR/INT
 The HDU (or extension) name or number of the table containing mark properties 
(file given to @option{--marks}).
@@ -13822,9 +13823,9 @@ iTerm2 is described as a successor for iTerm and works 
on macOS 10.14 (released
 
 @item --marktext=STR/INT
 Column name or number that contains the text that should be printed under the 
mark.
-If the column is numeric, the number will be printed under the mark (for 
example if you want to write teh magnitude or redshift of the object under the 
mark showing it).
+If the column is numeric, the number will be printed under the mark (for example, if you want to write the magnitude or redshift of the object under the mark showing it).
 For the precision of writing floating point columns, see 
@option{--marktextprecision}.
-But if the column has a string format (for example the name of the object like 
an NGC1234), you need to define the column as a string column (see 
@ref{Gnuastro text table format}).
+But if the column has a string format (for example, the name of the object, like NGC1234), you need to define the column as a string column (see @ref{Gnuastro text table format}).
 
 For text with different lengths, set the length in the definition of the 
column to the maximum length of the strings to be printed.
 If there are some rows or marks that don't require text, set the string in 
this column to @option{n/a} (not applicable; the blank value for strings in 
Gnuastro).
@@ -13883,7 +13884,7 @@ If you are not already familiar with the shape of each 
font, please use @option{
 @section Table
 
 Tables are the high-level products of processing lower-level data like images or spectra.
-For example in Gnuastro, MakeCatalog will process the pixels over an object 
and produce a catalog (or table) with the properties of each object like 
magnitudes, positions and etc (see @ref{MakeCatalog}).
+For example, in Gnuastro, MakeCatalog will process the pixels over an object and produce a catalog (or table) with the properties of each object, like magnitudes, positions, etc. (see @ref{MakeCatalog}).
 Each one of these properties is a column in its output catalog (or table) and 
for each input object, we have a row.
 
 When there are only a small number of objects (rows) and not too many properties (columns), a simple plain-text file is usually enough to store, transfer, or even use the produced data.
@@ -13935,7 +13936,7 @@ To identify a column you can directly use its name, or 
specify its number (count
 When you are giving a column number, it is necessary to prefix the number with 
a @code{$}, similar to AWK.
 Otherwise the number is not distinguishable from a constant number to use in 
the arithmetic operation.
 
-For example with the command below, the first two columns of @file{table.fits} 
will be printed along with a third column that is the result of multiplying the 
first column with @mymath{10^{10}} (for example to convert wavelength from 
Meters to Angstroms).
+For example, with the command below, the first two columns of @file{table.fits} will be printed along with a third column that is the result of multiplying the first column by @mymath{10^{10}} (for example, to convert wavelength from meters to Angstroms).
 Note that without the `@key{$}', it is not possible to distinguish between 
``1'' as a column-counter, or ``1'' as a constant number to use in the 
arithmetic operation.
 Also note that because of the significance of @key{$} for the command-line 
environment, the single-quotes are the recommended quoting method (as in an AWK 
expression), not double-quotes (for the significance of using single quotes see 
the box below).
 
@@ -13945,9 +13946,9 @@ $ asttable table.fits -c1,2 -c'arith $1 1e10 x'
 
 @cartouche
 @noindent
-@strong{Single quotes when string contains @key{$}}: On the command-line, or 
in shell-scripts, @key{$} is used to expand variables, for example @code{echo 
$PATH} prints the value (a string of characters) in the variable @code{PATH}, 
it will not simply print @code{$PATH}.
+@strong{Single quotes when string contains @key{$}}: On the command-line, or in shell-scripts, @key{$} is used to expand variables; for example, @code{echo $PATH} prints the value (a string of characters) in the variable @code{PATH}; it will not simply print @code{$PATH}.
 This operation is also permitted within double quotes, so @code{echo "$PATH"} 
will produce the same output.
-This is good when printing values, for example in the command below, 
@code{$PATH} will expand to the value within it.
+This is good when printing values; for example, in the command below, @code{$PATH} will expand to the value within it.
 
 @example
 $ echo "My path is: $PATH"
@@ -13977,7 +13978,7 @@ It is also independent of the low-level table 
structure: for the second command,
 
 Column arithmetic changes the values of the data within the column.
 So the old column metadata cannot be used any more.
-By default the output column of the arithmetic operation will be given a 
generic metadata (for example its name will be @code{ARITH_1}, which is hardly 
useful!).
+By default the output column of the arithmetic operation will be given generic metadata (for example, its name will be @code{ARITH_1}, which is hardly useful!).
 But metadata are critically important and it is good practice to always have short but descriptive names for each column, along with units and some comments for more explanation.
 To add metadata to a column, you can use the @option{--colmetadata} option 
that is described in @ref{Invoking asttable} and @ref{Operation precedence in 
Table}.
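 
 As a sketch of doing both in one run (the output column name, unit and comment here are hypothetical):
 
 @example
 $ asttable table.fits -c1,2 -c'arith $1 1e10 x' \
            --colmetadata=3,WAVE_ANG,angstrom,"Wavelength in Angstroms"
 @end example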
 
@@ -14010,7 +14011,7 @@ Convert the given WCS positions to image/dataset 
coordinates based on the number
 It will output the same number of columns.
 The first popped operand is the last FITS dimension.
 
-For example the two commands below (which have the same output) will produce 5 
columns.
+For example, the two commands below (which have the same output) will produce 5 columns.
 The first three columns are the input table's ID, RA and Dec columns.
 The fourth and fifth columns will be the pixel positions in @file{image.fits} 
that correspond to each RA and Dec.
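 
 A sketch of one such call (assuming the input table has @code{ID}, @code{RA} and @code{DEC} columns, and the WCS is read from @file{image.fits} through the @option{--wcsfile} option):
 
 @example
 $ asttable table.fits -cID,RA,DEC \
            -c'arith RA DEC wcs-to-img' --wcsfile=image.fits
 @end example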
 
@@ -14093,7 +14094,7 @@ For more details please see the @option{ra-to-degree} 
operator.
 @cindex Right Ascension
 Convert degrees (a column with a single floating point number) to the Right 
Ascension, RA, string (in the sexagesimal format hours, minutes and seconds, 
written as @code{_h_m_s}).
 The output will be a string column so no further mathematical operations can 
be done on it.
-The output file can be in any format (for example FITS or plain-text).
+The output file can be in any format (for example, FITS or plain-text).
 If it is plain-text, the string column will be written following the standards 
described in @ref{Gnuastro text table format}.
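 
 For instance, a sketch of converting a (hypothetical) @code{RA} column in degrees to sexagesimal strings:
 
 @example
 $ asttable table.fits -c'arith RA degree-to-ra'
 @end example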
 
 @item degree-to-dec
@@ -14430,7 +14431,7 @@ By default all the columns of the given file will be 
appended, if you only want
 Note that the columns given to @option{--catcolumns} must be present in all 
the given files (if this option is called more than once with more than one 
file).
 
 If the file given to this option is a FITS file, it is necessary to also 
define the corresponding HDU/extension with @option{--catcolumnhdu}.
-Also note that no operation (for example row selection, arithmetic or etc) is 
applied to the table given to this option.
+Also note that no operation (for example, row selection or arithmetic) is applied to the table given to this option.
 
 If the appended columns have a name, and their name is already present in the 
table before adding those columns, the column names of each file will be 
appended with a @code{-N}, where @code{N} is a counter starting from 1 for each 
appended table.
 Just note that in the FITS standard (and thus in Gnuastro), column names are 
not case-sensitive.
@@ -14516,7 +14517,7 @@ But when piping to other Gnuastro programs (where 
metadata can be interpreted an
 @itemx --range=STR,FLT:FLT
 Only output rows that have a value within the given range in the @code{STR} 
column (can be a name or counter).
 Note that the range is inclusive only in the lower limit.
-For example with @code{--range=sn,5:20} the output's columns will only contain 
rows that have a value in the @code{sn} column (not case-sensitive) that is 
greater or equal to 5, and less than 20.
+For example, with @code{--range=sn,5:20} the output's columns will only contain rows that have a value in the @code{sn} column (not case-sensitive) that is greater than or equal to 5, and less than 20.
 You can also use a comma to separate the values, for example @code{--range=sn,5,20}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
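 
 For instance, the example above would be run like this (assuming @file{table.fits} has an @code{sn} column):
 
 @example
 $ asttable table.fits --range=sn,5:20
 @end example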
 
@@ -14535,7 +14536,7 @@ The coordinate columns are the given @code{STR1} and 
@code{STR2} columns, they c
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
 Note that the chosen columns do not have to be in the output columns (which are specified by the @code{--column} option).
-For example if we want to select rows in the polygon specified in @ref{Dataset 
inspection and cropping}, this option can be used like this (you can remove the 
double quotations and write them all in one line if you remove the white-spaces 
around the colon separating the column vertices):
+For example, if we want to select rows in the polygon specified in @ref{Dataset inspection and cropping}, this option can be used like this (you can remove the double quotes and write it all on one line if you remove the white-spaces around the colon separating the column vertices):
 
 @example
 asttable table.fits --inpolygon=RA,DEC      \
@@ -14567,7 +14568,7 @@ Only output rows that are equal to the given number(s) 
in the given column.
 The first argument is the column identifier (name or number, see 
@ref{Selecting table columns}), after that you can specify any number of values.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
-For example @option{--equal=ID,5,6,8} will only print the rows that have a 
value of 5, 6, or 8 in the @code{ID} column.
+For example, @option{--equal=ID,5,6,8} will only print the rows that have a value of 5, 6, or 8 in the @code{ID} column.
 This option can also be called multiple times, so @option{--equal=ID,4,5 --equal=ID,6,7} has the same effect as @option{--equal=ID,4,5,6,7}.
 
 @cartouche
@@ -14581,7 +14582,7 @@ The @option{--equal} and @option{--notequal} options 
also work when the given co
 In this case the given value to the option will also be parsed as a string, 
not as a number.
 When dealing with string columns, be careful with trailing white space characters (the actual value may be adjusted to the right, left, or center of the column's width).
 If you need to account for such white spaces, you can use shell quoting.
-For example @code{--equal=NAME,"  myname "}.
+For example, @code{--equal=NAME,"  myname "}.
 
 @cartouche
 @noindent
@@ -14598,7 +14599,7 @@ $ asttable table.fits --equal=AB,cd\,ef
 @itemx --notequal=STR,INT/FLT,...
 Only output rows that are @emph{not} equal to the given number(s) in the given 
column.
 The first argument is the column identifier (name or number, see 
@ref{Selecting table columns}), after that you can specify any number of values.
-For example @option{--notequal=ID,5,6,8} will only print the rows where the 
@code{ID} column does not have value of 5, 6, or 8.
+For example, @option{--notequal=ID,5,6,8} will only print the rows where the @code{ID} column does not have a value of 5, 6, or 8.
 This option can also be called multiple times, so @option{--notequal=ID,4,5 --notequal=ID,6,7} has the same effect as @option{--notequal=ID,4,5,6,7}.
 
 Be very careful if you want to use the non-equality with floating point 
numbers, see the special note under @option{--equal} for more.
@@ -14611,12 +14612,12 @@ Like above, the columns can be specified by their 
name or number (counting from
 This option can be called multiple times, so @option{--noblank=MAG 
--noblank=PHOTOZ} is equivalent to @option{--noblank=MAG,PHOTOZ}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
-For example if @file{table.fits} has blank values (NaN in floating point 
types) in the @code{magnitude} and @code{sn} columns, with 
@code{--noblank=magnitude,sn}, the output will not contain any rows with blank 
values in these two columns.
+For example, if @file{table.fits} has blank values (NaN in floating point types) in the @code{magnitude} and @code{sn} columns, with @code{--noblank=magnitude,sn}, the output will not contain any rows with blank values in these two columns.
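 
 In other words (keeping the same hypothetical column names):
 
 @example
 $ asttable table.fits --noblank=magnitude,sn
 @end example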
 
 If you want @emph{all} columns to be checked, simply set the value to 
@code{_all} (in other words: @option{--noblank=_all}).
 This mode is useful when there are many columns in the table and you want a ``clean'' output table (with no blank values in any column): entering their name or number one-by-one can be error-prone and frustrating.
 In this mode, no other column name should be given.
-For example if you give @option{--noblank=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it does not, it will abort with an error.
+For example, if you give @option{--noblank=_all,magnitude}, then Table will assume that your table actually has columns named @code{_all} and @code{magnitude}, and if it does not, it will abort with an error.
 
 If you want to change column values using @ref{Column arithmetic} (and set 
some to blank, to later remove), or you want to select rows based on columns 
that you have imported from other tables, you should use the 
@option{--noblankend} option described below.
 Also, see @ref{Operation precedence in Table}.
@@ -14638,7 +14639,7 @@ When called with @option{--sort}, rows will be sorted 
in descending order.
 @itemx --head=INT
 Only print the given number of rows from the @emph{top} of the final table.
 Note that this option only affects the @emph{output} table.
-For example if you use @option{--sort}, or @option{--range}, the printed rows 
are the first @emph{after} applying the sort sorting, or selecting a range of 
the full input.
+For example, if you use @option{--sort} or @option{--range}, the printed rows are the first @emph{after} applying the sort, or selecting a range of the full input.
 This option cannot be called with @option{--tail}, @option{--rowrange} or 
@option{--rowrandom}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
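 
 For instance, to print only the five rows with the smallest values of a (hypothetical) @code{MAG} column, you can combine it with @option{--sort}:
 
 @example
 $ asttable table.fits --sort=MAG --head=5
 @end example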
 
@@ -14693,7 +14694,7 @@ Like above, the columns can be specified by their name 
or number (counting from
 This option can be called multiple times, so @option{--noblankend=MAG --noblankend=PHOTOZ} is equivalent to @option{--noblankend=MAG,PHOTOZ}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
-For example if your final output table (possibly after column arithmetic, or 
adding new columns) has blank values (NaN in floating point types) in the 
@code{magnitude} and @code{sn} columns, with @code{--noblankend=magnitude,sn}, 
the output will not contain any rows with blank values in these two columns.
+For example, if your final output table (possibly after column arithmetic, or adding new columns) has blank values (NaN in floating point types) in the @code{magnitude} and @code{sn} columns, with @code{--noblankend=magnitude,sn}, the output will not contain any rows with blank values in these two columns.
 
 If you want blank values to be removed from the main input table _before_ any 
further processing (like adding columns, sorting or column arithmetic), you 
should use the @option{--noblank} option.
 With the @option{--noblank} option, the given column(s) do not have to be in the output (they are just temporarily used for reading the inputs and selecting rows).
@@ -14702,7 +14703,7 @@ However, the column(s) given to this option should 
exist in the output.
 If you want @emph{all} columns to be checked, simply set the value to 
@code{_all} (in other words: @option{--noblankend=_all}).
 This mode is useful when there are many columns in the table and you want a ``clean'' output table (with no blank values in any column): entering their name or number one-by-one can be error-prone and frustrating.
 In this mode, no other column name should be given.
-For example if you give @option{--noblankend=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it does not, it will abort with an error.
+For example, if you give @option{--noblankend=_all,magnitude}, then Table will assume that your table actually has columns named @code{_all} and @code{magnitude}, and if it does not, it will abort with an error.
 
 This option is applied just before writing the final table (after 
@option{--colmetadata} has finished).
 So in case you changed the column metadata, or added new columns, you can use 
the new names, or the newly defined column numbers.
@@ -14711,7 +14712,7 @@ For the precedence of this operation in relation to 
others, see @ref{Operation p
 @item -m STR/INT,STR[,STR[,STR]]
 @itemx --colmetadata=STR/INT,STR[,STR[,STR]]
 Update the specified column metadata in the output table.
-This option is applied after all other column-related operations are complete, 
for example column arithmetic, or column concatenation.
+This option is applied after all other column-related operations are complete, for example, column arithmetic or column concatenation.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
 The first value (before the first comma) given to this option is the column's 
identifier.
@@ -14723,7 +14724,7 @@ The next (optional) string will be the selected 
column's unit and the third (opt
 If the two optional strings are not given, the original column's units or 
comments will remain unchanged.
 
 If any of the values contains a comma, you should place a `@code{\}' before 
the comma to avoid it getting confused with a delimiter.
-For example see the command below for a column description that contains a 
comma:
+For example, see the command below for a column description that contains a comma:
 
 @example
 $ asttable table.fits \
@@ -14749,10 +14750,10 @@ Note the double quotations around the comment string, 
they are necessary to pres
 @end table
 
 If your table is large and generated by a script, you can first do all your 
operations on your table's data and write it into a temporary file (maybe 
called @file{temp.fits}).
-Then, look into that file's metadata (with @command{asttable temp.fits -i}) to 
see the exact column positions and possible names, then add the necessary calls 
to this option to your previous call to @command{asttable}, so it writes proper 
metadata in the same run (for example in a script or Makefile).
+Then, look into that file's metadata (with @command{asttable temp.fits -i}) to 
see the exact column positions and possible names, then add the necessary calls 
to this option to your previous call to @command{asttable}, so it writes proper 
metadata in the same run (for example, in a script or Makefile).
 Recall that when a name is given, this option will update the metadata of the first column that matches, so if you have multiple columns with the same name, you can call this option multiple times with the same first argument to change them all to different names.
 
-Finally, if you already have a FITS table by other means (for example by 
downloading) and you merely want to update the column metadata and leave the 
data intact, it is much more efficient to directly modify the respective FITS 
header keywords with @code{astfits}, using the keyword manipulation features 
described in @ref{Keyword inspection and manipulation}.
+Finally, if you already have a FITS table by other means (for example, by 
downloading) and you merely want to update the column metadata and leave the 
data intact, it is much more efficient to directly modify the respective FITS 
header keywords with @code{astfits}, using the keyword manipulation features 
described in @ref{Keyword inspection and manipulation}.
 @option{--colmetadata} is mainly intended for scenarios where you want to edit 
the data so it will always load the full/partial dataset into memory, then 
write out the resulting datasets with updated/corrected metadata.
 @end table
 
@@ -14785,7 +14786,7 @@ Finally, if you already have a FITS table by other 
means (for example by downloa
 @cindex Astronomical Data Query Language (ADQL)
 There are many astronomical databases available for downloading astronomical 
data.
 Most follow the International Virtual Observatory Alliance (IVOA, 
@url{https://ivoa.net}) standards (and in particular the Table Access Protocol, 
or TAP@footnote{@url{https://ivoa.net/documents/TAP}}).
-With TAP, it is possible to submit your queries via a command-line downloader 
(for example @command{curl}) to only get specific tables, targets (rows in a 
table) or measurements (columns in a table): you do not have to download the 
full table (which can be very large in some cases)!
+With TAP, it is possible to submit your queries via a command-line downloader 
(for example, @command{curl}) to only get specific tables, targets (rows in a 
table) or measurements (columns in a table): you do not have to download the 
full table (which can be very large in some cases)!
 These customizations are done through the Astronomical Data Query Language 
(ADQL@footnote{@url{https://ivoa.net/documents/ADQL}}).
 
 Therefore, if you are sufficiently familiar with TAP and ADQL, you can easily 
custom-download any part of an online dataset.
@@ -14804,7 +14805,7 @@ $ astfits query-output.fits -h0
 
 With the full command used to download the dataset, you only need a minimal 
knowledge of ADQL to do lower-level customizations on your downloaded dataset.
 You can simply copy that command and change the parts of the query string you 
want: ADQL is very powerful!
-For example you can ask the server to do mathematical operations on the 
columns and apply selections after those operations, or combine/match multiple 
datasets and etc.
+For example, you can ask the server to do mathematical operations on the columns and apply selections after those operations, or combine/match multiple datasets.
 We will try to add high-level interfaces for such capabilities, but generally, 
do not limit yourself to the high-level operations (that cannot cover 
everything!).
 
 @menu
@@ -14817,7 +14818,7 @@ We will try to add high-level interfaces for such 
capabilities, but generally, d
 
 The current list of databases supported by Query are listed at the end of this 
section.
 To get the list of available datasets within each database, you can use the 
@option{--information} option.
-For example with the command below you can get a list of the roughly 100 
datasets that are available within the ESA Gaia server with their description:
+For example, with the command below you can get a list of the roughly 100 datasets that are available within the ESA Gaia server with their description:
 
 @example
 $ astquery gaia --information
@@ -14827,7 +14828,7 @@ $ astquery gaia --information
 However, other databases like VizieR host many more datasets (tens of 
thousands!).
 Therefore it is very inconvenient to get the @emph{full} information every time you want to find your dataset of interest (the full metadata file of VizieR is more than 20Mb).
 In such cases, you can limit the downloaded and displayed information with the 
@code{--limitinfo} option.
-For example with the first command below, you can get all datasets relating to 
the MUSE (an instrument on the Very Large Telescope), and those that include 
Roland Bacon (Principle Investigator of MUSE) as an author (@code{Bacon, R.}).
+For example, with the first command below, you can get all datasets relating to MUSE (an instrument on the Very Large Telescope), and those that include Roland Bacon (Principal Investigator of MUSE) as an author (@code{Bacon, R.}).
 Recall that @option{-i} is the short format of @option{--information}.
 
 @example
@@ -14846,12 +14847,12 @@ $ astquery vizier --dataset=J/A+A/608/A2/udf10 -i
 For very popular datasets of a database, Query provides an easier-to-remember 
short name that you can feed to @option{--dataset}.
 This short name will map to the officially recognized name of the dataset on 
the server.
 In this mode, Query will also set positional columns accordingly.
-For example most VizieR datasets have an @code{RAJ2000} column (the RA and the 
epoch of 2000) so it is the default RA column name for coordinate search (using 
@option{--center} or @option{--overlapwith}).
-However, some datasets do not have this column (for example SDSS DR12).
+For example, most VizieR datasets have an @code{RAJ2000} column (the RA at the epoch of 2000) so it is the default RA column name for coordinate search (using @option{--center} or @option{--overlapwith}).
+However, some datasets do not have this column (for example, SDSS DR12).
 So when you use the short name and Query knows about this dataset, it will 
internally set the coordinate columns that SDSS DR12 has: @code{RA_ICRS} and 
@code{DEC_ICRS}.
 Recall that you can always change the coordinate columns with @option{--ccol}.
 
-For example in the VizieR and Gaia databases, the recognized name for the 
early data release 3 data is respectively @code{I/350/gaiaedr3} and 
@code{gaiaedr3.gaia_source}.
+For example, in the VizieR and Gaia databases, the recognized names for the early data release 3 data are respectively @code{I/350/gaiaedr3} and @code{gaiaedr3.gaia_source}.
 These technical names can be hard to remember.
 Therefore Query provides @code{gaiaedr3} (for VizieR) and @code{edr3} (for 
ESA's Gaia) shortcuts which you can give to @option{--dataset} instead.
 They will be directly mapped to the fully recognized name by Query.
@@ -14952,7 +14953,7 @@ grep '^<TR><TD>' ned-extinction.xml \
  | asttable -oned-extinction.fits
 @end verbatim
 
-Once the table is in FITS, you can easily get the extinction for a certain 
filter (for example the @code{SDSS r} filter) like the command below:
+Once the table is in FITS, you can easily get the extinction for a certain filter (for example, the @code{SDSS r} filter) with the command below:
 
 @example
 asttable ned-extinction.fits --equal=FILTER,"SDSS r" \
@@ -14967,7 +14968,7 @@ asttable ned-extinction.fits --equal=FILTER,"SDSS r" \
 @cindex Database, VizieR
 VizieR (@url{https://vizier.u-strasbg.fr}) is arguably the largest catalog database in astronomy: containing more than 20500 catalogs as of mid-January 2021.
 Almost all published catalogs in major projects, and even the tables in many 
papers are archived and accessible here.
-For example VizieR also has a full copy of the Gaia database mentioned below, 
with some additional standardized columns (like RA and Dec in J2000).
+For example, VizieR also has a full copy of the Gaia database mentioned below, with some additional standardized columns (like RA and Dec in J2000).
 
 The current implementation of @option{--limitinfo} only looks into the 
description of the datasets, but since VizieR is so large, there is still a lot 
of room for improvement.
 Until then, if @option{--limitinfo} is not sufficient, you can use VizieR's 
own web-based search for your desired dataset: 
@url{http://cdsarc.u-strasbg.fr/viz-bin/cat}
@@ -15114,7 +15115,7 @@ To achieve this, Query will ask the databases to 
provide a FITS table output (fo
 After downloading is complete, the raw downloaded file will be read into 
memory once by Query, and written into the file given to @option{--output}.
 The raw downloaded file will be deleted by default, but can be preserved with 
the @option{--keeprawdownload} option.
 This strategy avoids unnecessary surprises depending on database.
-For example some databases can download a compressed FITS table, even though 
we ask for FITS.
+For example, some databases may provide a compressed FITS table, even though we ask for FITS.
 But with the strategy above, the final output will be an uncompressed FITS 
file.
 The metadata that is added by Query (including the full download command) is 
also very useful for future usage of the downloaded data.
 Unfortunately many databases do not write the input queries into their 
generated tables.
@@ -15152,7 +15153,7 @@ Note that @emph{this is case-sensitive}.
 This option is only relevant when @option{--information} is also called.
 
 Databases may have thousands (or tens of thousands) of datasets.
-Therefore just the metadata (information) to show with @option{--information} 
can be tens of megabytes (for example the full VizieR metadata file is about 
23Mb as of January 2021).
+Therefore just the metadata (information) to show with @option{--information} 
can be tens of megabytes (for example, the full VizieR metadata file is about 
23Mb as of January 2021).
 Once downloaded, it can also be hard to parse manually.
 With @option{--limitinfo}, only the metadata of datasets that contain this 
string @emph{in their description} will be downloaded and displayed, greatly 
improving the speed of finding your desired dataset.
 
@@ -15174,7 +15175,7 @@ For more on finding and using your desired database, 
see @ref{Available database
 @itemx --column=STR[,STR[,...]]
 The column name(s) to retrieve from the dataset in the given order (not 
compatible with @option{--query}).
 If not given, all the dataset's columns for the selected rows will be queried 
(which can be large!).
-This option can take multiple values in one instance (for example 
@option{--column=ra,dec,mag}), or in multiple instances (for example 
@option{-cra -cdec -cmag}), or mixed (for example @option{-cra,dec -cmag}).
+This option can take multiple values in one instance (for example, 
@option{--column=ra,dec,mag}), or in multiple instances (for example, 
@option{-cra -cdec -cmag}), or mixed (for example, @option{-cra,dec -cmag}).
 
 In case you do not know the full list of the dataset's column names a priori, and you do not want to download all the columns (which can greatly decrease your download speed), you can use the @option{--information} option combined with the @option{--dataset} option, see @ref{Available databases}.
 
@@ -15204,7 +15205,7 @@ The region can either be a circle and the point 
(configured with @option{--radiu
 
 @item --ccol=STR,STR
 The name of the coordinate-columns in the dataset to compare with the values 
given to @option{--center}.
-Query will use its internal defaults for each dataset (for example 
@code{RAJ2000} and @code{DEJ2000} for VizieR data).
+Query will use its internal defaults for each dataset (for example, 
@code{RAJ2000} and @code{DEJ2000} for VizieR data).
 But each dataset is treated separately and it is not guaranteed that these 
columns exist in all datasets.
 Also, more than one coordinate system/epoch may be present in a dataset and you can use this option to construct your spatial constraint based on the other coordinate systems/epochs.
 
@@ -15212,7 +15213,7 @@ Also, more than one coordinate system/epoch may be 
present in a dataset and you
 @itemx --radius=FLT
 The radius about the requested center to use for the automatically generated 
query (not compatible with @option{--query}).
 The radius is in units of degrees, but you can use simple division with this 
option directly on the command-line.
-For example if you want a radius of 20 arc-minutes or 20 arc-seconds, you can 
use @option{--radius=20/60} or @option{--radius=20/3600} respectively (which is 
much more human-friendly than @code{0.3333} or @code{0.005556}).
+For example, if you want a radius of 20 arc-minutes or 20 arc-seconds, you can use @option{--radius=20/60} or @option{--radius=20/3600} respectively (which is much more human-friendly than @code{0.3333} or @code{0.005556}).
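 
 Putting these options together, a sketch of a complete spatial query (the central coordinates here are hypothetical):
 
 @example
 $ astquery gaia --dataset=edr3 --center=56.7,24.1 \
            --radius=20/3600 --column=ra,dec,phot_g_mean_mag \
            --output=gaia-query.fits
 @end example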
 
 @item -w FLT[,FLT]
 @itemx --width=FLT[,FLT]
@@ -15220,13 +15221,13 @@ The square (or rectangle) side length (width) about 
the requested center to use
 If only one value is given to @code{--width} the region will be a square, but 
if two values are given, the widths of the query box along each dimension will 
be different.
 The value(s) is (are) in the same units as the coordinate column (see 
@option{--ccol}, usually RA and Dec which are degrees).
 You can use simple division for each value directly on the command-line if you 
want relatively small (and more human-friendly) sizes.
-For example if you want your box to be 1 arc-minutes along the RA and 2 
arc-minutes along Dec, you can use @option{--width=1/60,2/60}.
+For example, if you want your box to be 1 arc-minute along RA and 2 arc-minutes along Dec, you can use @option{--width=1/60,2/60}.
 
 @item -g STR,FLT,FLT
 @itemx --range=STR,FLT,FLT
 The column name and numerical range (inclusive) of acceptable values in that 
column (not compatible with @option{--query}).
 This option can be called multiple times for applying range limits on many 
columns in one call (thus greatly reducing the download size).
-For example when used on the ESA gaia database, you can use 
@code{--range=phot_g_mean_mag,10:15} to only get rows that have a value between 
10 and 15 (inclusive on both sides) in the @code{phot_g_mean_mag} column.
+For example, when used on the ESA Gaia database, you can use @code{--range=phot_g_mean_mag,10:15} to only get rows that have a value between 10 and 15 (inclusive on both sides) in the @code{phot_g_mean_mag} column.
 
 If you want all rows larger, or smaller, than a certain number, you can use 
@code{inf}, or @code{-inf} as the first or second values respectively.
 For example, if you want objects with SDSS spectroscopic redshifts larger than 2 (from the VizieR @code{sdss12} database), you can use @option{--range=zsp,2,inf}.
@@ -15238,11 +15239,11 @@ Then you can edit it to be non-inclusive on your 
desired side.
 @item --noblank=STR[,STR]
 Only ask for rows that do not have a blank value in the @code{STR} column.
 This option can be called many times, and each call can have multiple column names (separated by a comma, @key{,}).
-For example if you want the retrieved rows to not have a blank value in 
columns @code{A}, @code{B}, @code{C} and @code{D}, you can use 
@command{--noblank=A -bB,C,D}.
+For example, if you want the retrieved rows to not have a blank value in columns @code{A}, @code{B}, @code{C} and @code{D}, you can use @command{--noblank=A -bB,C,D}.
 
 @item --sort=STR[,STR]
 Ask for the server to sort the downloaded data based on the given columns.
-For example let's assume your desired catalog has column @code{Z} for redshift 
and column @code{MAG_R} for magnitude in the R band.
+For example, let's assume your desired catalog has column @code{Z} for redshift and column @code{MAG_R} for magnitude in the R band.
 When you call @option{--sort=Z,MAG_R}, it will primarily sort the columns 
based on the redshift, but if two objects have the same redshift, they will be 
sorted by magnitude.
 You can add as many columns as you like for higher-level sorting.
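 
 As a sketch (the dataset name and the @code{Z} and @code{MAG_R} columns here are hypothetical):
 
 @example
 $ astquery vizier --dataset=your-dataset \
            --column=Z,MAG_R --sort=Z,MAG_R
 @end example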
 @end table
@@ -15271,7 +15272,7 @@ You can add as many columns as you like for 
higher-level sorting.
 
 Images are one of the major formats of data used in astronomy.
 This chapter explains the GNU Astronomy Utilities that are provided for their manipulation.
-For example cropping out a part of a larger image or convolving the image with 
a given kernel or applying a transformation to it.
+For example, cropping out a part of a larger image, convolving the image with a given kernel, or applying a transformation to it.
 
 @menu
 * Crop::                        Crop region(s) from a dataset.
@@ -15302,7 +15303,7 @@ When more than one crop is required, Crop will divide 
the crops between multiple
 Astronomical surveys are usually extremely large.
 So large in fact, that the whole survey will not fit into a reasonably sized 
file.
 Because of this, surveys usually cut the final image into separate tiles and 
store each tile in a file.
-For example the COSMOS survey's Hubble space telescope, ACS F814W image 
consists of 81 separate FITS images, with each one having a volume of 1.7 Giga 
bytes.
+For example, the COSMOS survey's Hubble Space Telescope ACS F814W image consists of 81 separate FITS images, each with a volume of 1.7 gigabytes.
 
 @cindex Stitch multiple images
 Even though the tile sizes are chosen to be large enough that too many 
galaxies/targets do not fall on the edges of the tiles, inevitably some do.
@@ -15327,11 +15328,11 @@ In order to be comprehensive, intuitive, and easy to 
use, there are two ways to
 @enumerate
 @item
 From its center and side length.
-For example if you already know the coordinates of an object and want to 
inspect it in an image or to generate postage stamps of a catalog containing 
many such coordinates.
+For example, if you already know the coordinates of an object and want to inspect it in an image or to generate postage stamps of a catalog containing many such coordinates.
 
 @item
 The vertices of the crop region; this can be useful for larger crops over
-many targets, for example to crop out a uniformly deep, or contiguous,
+many targets, for example, to crop out a uniformly deep, or contiguous,
 region of a large survey.
 @end enumerate
 
@@ -15349,7 +15350,7 @@ This can be very convenient when your input 
catalog/coordinates originated from
 
 @table @asis
 @item Image coordinates
-In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and 
widths in units of the input data-elements (for example pixels in an image, not 
world coordinates).
+In image mode (@option{--mode=img}), Crop interprets the pixel coordinates and 
widths in units of the input data-elements (for example, pixels in an image, 
not world coordinates).
 In image mode, only one image may be input.
 The output crop(s) can be defined in multiple ways as listed below.
 
@@ -15453,7 +15454,7 @@ In case you want to count from the top or right sides 
of the image, you can use
 When confronted with a @option{*}, Crop will replace it with the maximum 
length of the image in that dimension.
 So @command{*-10:*+10,*-20:*+20} will mean that the crop box will be 
@math{20\times40} pixels in size and only include the top corner of the input 
image with 3/4 of the image being covered by blank pixels, see @ref{Blank 
pixels}.
 
-If you feel more comfortable with space characters between the values, you can 
use as many space characters as you wish, just be careful to put your value in 
double quotes, for example @command{--section="5:200, 123:854"}.
+If you feel more comfortable with space characters between the values, you can use as many space characters as you wish; just be careful to put your value in double quotes, for example, @command{--section="5:200, 123:854"}.
 If you forget the quotes, anything after the first space will not be seen by 
@option{--section} and you will most probably get an error because the rest of 
your string will be read as a filename (which most probably does not exist).
 See @ref{Command-line} for a description of how the command-line works.
 
@@ -15463,7 +15464,7 @@ See @ref{Command-line} for a description of how the 
command-line works.
 
 @cindex Blank pixel
 The cropped box can potentially include pixels that are beyond the image range.
-For example when a target in the input catalog was very near the edge of the 
input image.
+For example, when a target in the input catalog was very near the edge of the input image.
 The parts of the cropped image that were not in the input image will be filled 
with the following two values depending on the data type of the image.
 In both cases, SAO DS9 will not color code those pixels.
 @itemize
@@ -15522,7 +15523,7 @@ $ astcrop --mode=img --center=568.342,2091.719 
--width=201 image.fits
 
 @noindent
 Crop has one mandatory argument which is the input image name(s), shown above 
with @file{ASTRdata ...}.
-You can use shell expansions, for example @command{*} for this if you have 
lots of images in WCS mode.
+You can use shell expansions (for example, @command{*}) for this if you have lots of images in WCS mode.
 If the crop box centers are in a catalog, you can use the @option{--catalog} 
option.
 In other cases, the parameters of the single cropped output must be given with command-line options.
 See @ref{Crop output} for how the output file name(s) can be specified.
@@ -15580,7 +15581,7 @@ $ astfits image.fits -h1 | cat -n
 For example, distortions have only been present in WCSLIB from version 5.15 
(released in mid 2016).
 Therefore some pipelines still apply their own specific set of WCS keywords 
for distortions and put them into the image header along with those that WCSLIB 
does recognize.
 So now that WCSLIB recognizes most of the standard distortion parameters, they 
will get confused with the old ones and give wrong results.
-For example in the CANDELS-GOODS South images that were created before WCSLIB 
5.15@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
+For example, in the CANDELS-GOODS South images that were created before WCSLIB 5.15@footnote{@url{https://archive.stsci.edu/pub/hlsp/candels/goods-s/gs-tot/v1.0/}}.
 
 The two @option{--hstartwcs} and @option{--hendwcs} are thus provided so when 
using older datasets, you can specify what region in the FITS headers you want 
to use to read the WCS keywords.
 Note that this is only relevant for reading the WCS information, basic data 
information like the image size are read separately.
@@ -15614,7 +15615,7 @@ If in WCS mode, value(s) given to this option will be 
read in the same units as
 This option may take either a single value (to be used for all dimensions: 
@option{--width=10} in image-mode will crop a @mymath{10\times10} pixel image) 
or multiple values (a specific value for each dimension: @option{--width=10,20} 
in image-mode will crop a @mymath{10\times20} pixel image).
 
 The @code{--width} option also accepts fractions.
-For example if you want the width of your crop to be 3 by 5 arcseconds along 
RA and Dec respectively and you are in wcs-mode, you can use: 
@option{--width=3/3600,5/3600}.
+For example, if you want the width of your crop to be 3 by 5 arcseconds along RA and Dec respectively and you are in WCS mode, you can use @option{--width=3/3600,5/3600}.
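 
 A full sketch of such a call (the central coordinates here are hypothetical):
 
 @example
 $ astcrop image.fits --mode=wcs --center=53.16,-27.78 \
           --width=3/3600,5/3600
 @end example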
 
 The final output will have an odd number of pixels to allow easy 
identification of the pixel which keeps your requested coordinate (from 
@option{--center} or @option{--catalog}).
 If you want an even sided crop, you can run Crop afterwards with 
@option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which 
side you do not need), see @ref{Crop section syntax}.
@@ -15626,7 +15627,7 @@ However, for an even-sided crop, it will be very hard 
to identify the central pi
 @item -X
 @itemx --widthinpix
 In WCS mode, interpret the value given to @option{--width} as a number of pixels, not in WCS units like degrees.
-This is useful when you want a fixed crop size in pixels, even though your 
center coordinates are in WCS (for example RA and Dec).
+This is useful when you want a fixed crop size in pixels, even though your 
center coordinates are in WCS (for example, RA and Dec).
 
 @item -l STR
 @itemx -l FLT:FLT,...
@@ -15637,7 +15638,7 @@ Polygon vertice coordinates (when value is in 
@option{FLT,FLT:FLT,FLT:...} forma
 Each vertex can either be in degrees (a single floating point number) or sexagesimal (in formats of `@code{_h_m_}' for RA and `@code{_d_m_}' for Dec, or simply `@code{_:_:_}' for either of them).
 
 The vertices are used to define the polygon, in the same order as given to this option.
-When the vertices are not necessarily ordered in the proper order (for example 
one vertice in a square comes after its diagonal opposite), you can add the 
@option{--polygonsort} option which will attempt to sort the vertices before 
cropping.
+When the vertices are not in the proper order (for example, one vertex in a square comes after its diagonal opposite), you can add the @option{--polygonsort} option which will attempt to sort the vertices before cropping.
 Note that for concave polygons, sorting is not recommended because there is no unique solution; for more, see the description under @option{--polygonsort}.
 
 This option can be used both in the image and WCS modes, see @ref{Crop modes}.
@@ -15672,7 +15673,7 @@ $ astcrop --mode=wcs desired-filter-image(s).fits       
    \
 @cindex SAO DS9 region file
 @cindex Region file (SAO DS9)
 More generally, you have an image and want to define the polygon yourself (it 
is not already published like the example above).
-As the number of vertices increases, checking the vertex coordinates on a FITS 
viewer (for example SAO DS9) and typing them in, one by one, can be very 
tedious and prone to typo errors.
+As the number of vertices increases, checking the vertex coordinates on a FITS 
viewer (for example, SAO DS9) and typing them in, one by one, can be very 
tedious and prone to typo errors.
 In such cases, you can make a polygon ``region'' in DS9 and using your mouse, easily define (and visually see) it. Given that SAO DS9 has a graphical user interface (GUI), if you do not have the polygon vertices beforehand, it is much easier to build your polygon there and pass it on to Crop through the region file.
 
 You can take the following steps to make an SAO DS9 region file containing 
your polygon.
@@ -15769,7 +15770,7 @@ This check is only applied when the cropped region(s) 
are defined by their cente
 
 The units of the value are interpreted based on the @option{--mode} value (in 
WCS or pixel units).
 The ultimate checked region size (in pixels) will be an odd integer around the 
center (converted from WCS, or when an even number of pixels are given to this 
option).
-In WCS mode, the value can be given as fractions, for example if the WCS units 
are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 
arcseconds.
+In WCS mode, the value can be given as fractions; for example, if the WCS units are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 arcseconds.
 
 Because survey regions do not often have a clean square or rectangle shape, 
some of the pixels on the sides of the survey FITS image do not commonly have 
any data and are blank (see @ref{Blank pixels}).
 So when the catalog was not generated from the input image, it often happens 
that the image does not have data over some of the points.
@@ -15954,15 +15955,15 @@ For more information on how to run Arithmetic, please 
see @ref{Invoking astarith
 
 @cindex Post-fix notation
 @cindex Reverse Polish Notation
-The most common notation for arithmetic operations is the 
@url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the 
operator goes between the two operands, for example @mymath{4+5}.
+The most common notation for arithmetic operations is the 
@url{https://en.wikipedia.org/wiki/Infix_notation, infix notation} where the 
operator goes between the two operands, for example, @mymath{4+5}.
 The infix notation is the preferred way in most programming languages which 
come with scripting features for large programs.
 This is because the infix notation requires a way to define precedence when 
more than one operator is involved.
 
-For example consider the statement @code{5 + 6 / 2}.
+For example, consider the statement @code{5 + 6 / 2}.
 Should 6 first be divided by 2, then added to 5?
 Or should 5 first be added to 6, then divided by 2?
 Therefore we need parenthesis to show precedence: @code{5+(6/2)} or 
@code{(5+6)/2}.
-Furthermore, if you need to leave a value for later processing, you will need 
to define a variable for it; for example @code{a=(5+6)/2}.
+Furthermore, if you need to leave a value for later processing, you will need 
to define a variable for it; for example, @code{a=(5+6)/2}.
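 
 In Reverse Polish Notation, the two possible readings above can each be written unambiguously, without any parenthesis or variables (a sketch; the parts after @key{#} are only explanatory comments):
 
 @example
 $ astarithmetic 5 6 2 / + --quiet    # infix: 5 + (6/2)
 $ astarithmetic 5 6 + 2 / --quiet    # infix: (5+6) / 2
 @end example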
 
 Gnuastro provides libraries where you can also use infix notation in C or C++ 
programs.
 However, Gnuastro's programs are primarily designed to be run on the 
command-line and the level of complexity that infix notation requires can be 
annoying/confusing to write on the command-line (where they can get confused 
with the shell's parenthesis or variable definitions).
@@ -16080,7 +16081,7 @@ For more on the range (for integers) and precision (for 
floats), see @ref{Numeri
 
 There is a price to be paid for this improved efficiency in integers: your 
wisdom!
 If you have not selected your types wisely, strange situations may happen.
-For example try the command below:
+For example, try the command below:
 
 @example
 $ astarithmetic 125 10 +
@@ -16128,8 +16129,8 @@ To be optimal, we therefore trust that you (the wise 
Gnuastro user!) make the ap
 In this section, the recognized operators in Arithmetic (and the Table program's @ref{Column arithmetic}) are listed and discussed in detail with examples.
 As mentioned before, to be able to easily do complex operations on the 
command-line, the Reverse Polish Notation is used (where you write 
`@mymath{4\quad5\quad+}' instead of `@mymath{4 + 5}'), if you are not already 
familiar with it, before continuing, please see @ref{Reverse polish notation}.
 
-The operands to all operators can be a data array (for example a FITS image or 
data cube) or a number, the output will be an array or number according to the 
inputs.
-For example a number multiplied by an array will produce an array.
+The operands to all operators can be a data array (for example, a FITS image or data cube) or a number; the output will be an array or number according to the inputs.
+For example, a number multiplied by an array will produce an array.
 The numerical data type of the output of each operator is described within it.
 
 @cartouche
@@ -16148,22 +16149,22 @@ $ astarithmetic input.fits nan eq 
--output=all-zeros.fits
 @end example
 
 @noindent
-Note that on the command-line you can write NaN in any case (for example 
@command{NaN}, or @command{NAN} are also acceptable).
+Note that on the command-line you can write NaN in any case (for example, @command{NaN} or @command{NAN} are also acceptable).
 Reading NaN as a floating point number in Gnuastro is not case-sensitive.
 @end cartouche
 
 @menu
-* Basic mathematical operators::  For example +, -, /, log, pow, and etc.
+* Basic mathematical operators::  For example, +, -, /, log, pow, etc.
 * Trigonometric and hyperbolic operators::  sin, cos, atan, asinh, etc.
 * Constants::                   Physical and Mathematical constants.
 * Unit conversion operators::   Various unit conversions necessary.
-* Statistical operators::       Statistics of a single dataset (for example 
mean).
+* Statistical operators::       Statistics of a single dataset (for example, 
mean).
 * Stacking operators::          Coadding or combining multiple datasets into 
one.
 * Filtering operators::         Smoothing a dataset through mixing pixel with 
neighbors.
 * Interpolation operators::     Giving blank pixels a value.
 * Dimensionality changing operators::  Collapse or expand a dataset.
 * Conditional operators::       Select certain pixels within the dataset.
-* Mathematical morphology operators::  Work on binary images, for example 
erode.
+* Mathematical morphology operators::  Work on binary images, for example, 
erode.
 * Bitwise operators::           Work on bits within one pixel.
 * Numerical type conversion operators::  Convert the numeric datatype of a 
dataset.
 * Random number generators::    Random numbers can be used to add noise for 
example.
@@ -16183,12 +16184,12 @@ If you are new to Gnuastro, just read the description 
of each carefully.
 
 @item +
 Addition, so ``@command{4 5 +}'' is equivalent to @mymath{4+5}.
-For example in the command below, the value 20000 is added to each pixel's 
value in @file{image.fits}:
+For example, in the command below, the value 20000 is added to each pixel's value in @file{image.fits}:
 @example
 $ astarithmetic 20000 image.fits +
 @end example
 You can also use this operator to sum the values of one pixel in two images (which have to be the same size).
-For example in the commands below (which are identical, see paragraph after 
the commands), each pixel of @file{sum.fits} is the sum of the same pixel's 
values in @file{a.fits} and @file{b.fits}.
+For example, in the commands below (which are identical, see the paragraph after the commands), each pixel of @file{sum.fits} is the sum of the same pixel's values in @file{a.fits} and @file{b.fits}.
 @example
 $ astarithmetic a.fits b.fits + -h1 -h1 --output=sum.fits
 $ astarithmetic a.fits b.fits + -g1     --output=sum.fits
@@ -16196,7 +16197,7 @@ $ astarithmetic a.fits b.fits + -g1     
--output=sum.fits
 The HDU/extension has to be specified for each image with @option{-h}.
 However, if the HDUs are the same in all inputs, you can use @option{-g} to only specify the HDU once.
 
-If you need to add more than one dataset, one way is to use this operator 
multiple times, for example see the two commands below that are identical in 
the Reverse Polish Notation (@ref{Reverse polish notation}):
+If you need to add more than one dataset, one way is to use this operator multiple times; for example, see the two commands below that are identical in the Reverse Polish Notation (@ref{Reverse polish notation}):
 @example
 $ astarithmetic a.fits b.fits + c.fits + -osum.fits
 $ astarithmetic a.fits b.fits c.fits + + -osum.fits
@@ -16214,7 +16215,7 @@ $ astarithmetic a.fits b.fits - -g1 --output=sub.fits
 
 @item x
 Multiplication, so ``@command{4 5 x}'' is equivalent to @mymath{4\times5}.
-For example in the command below, the value of each output pixel is 5 times 
its value in @file{image.fits}:
+For example, in the command below, the value of each output pixel is 5 times its value in @file{image.fits}:
 @example
 $ astarithmetic image.fits 5 x
 @end example
@@ -16235,7 +16236,7 @@ $ astarithmetic a.fits b.fits / -g1 --output=div.fits
 Modulo (remainder), so ``@command{3 2 %}'' will return @mymath{1}.
 Note that the modulo operator only works on integer types (see @ref{Numeric 
data types}).
 This operator is therefore not defined for most processed astronomical images that have floating-point values.
-However it is useful in labeled images, for example @ref{Segment output}).
+However, it is useful in labeled images (for example, @ref{Segment output}).
 In such cases, each pixel is the integer label of the object it is associated with; hence, with the example command below, we can change the labels to only be between 1 and 4, decreasing the number of objects in the image to 4/5th (all objects with a label that is a multiple of 5 will be set to 0).
 @example
 $ astarithmetic label.fits 5 1 %
@@ -16243,7 +16244,7 @@ $ astarithmetic label.fits 5 1 %
 
 @item abs
 Absolute value of first operand, so ``@command{4 abs}'' is equivalent to 
@mymath{|4|}.
-For example the output of the command bellow will not have any negative pixels 
(all negative pixels will be multiplied by @mymath{-1} to become positive)
+For example, the output of the command below will not have any negative pixels (all negative pixels will be multiplied by @mymath{-1} to become positive):
 @example
 $ astarithmetic image.fits abs
 @end example
@@ -16251,7 +16252,7 @@ $ astarithmetic image.fits abs
 
 @item pow
 First operand to the power of the second, so ``@command{4.3 5 pow}'' is 
equivalent to @mymath{4.3^{5}}.
-For example with the command below all pixels will be squared
+For example, with the command below all pixels will be squared:
 @example
 $ astarithmetic image.fits 2 pow
 @end example
@@ -16261,8 +16262,8 @@ The square root of the first operand, so ``@command{5 
sqrt}'' is equivalent to @
 Since the square root is only defined for positive values, any negative-valued 
pixel will become NaN (blank).
 The output will have a floating point type, but its precision is determined 
from the input: if the input is a 64-bit floating point, the output will also 
be 64-bit.
 Otherwise, the output will be 32-bit floating point (see @ref{Numeric data 
types} for the respective precision).
-Therefore if you require 64-bit precision in estimating the square root, 
convert the input to 64-bit floating point first, for example with @code{5 
float64 sqrt}.
-For example each pixel of the output of the command below will be the square 
root of that pixel in the input.
+Therefore if you require 64-bit precision in estimating the square root, 
convert the input to 64-bit floating point first, for example, with @code{5 
float64 sqrt}.
+For example, each pixel of the output of the command below will be the square root of that pixel in the input.
 @example
 $ astarithmetic image.fits sqrt
 @end example
@@ -16282,7 +16283,7 @@ $ astarithmetic image.fits set-i i i minvalue - sqrt
 @item log
 Natural logarithm of first operand, so ``@command{4 log}'' is equivalent to 
@mymath{ln(4)}.
 Negative pixels will become NaN, and the output type is determined from the 
input, see the explanation under @command{sqrt} for more on these features.
-For example the command below will take the natural logarithm of every pixel 
in the input.
+For example, the command below will take the natural logarithm of every pixel in the input.
 @example
 $ astarithmetic image.fits log --output=log.fits
 @end example
@@ -16290,7 +16291,7 @@ $ astarithmetic image.fits log --output=log.fits
 @item log10
 Base-10 logarithm of first popped operand, so ``@command{4 log10}'' is equivalent to @mymath{log_{10}(4)}.
 Negative pixels will become NaN, and the output type is determined from the 
input, see the explanation under @command{sqrt} for more on these features.
-For example the command below will take the base-10 logarithm of every pixel 
in the input.
+For example, the command below will take the base-10 logarithm of every pixel in the input.
 @example
 $ astarithmetic image.fits log10
 @end example
@@ -16321,7 +16322,7 @@ They take one operand and the returned values are in 
units of degrees.
 Inverse tangent (output in units of degrees) that uses the signs of the input 
coordinates to distinguish between the quadrants.
 This operator therefore needs two operands: the first popped operand is 
assumed to be the X axis position of the point, and the second popped operand 
is its Y axis coordinate.
 
-For example see the commands below.
+For example, see the commands below.
 To be more clear, we are using Table's @ref{Column arithmetic} which uses 
exactly the same internal library function as the Arithmetic program for images.
 We are showing the results for four points in the four quadrants of the 2D 
space (if you want to try running them, you do not need to type/copy the parts 
after @key{#}).
 The first point (2,2) is in the first quadrant, therefore the returned angle 
is 45 degrees.
@@ -16420,7 +16421,7 @@ See 
@url{https://en.wikipedia.org/wiki/Fine-structure_constant, Wikipedia}.
 @node Unit conversion operators, Statistical operators, Constants, Arithmetic 
operators
 @subsubsection Unit conversion operators
 
-It often happens that you have data in one unit (for example magnitudes to 
measure the brightness of a galaxy), but would like to convert it into another 
(for example electron counts on your CCD).
+It often happens that you have data in one unit (for example, magnitudes to 
measure the brightness of a galaxy), but would like to convert it into another 
(for example, electron counts on your CCD).
 While the equations for these unit conversions can easily be found on the 
internet, the operators in this section are designed to simplify the process.
 
 @table @command
@@ -16429,7 +16430,7 @@ While the equations for the unit conversions can be 
easily found on the internet
 Convert counts (usually CCD outputs) to magnitudes using the given zero point.
 The zero point is the first popped operand and the count image or value is the 
second popped operand.
 
-For example assume you have measured the standard deviation of the noise in an 
image to be @code{0.1} counts, and the image's zero point is @code{22.5} and 
you want to measure the @emph{per-pixel} surface brightness limit of the 
dataset@footnote{The @emph{per-pixel} surface brightness limit is the magnitude 
of the noise standard deviation. For more on surface brightness see 
@ref{Brightness flux magnitude}.
+For example, assume you have measured the standard deviation of the noise in 
an image to be @code{0.1} counts, the image's zero point is @code{22.5}, and 
you want to measure the @emph{per-pixel} surface brightness limit of the 
dataset@footnote{The @emph{per-pixel} surface brightness limit is the magnitude 
of the noise standard deviation. For more on surface brightness see 
@ref{Brightness flux magnitude}.
 In the example command, because the output is a single number, we are using 
@option{--quiet} to avoid printing extra information.}.
 To apply this operator on an image, simply replace @code{0.1} with the image 
name, as described below.
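 
 A sketch of that measurement (assuming the operand order described above, 
with the counts before the zero point):
 
 @example
 $ astarithmetic 0.1 22.5 counts-to-mag --quiet
 @end example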
 
@@ -16443,7 +16444,7 @@ For an example of applying this operator on an image, 
see the description of sur
 @item mag-to-counts
 Convert magnitudes to counts (usually CCD outputs) using the given zero point.
 The zero point is the first popped operand and the magnitude value is the 
second.
-For example if an object has a magnitude of 20, you can estimate the counts 
corresponding to it (when the image has a zero point of 24.8) with this command:
+For example, if an object has a magnitude of 20, you can estimate the counts 
corresponding to it (when the image has a zero point of 24.8) with this command:
 Note that because the output is a single number, we are using @option{--quiet} 
to avoid printing extra information.
 
 @example
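 # A sketch following the operand order described above (the magnitude
 # before the zero point); the exact command is an assumption:
 $ astarithmetic 20 24.8 mag-to-counts --quiet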
@@ -16456,7 +16457,7 @@ The first popped operand is the area (in 
arcsec@mymath{^2}), the second popped o
 Estimating the surface brightness involves taking the logarithm.
 Therefore this operator will produce NaN for counts with a negative value.
 
-For example with the commands below, we read the zeropoint from the image 
headers (assuming it is in the @code{ZPOINT} keyword), we calculate the pixel 
area from the image itself, and we call this operator to convert the image 
pixels (in counts) to surface brightness (mag/arcsec@mymath{^2}).
+For example, with the commands below, we read the zeropoint from the image 
headers (assuming it is in the @code{ZPOINT} keyword), we calculate the pixel 
area from the image itself, and we call this operator to convert the image 
pixels (in counts) to surface brightness (mag/arcsec@mymath{^2}).
 
 @example
 $ zeropoint=$(astfits image.fits --keyvalue=ZPOINT -q)
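 # Sketch of the remaining steps; the pixel area value (0.01 arcsec^2) is
 # a hypothetical placeholder for the area calculated from the image:
 $ astarithmetic image.fits $zeropoint 0.01 counts-to-sb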
@@ -16495,7 +16496,7 @@ For the full equation and basic definitions, see 
@ref{Brightness flux magnitude}
 
 @cindex SDSS
 @cindex Nanomaggy
-For example SDSS images are calibrated in units of nanomaggies, with a fixed 
zero point magnitude of 22.5.
+For example, SDSS images are calibrated in units of nanomaggies, with a fixed 
zero point magnitude of 22.5.
 Therefore you can convert the units of SDSS image pixels to Janskys with the 
command below:
 
 @example
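 # Sketch (nanomaggy zero point of 22.5, as described above); the exact
 # command is an assumption:
 $ astarithmetic sdss.fits 22.5 counts-to-jy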
@@ -16541,7 +16542,7 @@ Convert Light-years (LY) to Parsecs (PCs).
 This operator takes a single argument which is interpreted to be the input LYs.
 The conversion is done from IAU's definition of the light-year 
(9460730472580800 m @mymath{\approx} 63241.077 AU = 0.306601 PC, for the 
conversion of AU to PC, see the description of @code{au-to-pc}).
 
-For example the distance of Andromeda galaxy to our galaxy is 2.5 million 
light-years, so its distance in kilo-Parsecs can be calculated with the command 
below (note that we want the output in kilo-parsecs, so we are dividing the 
output of this operator by 1000):
+For example, the distance of the Andromeda galaxy from our galaxy is 2.5 
million light-years, so its distance in kilo-parsecs can be calculated with the 
command below (note that we want the output in kilo-parsecs, so we divide the 
output of this operator by 1000):
 
 @example
 echo 2.5e6 | asttable -c'arith $1 ly-to-pc 1000 /'
@@ -16574,10 +16575,10 @@ The operators in this section take a single dataset 
as input, and will return th
 Minimum value in the first popped operand, so ``@command{a.fits minvalue}'' 
will push the minimum pixel value in this image onto the stack.
 When this operator acts on a single image, the output (operand that is put 
back on the stack) will no longer be an image, but a number.
 The output of this operator has the same type as the input.
-This operator is mainly intended for multi-element datasets (for example 
images or data cubes), if the popped operand is a number, it will just return 
it without any change.
+This operator is mainly intended for multi-element datasets (for example, 
images or data cubes); if the popped operand is a number, it will just be 
returned without any change.
 
 Note that when the final remaining/output operand is a single number, it is 
printed onto the standard output.
-For example with the command below the minimum pixel value in 
@file{image.fits} will be printed in the terminal:
+For example, with the command below, the minimum pixel value in 
@file{image.fits} will be printed in the terminal:
 @example
 $ astarithmetic image.fits minvalue
 @end example
@@ -16641,7 +16642,7 @@ But you can use @option{--onedasimage} or 
@option{--onedonstdout} to respectivel
 
 Although you can use this operator on a floating point dataset, due to 
floating-point errors it may give unreasonable values: the tenth decimal digit 
is also considered, even though it may be statistically meaningless, see 
@ref{Numeric data types}.
 It is therefore better/recommended to use it on an integer dataset, like the 
labeled images of @ref{Segment output}, where each pixel has the integer label 
of the object/clump it is associated with.
-For example let's assume you have cropped a region of a larger labeled image 
and want to find the labels/objects that are within the crop.
+For example, let's assume you have cropped a region of a larger labeled image 
and want to find the labels/objects that are within the crop.
 With this operator, this job is trivial:
 @example
 $ astarithmetic seg-crop.fits unique
@@ -16654,7 +16655,7 @@ Since the blank pixels are being removed, the output 
dataset will always be sing
 Recall that by default, single-dimensional datasets are stored as a table 
column in the output.
 But you can use @option{--onedasimage} or @option{--onedonstdout} to 
respectively store them as a single-dimensional FITS array/image, or to print 
them on the standard output.
 
-For example with the command below, the non-blank pixel values of 
@file{cropped.fits} are printed on the command-line (the @option{--quiet} 
option is used to remove the extra information that Arithmetic prints as it 
reads the inputs, its version and its running time).
+For example, with the command below, the non-blank pixel values of 
@file{cropped.fits} are printed on the command-line (the @option{--quiet} 
option is used to remove the extra information that Arithmetic prints as it 
reads the inputs, its version and its running time).
 
 @example
 $ astarithmetic cropped.fits nonblank --onedonstdout --quiet
@@ -16694,7 +16695,7 @@ All the subsequently popped operands must have the same 
type and size.
 This operator (and all the variable-operand operators similar to it that are 
discussed below) will work in multi-threaded mode unless Arithmetic is called 
with the @option{--numthreads=1} option, see @ref{Multi-threaded operations}.
 
 Each pixel of the output of the @code{min} operator will be given the minimum 
value of the same pixel from all the popped operands/images.
-For example the following command will produce an image with the same size and 
type as the three inputs, but each output pixel value will be the minimum of 
the same pixel's values in all three input images.
+For example, the following command will produce an image with the same size 
and type as the three inputs, but each output pixel value will be the minimum 
of the same pixel's values in all three input images.
 
 @example
 $ astarithmetic a.fits b.fits c.fits 3 min --output=min.fits
@@ -16708,7 +16709,7 @@ NaN/blank pixels will be ignored, see @ref{Blank 
pixels}.
 
 @item
 The output will have the same type as the inputs.
-This is natural for the @command{min} and @command{max} operators, but for 
other similar operators (for example @command{sum}, or @command{average}) the 
per-pixel operations will be done in double precision floating point and then 
stored back in the input type.
+This is natural for the @command{min} and @command{max} operators, but for 
other similar operators (for example, @command{sum} or @command{average}), the 
per-pixel operations will be done in double precision floating point and then 
stored back in the input type.
 Therefore, if the input was an integer, C's internal type conversion will be 
used.
 
 @item
@@ -16797,7 +16798,7 @@ This operator will combine the specified number of 
inputs into a single output t
 This operator is very similar to @command{min}, with the exception that it 
expects two operands (parameters for sigma-clipping) before the total number of 
inputs.
 The first popped operand is the termination criteria and the second is the 
multiple of @mymath{\sigma}.
 
-For example in the command below, the first popped operand (@command{0.2}) is 
the sigma clipping termination criteria.
+For example, in the command below, the first popped operand (@command{0.2}) is 
the sigma-clipping termination criteria.
 If the termination criteria is larger than, or equal to, 1 it is interpreted 
as the number of clips to do.
 But if it is between 0 and 1, then it is the tolerance level on the standard 
deviation (see @ref{Sigma clipping}).
 The second popped operand (@command{5}) is the multiple of sigma to use in 
sigma-clipping.
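 
 As a sketch (with @command{sigclip-mean} as an assumed representative of this 
family of operators), such a call on three inputs could look like this:
 
 @example
 $ astarithmetic a.fits b.fits c.fits 3 5 0.2 sigclip-mean
 @end example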
@@ -16882,7 +16883,7 @@ This is very similar to @code{filter-mean}, except that 
all outliers (identified
 As described there, two extra input parameters are necessary for 
@mymath{\sigma}-clipping: the multiple of @mymath{\sigma} and the termination 
criteria.
 @code{filter-sigclip-mean} therefore needs to pop two other operands from the 
stack after the dimensions of the box.
 
-For example the line below uses the same box size as the example of 
@code{filter-mean}.
+For example, the line below uses the same box size as the example of 
@code{filter-mean}.
 However, all elements in the box that are iteratively beyond @mymath{3\sigma} 
of the distribution's median are removed from the final calculation of the mean 
until the change in @mymath{\sigma} is less than @mymath{0.2}.
 
 @example
@@ -16890,7 +16891,7 @@ $ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-mean
 @end example
 
 The median (which needs a sorted dataset) is necessary for 
@mymath{\sigma}-clipping, therefore @code{filter-sigclip-mean} can be 
significantly slower than @code{filter-mean}.
-However, if there are strong outliers in the dataset that you want to ignore 
(for example emission lines on a spectrum when finding the continuum), this is 
a much better solution.
+However, if there are strong outliers in the dataset that you want to ignore 
(for example, emission lines on a spectrum when finding the continuum), this is 
a much better solution.
 
 @item filter-sigclip-median
 Apply @mymath{\sigma}-clipped median filtering to the input dataset.
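 
 As a sketch, mirroring the @code{filter-sigclip-mean} example above (the same 
operand order is assumed, only the operator name differs):
 
 @example
 $ astarithmetic 3 0.2 5 4 image.fits filter-sigclip-median
 @end example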
@@ -16912,13 +16913,13 @@ The number of the nearest non-blank neighbors used to 
calculate the median is gi
 The distance of the nearest non-blank neighbors is irrelevant in this 
interpolation.
 The neighbors of each blank pixel will be parsed in expanding circular rings 
(for 2D images) or spherical surfaces (for a 3D cube) and each non-blank 
element over them is stored in memory.
 When the requested number of non-blank neighbors has been found, their median 
is used to replace that blank element.
-For example the line below replaces each blank element with the median of the 
nearest 5 pixels.
+For example, the line below replaces each blank element with the median of the 
5 nearest pixels.
 
 @example
 $ astarithmetic image.fits 5 interpolate-medianngb
 @end example
 
-When you want to interpolate blank regions and you want each blank region to 
have a fixed value (for example the centers of saturated stars) this operator 
is not good.
+When you want to interpolate blank regions and you want each blank region to 
have a fixed value (for example, the centers of saturated stars), this operator 
is not suitable.
 This is because the pixels used to interpolate various parts of the region 
differ.
 For such scenarios, you may use @code{interpolate-maxofregion} or 
@code{interpolate-minofregion} (described below).
 
@@ -16933,7 +16934,7 @@ One useful implementation of this operator is to fill 
the saturated pixels of st
 Interpolate all blank regions (consisting of many blank pixels that are 
touching) in the second popped operand with the minimum value of the pixels 
that are immediately bordering that region (a single value).
 The first popped operand is the connectivity (see description in 
@command{connected-components}).
 
-For example with the command below all the connected blank regions of 
@file{image.fits} will be filled.
+For example, with the command below, all the connected blank regions of 
@file{image.fits} will be filled.
 It's an image (2D dataset), so a connectivity of 2 means that the independent 
blank regions are defined by 8-connected neighbors.
 If connectivity was 1, the regions would be defined by 4-connectivity: blank 
regions that may only be touching on the corner of one pixel would be 
identified as separate regions.
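 
 A sketch of such a command (assuming this is the 
@code{interpolate-minofregion} operator mentioned above):
 
 @example
 $ astarithmetic image.fits 2 interpolate-minofregion
 @end example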
 
@@ -17030,7 +17031,7 @@ Therefore, when the WCS is important for later 
processing, be sure that the inpu
 @cindex IFU: Integral Field Unit
 @cindex Integral field unit (IFU)
 One common application of this operator is the creation of pseudo broad-band 
or narrow-band 2D images from 3D data cubes.
-For example integral field unit (IFU) data products that have two spatial 
dimensions (first two FITS dimensions) and one spectral dimension (third FITS 
dimension).
+For example, integral field unit (IFU) data products have two spatial 
dimensions (the first two FITS dimensions) and one spectral dimension (the 
third FITS dimension).
 The command below will collapse the whole third dimension into a 2D array the 
size of the first two dimensions, and then convert the output to 
single-precision floating point (as discussed above).
 
 @example
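 # Sketch; the 'collapse-sum' operator and its dimension operand (3, the
 # third FITS dimension) are assumptions based on the description above:
 $ astarithmetic cube.fits 3 collapse-sum float32 --output=collapsed.fits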
@@ -17191,7 +17192,7 @@ Both operands have to be the same kind: either both 
images or both numbers and i
 $ astarithmetic image1.fits image2.fits -g1 and
 @end example
 
-For example if you only want to see which pixels in an image have a value 
@emph{between} 50 (greater equal, or inclusive) and 200 (less than, or 
exclusive), you can use this command:
+For example, if you only want to see which pixels in an image have a value 
@emph{between} 50 (inclusive) and 200 (exclusive), you can use this command:
 @example
 $ astarithmetic image.fits set-i i 50 ge i 200 lt and
 @end example
@@ -17201,7 +17202,7 @@ Logical OR: returns 1 if either one of the operands is 
non-zero and 0 only when
 Both operands have to be the same kind: either both images or both numbers.
 The usage is similar to @code{and}.
 
-For example if you only want to see which pixels in an image have a value 
@emph{outside of} -100 (greater equal, or inclusive) and 200 (less than, or 
exclusive), you can use this command:
+For example, if you only want to see which pixels in an image have a value 
@emph{outside of} the range from -100 (inclusive) to 200 (exclusive), you can 
use this command:
 @example
 $ astarithmetic image.fits set-i i -100 lt i 200 ge or
 @end example
@@ -17209,7 +17210,7 @@ $ astarithmetic image.fits set-i i -100 lt i 200 ge or
 @item not
 Logical NOT: returns 1 when the operand is 0 and 0 when the operand is 
non-zero.
 The operand can be an image or a number; for an image, it is applied to each 
pixel separately.
-For example if you want to know which pixels are not blank, you can use not on 
the output of the @command{isblank} operator described below:
+For example, if you want to know which pixels are not blank, you can use 
@command{not} on the output of the @command{isblank} operator described below:
 @example
 $ astarithmetic image.fits isblank not
 @end example
@@ -17244,7 +17245,7 @@ The 2nd popped operand (@file{binary.fits}) has to have 
@code{uint8} (or @code{u
 It is treated as a binary dataset (with only two values: zero and non-zero, 
hence the name @code{binary.fits} in this example).
 However, commonly you will not be dealing with an actual FITS file of a 
condition/binary image.
 You will probably define the condition in the same run based on some other 
reference image and use the conditional and logical operators above to make a 
true/false (or one/zero) image for you internally.
-For example the case below:
+For example, consider the case below:
 
 @example
 $ astarithmetic in.fits reference.fits 100 gt new.fits where
@@ -17276,8 +17277,8 @@ $ astarithmetic in.fits reference.fits 100 gt nan where
 @cindex Mathematical morphology
 From Wikipedia: ``Mathematical morphology (MM) is a theory and technique for 
the analysis and processing of geometrical structures, based on set theory, 
lattice theory, topology, and random functions. MM is most commonly applied to 
digital images''.
 In theory it extends a very large body of research and methods in image 
processing, but currently in Gnuastro it mainly applies to images that are 
binary (only have a value of 0 or 1).
-For example you have applied the greater-than operator (@code{gt}, see 
@ref{Conditional operators}) to select all pixels in your image that are larger 
than a value of 100.
-But they will all have a value of 1, and you want to separate the various 
groups of pixels that are connected (for example peaks of stars in your image).
+For example, suppose you have applied the greater-than operator (@code{gt}, 
see @ref{Conditional operators}) to select all pixels in your image that are 
larger than a value of 100.
+But they will all have a value of 1, and you want to separate the various 
groups of pixels that are connected (for example, peaks of stars in your image).
 With the @code{connected-components} operator, you can give each connected 
region of the output of @code{gt} a separate integer label.
 
 @table @command
@@ -17333,7 +17334,7 @@ This operator will return a labeled dataset where the 
non-zero pixels in the inp
 
 The connectivity is a number between 1 and the number of dimensions in the 
dataset (inclusive).
 1 corresponds to the weakest (symmetric) connectivity between elements and the 
number of dimensions the strongest.
-For example on a 2D image, a connectivity of 1 corresponds to 4-connected 
neighbors and 2 corresponds to 8-connected neighbors.
+For example, on a 2D image, a connectivity of 1 corresponds to 4-connected 
neighbors and 2 corresponds to 8-connected neighbors.
 
 One example usage of this operator can be the identification of regions above 
a certain threshold, as in the command below.
 With this command, Arithmetic will first separate all pixels greater than 100 
into a binary image (where pixels with a value of 1 are above that value).
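 
 A sketch of that command (the threshold of 100 and a connectivity of 2 follow 
the description above):
 
 @example
 $ astarithmetic image.fits 100 gt 2 connected-components
 @end example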
@@ -17348,7 +17349,7 @@ If your input dataset does not have a binary type, but 
you know all its values a
 @item fill-holes
 Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
 This operator takes two operands (similar to @code{connected-components}): the 
second is the binary (0 or 1 valued) dataset to fill holes in and the first 
popped operand is the connectivity (to define a hole).
-Imagine that in your dataset there are some holes with zero value inside the 
objects with one value (for example the output of the thresholding example of 
@command{erode}) and you want to fill the holes:
+Imagine that in your dataset there are some holes with zero value inside the 
objects with one value (for example, the output of the thresholding example of 
@command{erode}) and you want to fill the holes:
 @example
 $ astarithmetic binary.fits 2 fill-holes
 @end example
@@ -17370,7 +17371,7 @@ $ astarithmetic image.fits invert
 
 @cindex Bitwise operators
 Astronomical images are usually stored as an array of multi-byte pixels with 
different sizes for different precision levels (see @ref{Numeric data types}).
-For example images from CCDs are usually in the unsigned 16-bit integer type 
(each pixel takes 16 bits, or 2 bytes, of memory) and fully reduced deep images 
have a 32-bit floating point type (each pixel takes 32 bits or 4 bytes).
+For example, images from CCDs are usually in the unsigned 16-bit integer type 
(each pixel takes 16 bits, or 2 bytes, of memory) and fully reduced deep images 
have a 32-bit floating point type (each pixel takes 32 bits or 4 bytes).
 
 On the other hand, during the data reduction, we need to preserve a lot of 
meta-data about some pixels.
 For example, if a cosmic ray had hit the pixel during the exposure, or if the 
pixel was saturated, or is known to have a problem, or if the optical 
vignetting is too strong on it, and so on.
@@ -17387,13 +17388,13 @@ So if a pixel is only affected by cosmic rays, it 
will have this sequence of bit
 The second bit shows that the pixel was saturated (@code{00000010}), the third 
bit shows that it has known problems (@code{00000100}) and the fourth bit shows 
that it was affected by vignetting (@code{00001000}).
 
 Since each bit is independent, we can thus mark multiple metadata about that 
pixel in the actual image, within a single ``flag'' or ``mask'' pixel of a flag 
or mask image that has the same number of pixels.
-For example a flag-pixel with the following bits @code{00001001} shows that it 
has been affected by cosmic rays @emph{and} it has been affected by vignetting 
at the same time.
+For example, a flag pixel with the following bits @code{00001001} shows that 
it has been affected by cosmic rays @emph{and} by vignetting at the same time.
 The common data type to store these flagging pixels is an unsigned integer 
type (see @ref{Numeric data types}).
 Therefore when you open an unsigned 8-bit flag image in a viewer like DS9, you 
will see a single integer in each pixel that actually has 8 layers of metadata 
in it!
-For example the integer you will see for the bit sequences given above will 
respectively be: @mymath{2^0=1} (for a pixel that only has cosmic ray), 
@mymath{2^1=2} (for a pixel that was only saturated), @mymath{2^2=4} (for a 
pixel that only has known problems), @mymath{2^3=8} (for a pixel that is only 
affected by vignetting) and @mymath{2^0 + 2^3 = 9} (for a pixel that has a 
cosmic ray @emph{and} was affected by vignetting).
+For example, the integers you will see for the bit sequences given above will 
respectively be: @mymath{2^0=1} (for a pixel that only has a cosmic ray), 
@mymath{2^1=2} (for a pixel that was only saturated), @mymath{2^2=4} (for a 
pixel that only has known problems), @mymath{2^3=8} (for a pixel that is only 
affected by vignetting) and @mymath{2^0 + 2^3 = 9} (for a pixel that has a 
cosmic ray @emph{and} was affected by vignetting).
 
 You can later use this bit information to mark objects in your final analysis 
or to mask certain pixels.
-For example you may want to set all pixels affected by vignetting to NaN, but 
can interpolate over cosmic rays.
+For example, you may want to set all pixels affected by vignetting to NaN, but 
interpolate over cosmic rays.
 You therefore need ways to separate the pixels with a desired flag(s) from the 
rest.
 It is possible to treat a flag pixel as a single integer (and try to define 
certain ranges in value to select certain flags).
 But a much easier and more robust way is to actually look at each pixel as a 
sequence of bits (not as a single integer!) and use the bitwise operators below 
for this job.
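 
 As a sketch of this approach (assuming a flag image @file{flag.fits} whose 
fourth bit marks vignetting, as in the example above), the command below sets 
all vignetting-affected pixels of @file{image.fits} to NaN:
 
 @example
 $ astarithmetic image.fits flag.fits 8 bitand 0 ne nan where
 @end example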
@@ -17403,34 +17404,34 @@ For more on the theory behind bitwise operators, see 
@url{https://en.wikipedia.o
 @table @command
 @item bitand
 Bitwise AND operator: only bits with values of 1 in both popped operands will 
get the value of 1, the rest will be set to 0.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitand} will give @code{00100000}.
 Note that the bitwise operators only work on integer type datasets.
 
 @item bitor
 Bitwise inclusive OR operator: The bits where at least one of the two popped 
operands has a 1 value get a value of 1, the others 0.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitand} will give @code{00101010}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitor} will give @code{00101010}.
 Note that the bitwise operators only work on integer type datasets.
 
 @item bitxor
 Bitwise exclusive OR operator: A bit will be 1 if it differs between the two 
popped operands.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitand} will give @code{00001010}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 00100010 bitxor} will give @code{00001010}.
 Note that the bitwise operators only work on integer type datasets.
 
 @item lshift
 Bitwise left shift operator: shift all the bits of the first operand to the 
left by a number of times given by the second operand.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 lshift} will give @code{10100000}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 lshift} will give @code{10100000}.
 This is equivalent to multiplication by 4.
 Note that the bitwise operators only work on integer type datasets.
 
 @item rshift
 Bitwise right shift operator: shift all the bits of the first operand to the 
right by a number of times given by the second operand.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 rshift} will give @code{00001010}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 2 rshift} will give @code{00001010}.
 Note that the bitwise operators only work on integer type datasets.
 
 @item bitnot
 Bitwise not (more formally known as one's complement) operator: flip all the 
bits of the popped operand (note that this is the only unary, or single 
operand, bitwise operator).
 In other words, any bit with a value of @code{0} is changed to @code{1} and 
vice-versa.
-For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 bitnot} will give @code{11010111}.
+For example (assuming numbers can be written as bit strings on the 
command-line): @code{00101000 bitnot} will give @code{11010111}.
 Note that the bitwise operators only work on integer type datasets/numbers.
 @end table
 
@@ -17477,7 +17478,7 @@ The internal conversion of C will be used.
 @item float32
 Convert the type of the popped operand to 32-bit (single precision) floating 
point (see @ref{Numeric data types}).
 The internal conversion of C will be used.
-For example if @file{f64.fits} is a 64-bit floating point image, and you want 
to store it as a 32-bit floating point image, you can use the command below 
(the second command is to show that the output file consumes half the storage)
+For example, if @file{f64.fits} is a 64-bit floating point image and you want 
to store it as a 32-bit floating point image, you can use the commands below 
(the second command is to show that the output file consumes half the storage):
 
 @example
 $ astarithmetic f64.fits float32 --output=f32.fits
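 # One possibility for the second command mentioned above (an assumption):
 $ ls -lh f64.fits f32.fits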
@@ -17492,7 +17493,7 @@ The internal conversion of C will be used.
 @node Random number generators, Elliptical shape operators, Numerical type 
conversion operators, Arithmetic operators
 @subsubsection Random number generators
 
-When you simulate data (for example see @ref{Sufi simulates a detection}), 
everything is ideal and there is no noise!
+When you simulate data (for example, see @ref{Sufi simulates a detection}), 
everything is ideal and there is no noise!
 The final step of the process is to add simulated noise to the data.
 The operators in this section are designed for that purpose.
 
@@ -17506,7 +17507,7 @@ When @option{--quiet} is not given, a statement will be 
printed on each invocati
 It will show the random number generator function and seed that was used in 
that invocation, see @ref{Generating random numbers}.
 Reproducibility of the outputs can be ensured with the @option{--envseed} 
option, see below for more.
 
-For example with the first command below, @file{image.fits} will be degraded 
by a noise of standard deviation 3 units.
+For example, with the first command below, @file{image.fits} will be degraded 
by noise with a standard deviation of 3 units.
 @example
 $ astarithmetic image.fits 3 mknoise-sigma
 @end example
@@ -17543,7 +17544,7 @@ But this feature has a side-effect: that if the order 
of calling the @code{mknoi
 If you need this feature please send us an email at 
@code{bug-gnuastro@@gnu.org} (to motivate us in its implementation).}.
 
 In case each data element should have an independent sigma, the first popped 
operand can be a dataset of the same size as the second.
-In this case, for each element, a different noise measure (for example sigma 
in @code{mknoise-sigma}) will be used.
+In this case, for each element, a different noise measure (for example, sigma 
in @code{mknoise-sigma}) will be used.
 
 @item mknoise-poisson
 @cindex Poisson noise
@@ -17554,11 +17555,11 @@ This operator takes two arguments: 1. the first 
popped operand (just before the
 @cindex Dark night
 @cindex Gray night
 @cindex Nights (dark or gray)
-Recall that the background values reported by observatories (for example to 
define dark or gray nights), or in papers, is usually reported in units of 
magnitudes per arcseconds square.
+Recall that the background values reported by observatories (for example, to 
define dark or gray nights), or in papers, are usually given in units of 
magnitudes per arcsecond squared.
 You need to do the conversion to counts per pixel manually.
 The conversion of magnitudes to counts is described below.
 For converting arcseconds squared to number of pixels, you can use the 
@option{--pixelscale} option of @ref{Fits}.
-For example @code{astfits image.fits --pixelscale}.
+For example: @code{astfits image.fits --pixelscale}.
 
 Except for the noise-model, this operator is very similar to 
@code{mknoise-sigma} and the examples there apply here too.
 The main difference with @code{mknoise-sigma} is that in a Poisson 
distribution the scatter/sigma will depend on each element's value.
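 
 A sketch of its usage (assuming the first popped operand is the background 
level in counts, here 100):
 
 @example
 $ astarithmetic image.fits 100 mknoise-poisson
 @end example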
@@ -17581,7 +17582,7 @@ This operator takes two arguments: the top/first popped 
operand is the width of
 The returned random values may happen to be the minimum interval value, but 
will never be the maximum.
 Except for the noise-model, this operator behaves very similar to 
@code{mknoise-sigma}, see the explanation there for more.
 
-For example with the command below, a random value will be selected between 10 
to 14 (centered on 12, which is the only input data element, with a total width 
of 4).
+For example, with the command below, a random value will be selected between 
10 and 14 (centered on 12, which is the only input data element, with a total 
width of 4).
 
 @example
 echo 12 | asttable -c'arith $1 4 mknoise-uniform'
@@ -17654,7 +17655,7 @@ $ aststatistics random.fits --asciihist 
--numasciibins=50
 As you see, the 10000 pixels in the image only have values 1, 2, 3 or 4 (which 
were the values in the bins column of @file{histogram.txt}), and the number of 
times each of these values occurs follows the @mymath{y=x^2} distribution.
 
 Generally, any value given in the bins column will be used for the final 
output values.
-For example in the command below (for generating a histogram from an 
analytical function), we are adding the bins by 20 (while keeping the same 
probability distribution of @mymath{y=x^2}).
+For example, in the command below (for generating a histogram from an 
analytical function), we are adding 20 to the bin values (while keeping the 
same probability distribution of @mymath{y=x^2}).
 If you re-run the Arithmetic command above after this, you will notice that 
the pixel values are now one of the following: 21, 22, 23 or 24 (instead of 1, 
2, 3, or 4).
 But the shape of the histogram of the resulting random distribution will be 
unchanged.
 
@@ -17811,7 +17812,7 @@ $ astarithmetic a.fits b.fits pa.fits 
box-around-ellipse \
 Finally, if you need to treat the width and height separately for further 
processing, you can call the @code{set-} operator two times afterwards, as 
shown below.
 Recall that the @code{set-} operator will pop the top operand, and put it in 
memory with a certain name, bringing the next operand to the top of the stack.
 
-For example let's assume @file{catalog.fits} has at least three columns 
@code{MAJOR}, @code{MINOR} and @code{PA} which specify the major axis, minor 
axis and position angle respectively.
+For example, let's assume @file{catalog.fits} has at least three columns 
@code{MAJOR}, @code{MINOR} and @code{PA}, which specify the major axis, minor 
axis and position angle, respectively.
 But you want the final width and height in 32-bit floating point numbers (not 
the default 64-bit, which may be too much precision in many scenarios).
 You can do this with the command below (note you can also break lines with 
@key{\}, within the single-quote environment)
 
@@ -17836,7 +17837,7 @@ However, in some situations, it is necessary to use 
certain columns of a table i
 @itemx load-col-%-from-%-hdu-%
 Load the requested column (first @command{%}) from the requested file (second 
@command{%}).
 If the file is a FITS file, it is also necessary to specify an HDU using the 
second form (where the HDU identifier is the third @command{%}).
-For example @command{load-col-MAG-from-catalog.fits-hdu-1} will load the 
@code{MAG} column from HDU 1 of @code{catalog.fits}.
+For example, @command{load-col-MAG-from-catalog.fits-hdu-1} will load the 
@code{MAG} column from HDU 1 of @code{catalog.fits}.
 
 For example, let's assume you have the following two tables, and you would 
like to add the first column of the first with the second:
 
@@ -17908,14 +17909,14 @@ It will keep the popped dataset in memory through a 
separate list of named datas
 That list will be used to add/copy any requested dataset to the main 
processing stack when the name is called.
 
 The name to give the popped dataset is part of the operator's name.
-For example the @code{set-a} operator of the command below, gives the name 
``@code{a}'' to the contents of @file{image.fits}.
+For example, the @code{set-a} operator of the command below gives the name 
``@code{a}'' to the contents of @file{image.fits}.
 This name is then used instead of the actual filename to multiply the dataset 
by two.
 
 @example
 $ astarithmetic image.fits set-a a 2 x
 @end example
 
-The name can be any string, but avoid strings ending with standard filename 
suffixes (for example @file{.fits})@footnote{A dataset name like @file{a.fits} 
(which can be set with @command{set-a.fits}) will cause confusion in the 
initial parser of Arithmetic.
+The name can be any string, but avoid strings ending with standard filename 
suffixes (for example, @file{.fits})@footnote{A dataset name like @file{a.fits} 
(which can be set with @command{set-a.fits}) will cause confusion in the 
initial parser of Arithmetic.
 It will assume this name is a FITS file, and if it is used multiple times, 
Arithmetic will abort, complaining that you have not provided enough HDUs.}.
 
 One example of the usefulness of this operator is in the @code{where} operator.
@@ -17950,7 +17951,7 @@ By default, any file that is given to this operator is 
deleted before Arithmetic
 The deletion can be deactivated with the @option{--dontdelete} option (as in 
all Gnuastro programs, see @ref{Input output options}).
 If the same FITS file is given to this operator multiple times, it will 
contain multiple extensions (in the same order that it was called).
 
-For example the operator @command{tofile-check.fits} will write the top 
operand to @file{check.fits}.
+For example, the operator @command{tofile-check.fits} will write the top 
operand to @file{check.fits}.
 Since it does not modify the operands stack, this operator is very convenient 
when you want to debug, or understand, a string of operators and operands 
given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to 
see what is happening behind the scenes without modifying the overall process.
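 
 As a sketch of this debugging pattern (the file names are illustrative), the 
command below saves the intermediate result of @command{image.fits 2 x} to 
@file{doubled.fits}, while the final output is still written normally:
 
 @example
 $ astarithmetic image.fits 2 x tofile-doubled.fits 10 + \
                 --output=final.fits
 @end example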
 
 @item tofilefree-AAA
@@ -18065,7 +18066,7 @@ HDU/extension to read the WCS within the file given to 
@option{--wcsfile}.
 For more, see the description of @option{--wcsfile}.
 
 @item --envseed
-Use the environment for the random number generator settings in operators that 
need them (for example @code{mknoise-sigma}).
+Use the environment for the random number generator settings in operators that 
need them (for example, @code{mknoise-sigma}).
 This is very important for obtaining reproducible results, for more see 
@ref{Generating random numbers}.
 
 @item -n STR
@@ -18120,7 +18121,7 @@ To force a number to be read as float, put a @code{.} 
after it (possibly followe
 Hence while @command{5} will be read as an integer, @command{5.}, 
@command{5.0} or @command{5f} will be added to the stack as @code{float} (see 
@ref{Reverse polish notation}).
 
 Unless otherwise stated (in @ref{Arithmetic operators}), the operators can 
deal with multiple numeric data types (see @ref{Numeric data types}).
-For example in ``@command{a.fits b.fits +}'', the image types can be 
@code{long} and @code{float}.
+For example, in ``@command{a.fits b.fits +}'', the image types can be 
@code{long} and @code{float}.
 In such cases, C's internal type conversion will be used.
 The output type will be set to the higher-ranking type of the two inputs.
 Unsigned integer types have smaller ranking than their signed counterparts and 
floating point types have higher ranking than the integer types.
@@ -18136,7 +18137,7 @@ for(i=0;i<100;++i) out[i]=a[i]+b[i];
 @noindent
 Relying on the default C type conversion significantly speeds up the 
processing and also requires less RAM (when using very large images).
 
-Some operators can only work on integer types (of any length, for example 
bitwise operators) while others only work on floating point types, (currently 
only the @code{pow} operator).
+Some operators can only work on integer types (of any length, for example, 
bitwise operators) while others only work on floating point types (currently 
only the @code{pow} operator).
 In such cases, if the operand type(s) are different, an error will be printed.
 Arithmetic also comes with internal type conversion operators which you can 
use to convert the data into the appropriate type, see @ref{Arithmetic 
operators}.
 
@@ -18259,7 +18260,7 @@ If it is not it is known as @emph{Correlation}.
 To be a weighted average, the sum of the weights (the pixels in the kernel) 
has to be unity.
 This will have the consequence that the convolved image of an object and 
unconvolved object will have the same brightness (see @ref{Brightness flux 
magnitude}), which is natural, because convolution should not eat up the object 
photons, it only disperses them.
 
-The convolution of each pixel is indendent of the others pixels, and in some 
cases it may be necessary to convolve different parts of an image separately 
(for example when you have different amplifiers on the CCD).
+The convolution of each pixel is independent of the other pixels, and in some 
cases it may be necessary to convolve different parts of an image separately 
(for example, when you have different amplifiers on the CCD).
 Therefore, to speed up spatial convolution, Gnuastro first tessellates the 
input into many tiles, and does the convolution in parallel on each tile.
 For more on how Gnuastro's programs create the tile grid (tessellation), see 
@ref{Tessellation}.
 
@@ -18334,7 +18335,7 @@ Therefore we have included the mathematical proofs and 
figures so you can have a
 Ever since ancient times, the circle has been (and still is) the simplest 
shape for abstract comprehension.
 All you need is a center point and a radius and you are done.
 All the points on a circle are at a fixed distance from the center.
-However, the moment you try to connect this elegantly simple and beautiful 
abstract construct (the circle) with the real world (for example compute its 
area or its circumference), things become really hard (ideally, impossible) 
because the irrational number @mymath{\pi} gets involved.
+However, the moment you try to connect this elegantly simple and beautiful 
abstract construct (the circle) with the real world (for example, compute its 
area or its circumference), things become really hard (ideally, impossible) 
because the irrational number @mymath{\pi} gets involved.
 
 The key to understanding the Fourier series (thus the Fourier transform and 
finally the Discrete Fourier Transform) is our ancient desire to express 
everything in terms of circles or the most exceptionally simple and elegant 
abstract human construct.
 Most people prefer to say the same thing in a more ahistorical manner: to 
break a function into sines and cosines.
@@ -18374,7 +18375,7 @@ The main reason we are giving this historical 
background which might appear off
 They offer no physical insight.
 The astronomers who were involved with the Ptolemaic world view had to add a 
huge number of epicycles during the centuries after Ptolemy in order to explain 
more accurate observations.
 Finally the death knell of this world-view was Galileo's observations with his 
new instrument (the telescope).
-So the physical insight, which is what Astronomers and Physicists are 
interested in (as opposed to Mathematicians and Engineers who just like proving 
and optimizing or calculating!) comes from being creative and not limiting our 
selves to such approximations.
+So the physical insight, which is what Astronomers and Physicists are 
interested in (as opposed to Mathematicians and Engineers who just like proving 
and optimizing or calculating!) comes from being creative and not limiting 
ourselves to such approximations.
 Even when they work.
 
 @node Circles and the complex plane, Fourier series, Fourier series historical 
background, Frequency domain and Fourier operations
@@ -18394,12 +18395,12 @@ At each point in time, we take the vertical 
coordinate of the point and use it t
 @cindex Abraham de Moivre
 @cindex de Moivre, Abraham
 Leonhard Euler@footnote{Other forms of this equation were known before Euler.
-For example in 1707 A.D. (the year of Euler's birth) Abraham de Moivre (1667 
-- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
+For example, in 1707 A.D. (the year of Euler's birth), Abraham de Moivre (1667 
-- 1754 A.D.) showed that @mymath{(\cos{x}+i\sin{x})^n=\cos(nx)+i\sin(nx)}.
 In 1714 A.D., Roger Cotes (1682 -- 1716 A.D. a colleague of Newton who 
proofread the second edition of Principia) showed that: 
@mymath{ix=\ln(\cos{x}+i\sin{x})}.}  (1707 -- 1783 A.D.)  showed that the 
complex exponential (@mymath{e^{iv}} where @mymath{v} is real) is periodic and 
can be written as: @mymath{e^{iv}=\cos{v}+i\sin{v}}.
 Therefore @mymath{e^{i(v+2\pi)}=e^{iv}}.
 Later, Caspar Wessel (mathematician and cartographer 1745 -- 1818 A.D.)  
showed how complex numbers can be displayed as vectors on a plane.
 Euler's identity might seem counter intuitive at first, so we will try to 
explain it geometrically (for deeper physical insight).
-On the real-imaginary 2D plane (like the left hand plot in each box of 
@ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as 
rotating the point by @mymath{90} degrees (for example the value @mymath{3} on 
the real axis becomes @mymath{3i} on the imaginary axis).
+On the real-imaginary 2D plane (like the left hand plot in each box of 
@ref{iandtime}), multiplying a number by @mymath{i} can be interpreted as 
rotating the point by @mymath{90} degrees (for example, the value @mymath{3} on 
the real axis becomes @mymath{3i} on the imaginary axis).
 On the other hand, @mymath{e\equiv\lim_{n\rightarrow\infty}(1+{1\over n})^n}, 
therefore, defining @mymath{m\equiv nu}, we get:
 
 @dispmath{e^{u}=\lim_{n\rightarrow\infty}\left(1+{1\over n}\right)^{nu}
@@ -18445,7 +18446,7 @@ However to make things easier to understand, here we 
will assume that the signal
 Also for this section and the next (@ref{Fourier transform}) we will be 
talking about the signal before it is digitized or pixelated.
 Let's assume that we have the continuous function @mymath{f(l)} which is 
integrable in the interval @mymath{[l_0, l_0+L]} (always true in practical 
cases like images).
 Take @mymath{l_0} as the position of the first pixel in the assumed row of the 
image and @mymath{L} as the width of the image along that row.
-The units of @mymath{l_0} and @mymath{L} can be in any spatial units (for 
example meters) or an angular unit (like radians) multiplied by a fixed 
distance which is more common.
+The units of @mymath{l_0} and @mymath{L} can be any spatial unit (for 
example, meters) or, more commonly, an angular unit (like radians) multiplied 
by a fixed distance.
 
 To approximate @mymath{f(l)} over this interval, we need to find a set of 
frequencies and their corresponding `magnitude's (see @ref{Circles and the 
complex plane}).
 Therefore our aim is to show @mymath{f(l)} as the following sum of periodic 
functions:
@@ -18777,7 +18778,7 @@ Such a digitized (sampled) data set is thus called 
@emph{over-sampled}.
 When @mymath{2\pi/P=\Delta\omega}, @mymath{P} is just small enough to finely 
separate even the largest frequencies in the input signal and thus it is known 
as @emph{critically-sampled}.
 Finally if @mymath{2\pi/P<\Delta\omega} we are dealing with an 
@emph{under-sampled} data set.
 In an under-sampled data set, the separate copies of @mymath{F(\omega)} are 
going to overlap and this will deprive us of recovering high constituent 
frequencies of @mymath{f(l)}.
-The effects of under-sampling in an image with high rates of change (for 
example a brick wall imaged from a distance) can clearly be visually seen and 
is known as @emph{aliasing}.
+The effects of under-sampling in an image with high rates of change (for 
example, a brick wall imaged from a distance) can clearly be seen visually and 
are known as @emph{aliasing}.
 
 When the input @mymath{f(l)} is composed of a finite range of frequencies, 
@mymath{f(l)} is known as a @emph{band-limited} function.
 The example in @ref{samplingfreq}(left) was a nice demonstration of such a 
case: for all @mymath{\omega<-\omega_m} or @mymath{\omega>\omega_m}, we have 
@mymath{F(\omega)=0}.
@@ -19061,7 +19062,7 @@ For the full list of options shared by all Gnuastro 
programs, please see @ref{Co
 In particular, in the spatial domain, on a multi-dimensional datasets, 
convolve uses Gnuastro's tessellation to speed up the run, see 
@ref{Tessellation}.
 Common options related to tessellation are described in @ref{Processing 
options}.
 
-1-dimensional datasets (for example spectra) are only read as columns within a 
table (see @ref{Tables} for more on how Gnuastro programs read tables).
+1-dimensional datasets (for example, spectra) are only read as columns within 
a table (see @ref{Tables} for more on how Gnuastro programs read tables).
 Note that currently 1D convolution is only implemented in the spatial domain 
and thus kernel-matching is also not supported.
 
 Here we will only explain the options particular to Convolve.
@@ -19190,7 +19191,7 @@ Here we specifically mean the pixel grid transformation 
which is better conveyed
 
 @cindex Gravitational lensing
 Image warping is a very important step in astronomy, both in observational 
data analysis and in simulating modeled images.
-In modeling, warping an image is necessary when we want to apply grid 
transformations to the initial models, for example in simulating gravitational 
lensing.
+In modeling, warping an image is necessary when we want to apply grid 
transformations to the initial models, for example, in simulating gravitational 
lensing.
 Observational reasons for warping an image are listed below:
 
 @itemize
@@ -19299,7 +19300,7 @@ However they are limited to mapping the point 
@mymath{[\matrix{0&0}]} to @mymath
 Therefore they are useless if you want one coordinate to be shifted compared 
to the other one.
 They are also space invariant, meaning that all the coordinates in the image 
will receive the same transformation.
 In other words, all the pixels in the output image will have the same area if 
placed over the input image.
-So transformations which require varying output pixel sizes like projections 
cannot be applied through this @mymath{2\times2} matrix either (for example for 
the tilted ACS and WFC3 camera detectors on board the Hubble space telescope).
+So transformations which require varying output pixel sizes like projections 
cannot be applied through this @mymath{2\times2} matrix either (for example, 
for the tilted ACS and WFC3 camera detectors on board the Hubble Space 
Telescope).
 
 @cindex M@"obius, August. F.
 @cindex Homogeneous coordinates
@@ -19364,7 +19365,7 @@ Note that since points are defined as row vectors 
there, the matrix is the trans
 @cindex Non-commutative operations
 @cindex Operations, non-commutative
 In @ref{Linear warping basics} we saw how a basic warp/transformation can be 
represented with a matrix.
-To make more complex warpings (for example to define a translation, rotation 
and scale as one warp) the individual matrices have to be multiplied through 
matrix multiplication.
+To make more complex warpings (for example, to define a translation, rotation 
and scale as one warp), the individual matrices have to be multiplied through 
matrix multiplication.
 However matrix multiplication is not commutative, so the order of the set of 
matrices you use for the multiplication is going to be very important.
 
 The first warping should be placed as the left-most matrix.
@@ -19424,9 +19425,9 @@ To increase the accuracy, you might also sample more 
than one point from within
 @cindex Edges, image
 However, interpolation has several problems.
 The first one is that it will depend on the type of function you want to 
assume for the interpolation.
-For example you can choose a bi-linear or bi-cubic (the `bi's are for the 2 
dimensional nature of the data) interpolation method.
+For example, you can choose a bi-linear or bi-cubic (the `bi's are for the 
2-dimensional nature of the data) interpolation method.
 For the latter there are various ways to set the constants@footnote{see 
@url{http://entropymine.com/imageworsener/bicubic/} for a nice introduction.}.
-Such parametric interpolation functions can fail seriously on the edges of an 
image, or when there is a sharp change in value (for example the bleeding 
saturation of bright stars in astronomical CCDs).
+Such parametric interpolation functions can fail seriously on the edges of an 
image, or when there is a sharp change in value (for example, the bleeding 
saturation of bright stars in astronomical CCDs).
 They will also need normalization so that the flux of the objects before and 
after the warping is comparable.
 
 The parametric nature of these methods adds a level of subjectivity to the 
data (it makes more assumptions through the functions than the data can handle).
@@ -19450,14 +19451,14 @@ Since it involves the mixing of the input's pixel 
values, this pixel mixing meth
 Therefore, after comparing the input and output, you will notice that the 
output is slightly smoothed, thus boosting the more diffuse signal, but 
creating correlated noise.
 In astronomical imaging the correlated noise will be decreased later when you 
stack many exposures.
 
-If there are very high spatial-frequency signals in the image (for example 
fringes) which vary on a scale @emph{smaller than} your output image pixel size 
(this is rarely the case in astronomical imaging), pixel mixing can cause 
ailiasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
+If there are very high spatial-frequency signals in the image (for example, 
fringes) which vary on a scale @emph{smaller than} your output image pixel size 
(this is rarely the case in astronomical imaging), pixel mixing can cause 
aliasing@footnote{@url{http://en.wikipedia.org/wiki/Aliasing}}.
 Therefore, in case such fringes are present, they have to be calculated and 
removed separately (which would naturally be done in any astronomical reduction 
pipeline).
 Because of the PSF, no astronomical target has a sharp change in their signal.
 Thus this issue is less important for astronomical applications, see @ref{PSF}.
 
 To find the overlap area of the output pixel over the input pixels, we need to 
define polygons and clip them (find the overlap).
 Usually, it is sufficient to define a pixel with a four-vertex polygon.
-However, when a non-linear distortion (for example @code{SIP} or @code{TPV}) 
is present and the distortion is significant over an output pixel's size 
(usually far from the reference point), the shadow of the output pixel on the 
input grid can be curved.
+However, when a non-linear distortion (for example, @code{SIP} or @code{TPV}) 
is present and the distortion is significant over an output pixel's size 
(usually far from the reference point), the shadow of the output pixel on the 
input grid can be curved.
 To account for such cases (which can only happen when correcting for 
non-linear distortions), Warp has the @option{--edgesampling} option to sample 
the output pixel over more vertices.
 For more, see the description of this option in @ref{Align pixels with WCS and 
remove distortions}.
 
@@ -19466,7 +19467,7 @@ For more, see the description of this option in 
@ref{Align pixels with WCS and r
 
 Warp an input image into a new pixel grid by pixel mixing (see 
@ref{Resampling}).
 Without any options, Warp will remove any non-linear distortions from the 
image and align the output pixel coordinates to its WCS coordinates.
-Any homographic warp (for example scaling, rotation, translation, projection, 
see @ref{Linear warping basics}) can also be done by calling the relevant 
option explicitly.
+Any homographic warp (for example, scaling, rotation, translation, projection, 
see @ref{Linear warping basics}) can also be done by calling the relevant 
option explicitly.
 The general template for invoking Warp is:
 
 @example
@@ -19739,7 +19740,7 @@ For example, you have a mock image that doesn't (yet!) 
have its mock WCS, and it
 In this scenario, you can use the @option{--scale} option to under-sample it 
to your desired resolution.
 This is similar to the @ref{Sufi simulates a detection} tutorial.
 
-Linear warps must be specified as command-line options, either as (possibly 
multiple) modular warpings (for example @option{--rotate}, or 
@option{--scale}), or directly as a single raw matrix (with @option{--matrix}).
+Linear warps must be specified as command-line options, either as (possibly 
multiple) modular warpings (for example, @option{--rotate}, or 
@option{--scale}), or directly as a single raw matrix (with @option{--matrix}).
 If specified together, the latter (direct matrix) will take precedence and all 
the modular warpings will be ignored.
 Any number of modular warpings can be specified on the command-line and 
configuration files.
 If more than one modular warping is given, all will be merged to create one 
warping matrix.
@@ -19757,8 +19758,8 @@ See the examples above as a demonstration.
 Based on the FITS standard, integer values are assigned to the center of a 
pixel and the coordinate [1.0, 1.0] is the center of the first pixel (bottom 
left of the image when viewed in SAO DS9).
 So the coordinate center [0.0, 0.0] is half a pixel away (in each axis) from 
the bottom left vertex of the first pixel.
 The resampling that is done in Warp (see @ref{Resampling}) is done on the 
coordinate axes and thus directly depends on the coordinate center.
-In some situations this if fine, for example when rotating/aligning a real 
image, all the edge pixels will be similarly affected.
-But in other situations (for example when scaling an over-sampled mock image 
to its intended resolution, this is not desired: you want the center of the 
coordinates to be on the corner of the pixel.
+In some situations this is fine, for example, when rotating/aligning a real 
image, all the edge pixels will be similarly affected.
+But in other situations (for example, when scaling an over-sampled mock image 
to its intended resolution), this is not desired: you want the center of the 
coordinates to be on the corner of the pixel.
 In such cases, you can use the @option{--centeroncorner} option which will 
shift the center by @mymath{0.5} before the main warp, then shift it back by 
@mymath{-0.5} after the main warp.
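
For instance, a minimal sketch of this scenario (assuming a hypothetical 
over-sampled mock image @file{mock.fits} that should be scaled down by a 
factor of 5) may look like this:

@example
$ astwarp mock.fits --scale=0.2 --centeroncorner
@end example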
 
 @table @option
@@ -19818,7 +19819,7 @@ The transformation matrix can be either a 2 by 2 (4 
numbers), or a 3 by 3 (9 num
 In the former case (if a 2 by 2 matrix is given), it is put into a 3 by 3 
matrix (see @ref{Linear warping basics}).
 
 @cindex NaN
-The determinant of the matrix has to be non-zero and it must not contain any 
non-number values (for example infinities or NaNs).
+The determinant of the matrix has to be non-zero and it must not contain any 
non-number values (for example, infinities or NaNs).
 The elements of the matrix have to be written row by row.
 So for the general Homography matrix of @ref{Linear warping basics}, it should 
be called with @command{--matrix=a,b,c,d,e,f,g,h,1}.
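
For instance, assuming the simple case of scaling both axes by 0.2 (the file 
name here is hypothetical), the 2 by 2 form may be called like this:

@example
$ astwarp image.fits --matrix=0.2,0,0,0.2
@end example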
 
@@ -19835,7 +19836,7 @@ See the explanation above for coordinates in the FITS 
standard to better underst
 @cindex World Coordinate System
 Do not correct the WCS information of the input image and save it untouched to 
the output image.
 By default the WCS (World Coordinate System) information of the input image is 
going to be corrected in the output image so the objects in the image are at 
the same WCS coordinates.
-But in some cases it might be useful to keep it unchanged (for example to 
correct alignments).
+But in some cases it might be useful to keep it unchanged (for example, to 
correct alignments).
 @end table
 
 
@@ -19861,7 +19862,7 @@ But in some cases it might be useful to keep it 
unchanged (for example to correc
 @chapter Data analysis
 
 Astronomical datasets (images or tables) contain very valuable information; 
the tools in this section can help in analyzing, extracting, and quantifying 
that information.
-For example getting general or specific statistics of the dataset (with 
@ref{Statistics}), detecting signal within a noisy dataset (with 
@ref{NoiseChisel}), or creating a catalog from an input dataset (with 
@ref{MakeCatalog}).
+For example, getting general or specific statistics of the dataset (with 
@ref{Statistics}), detecting signal within a noisy dataset (with 
@ref{NoiseChisel}), or creating a catalog from an input dataset (with 
@ref{MakeCatalog}).
 
 @menu
 * Statistics::                  Calculate dataset statistics.
@@ -19923,7 +19924,7 @@ This occurs because on the horizontal axis, there is 
little change while on the
 Normalizing a cumulative frequency plot means to divide each index (y axis) by 
the total number of data points (or the last value).
 
 Unlike the histogram which has a limited number of bins, ideally the 
cumulative frequency plot should have one point for every data element.
-Even in small datasets (for example a @mymath{200\times200} image) this will 
result in an unreasonably large number of points to plot (40000)! As a result, 
for practical reasons, it is common to only store its value on a certain number 
of points (intervals) in the input range rather than the whole dataset, so you 
should determine the number of bins you want when asking for a cumulative 
frequency plot.
+Even in small datasets (for example, a @mymath{200\times200} image) this will 
result in an unreasonably large number of points to plot (40000)! As a result, 
for practical reasons, it is common to only store its value on a certain number 
of points (intervals) in the input range rather than the whole dataset, so you 
should determine the number of bins you want when asking for a cumulative 
frequency plot.
 In Gnuastro (and thus the Statistics program), the number reported for each 
bin is the total number of data points until the larger interval value for that 
bin.
 You can see an example histogram and cumulative frequency plot of a single 
dataset under the @option{--asciihist} and @option{--asciicfp} options of 
@ref{Invoking aststatistics}.
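
For instance, a quick look at both plots on the command-line may be obtained 
with the hedged sketches below (the file name is hypothetical):

@example
$ aststatistics image.fits --asciihist
$ aststatistics image.fits --asciicfp
@end example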
 
@@ -19940,14 +19941,14 @@ But when the range is determined from the actual 
dataset (none of these options
 In @ref{Histogram and Cumulative Frequency Plot} the concept of histograms 
was introduced on a single dataset.
 But they are only useful for viewing the distribution of a single variable 
(column in a table).
 In many contexts, the distribution of two variables in relation to each other 
may be of interest.
-For example the color-magnitude diagrams in astronomy, where the horizontal 
axis is the luminosity or magnitude of an object, and the vertical axis is the 
color.
+For example, the color-magnitude diagrams in astronomy, where the horizontal 
axis is the luminosity or magnitude of an object, and the vertical axis is the 
color.
 Scatter plots are useful to see these relations between the objects of 
interest when the number of the objects is small.
 
 As the density of points in the scatter plot increases, the points will fall 
over each other, forming a large connected region that hides potentially 
interesting behaviors/correlations in the densest regions.
 This is where 2D histograms can become very useful.
 A 2D histogram is composed of 2D bins (boxes or pixels), just as a 1D 
histogram consists of 1D bins (lines).
 The number of points falling within each box/pixel will then be the value of 
that box.
-Added with a color-bar, you can now clearly see the distribution independent 
of the density of points (for example you can even visualize it in log-scale if 
you want).
+Combined with a color-bar, you can now clearly see the distribution 
independently of the density of points (for example, you can even visualize it 
in log-scale if you want).
 
 Gnuastro's Statistics program has the @option{--histogram2d} option for this 
task.
 It takes a single argument (either @code{table} or @code{image}) that 
specifies the format of the output 2D histogram.
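
For instance, a minimal sketch with two hypothetical column names may look 
like this (producing the 2D histogram as a table that is ready for plotting):

@example
$ aststatistics table.fits -cCOL1,COL2 --histogram2d=table
@end example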
@@ -20067,7 +20068,7 @@ $ pdflatex --shell-escape --halt-on-error report.tex
 
 @noindent
 You can now open @file{report.pdf} to see your very high quality 2D histogram 
within your text.
-And if you need the plots separately (for example for slides), you can take 
the PDF inside the @file{tikz/} directory.
+And if you need the plots separately (for example, for slides), you can take 
the PDF inside the @file{tikz/} directory.
 
 @node 2D histogram as an image,  , 2D histogram as a table for plotting, 2D 
Histograms
 @subsubsection 2D histogram as an image
@@ -20075,12 +20076,12 @@ And if you need the plots separately (for example for 
slides), you can take the
 When called with the @option{--histogram2d=image} option, Statistics will output 
a FITS file with an image/array extension.
 If you asked for @option{--numbins=N} and @option{--numbins2=M} the image will 
have a size of @mymath{N\times M} pixels (one pixel per 2D bin).
 Also, the FITS image will have a linear WCS that is scaled to the 2D bin size 
along each dimension.
-So when you hover your mouse over any part of the image with a FITS viewer 
(for example SAO DS9), besides the number of points in each pixel, you can 
directly also see ``coordinates'' of the pixels along the two axes.
+So when you hover your mouse over any part of the image with a FITS viewer 
(for example, SAO DS9), besides the number of points in each pixel, you can 
directly also see ``coordinates'' of the pixels along the two axes.
 You can also use the optimized and fast FITS viewer features for many aspects 
of visually inspecting the distributions (which we will not go into further).
 
 @cindex Color-magnitude diagram
 @cindex Diagram, Color-magnitude
-For example let's assume you want to derive the color-magnitude diagram (CMD) 
of the @url{http://uvudf.ipac.caltech.edu, UVUDF survey}.
+For example, let's assume you want to derive the color-magnitude diagram (CMD) 
of the @url{http://uvudf.ipac.caltech.edu, UVUDF survey}.
 You can run the first command below to download the table with magnitudes of 
objects in many filters and run the second command to see general column 
metadata after it is downloaded.
 
 @example
@@ -20112,7 +20113,7 @@ aststatistics cmd.fits -cMAG_F775W,F606W-F775W 
--histogram2d=image \
 @end example
 
 @noindent
-If you have SAO DS9, you can now open this FITS file as a normal FITS image, 
for example with the command below.
+If you have SAO DS9, you can now open this FITS file as a normal FITS image, 
for example, with the command below.
 Try hovering/zooming over the pixels: not only will you see the number of 
objects in the UVUDF catalog that fall in each bin, but also the 
@code{F775W} magnitude and color of that pixel.
 
 @example
@@ -20140,8 +20141,8 @@ $ ds9 cmd-2d-hist.fits -cmap sls -zoom 4 -grid yes \
 This is good for a fast progress update.
 But for your paper or more official report, you want to show something with 
higher quality.
 For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same 
font as your text, sharp grids and many other elegant/powerful features (like 
over-plotting interesting points, lines, etc.).
-But to load the 2D histogram into PGFPlots first you need to convert the FITS 
image into a more standard format, for example PDF.
-We Will use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
+But to load the 2D histogram into PGFPlots first you need to convert the FITS 
image into a more standard format, for example, PDF.
+We will use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
 
 @example
 $ astconvertt cmd-2d-hist.fits --colormap=sls-inverse \
@@ -20502,15 +20503,15 @@ For a description of the outlier rejection methods, 
see the @url{https://www.gnu
 One of the most important aspects of a dataset is its reference value: the 
value of the dataset where there is no signal.
 Without knowing, and thus removing the effect of, this value, it is impossible 
to compare the derived results of many high-level analyses over the dataset 
with other datasets (in the attempt to associate our results with the ``real'' 
world).
 
-In astronomy, this reference value is known as the ``Sky'' value: the value 
that noise fluctuates around: where there is no signal from detectable objects 
or artifacts (for example galaxies, stars, planets or comets, star spikes or 
internal optical ghost).
+In astronomy, this reference value is known as the ``Sky'' value: the value 
that noise fluctuates around: where there is no signal from detectable objects 
or artifacts (for example, galaxies, stars, planets or comets, star spikes or 
internal optical ghosts).
 Depending on the dataset, the Sky value may be a fixed value over the whole 
dataset, or it may vary based on location.
 For an example of the latter case, see Figure 11 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
 
 Because of the significance of the Sky value in astronomical data analysis, we 
have devoted this subsection to it for a thorough review.
 We start with a detailed discussion of its definition (@ref{Sky value 
definition}).
 In the astronomical literature, researchers use a variety of methods to 
estimate the Sky value, so in @ref{Sky value misconceptions} we review those 
and discuss their biases.
-From the definition of the Sky value, the most accurate way to estimate the 
Sky value is to run a detection algorithm (for example @ref{NoiseChisel}) over 
the dataset and use the undetected pixels.
-However, there is also a more crude method that maybe useful when good direct 
detection is not initially possible (for example due to too many cosmic rays in 
a shallow image).
+From the definition of the Sky value, the most accurate way to estimate the 
Sky value is to run a detection algorithm (for example, @ref{NoiseChisel}) over 
the dataset and use the undetected pixels.
+However, there is also a more crude method that may be useful when good direct 
detection is not initially possible (for example, due to too many cosmic rays 
in a shallow image).
 A more crude (but simpler) method that is usable in such situations is 
discussed in @ref{Quantifying signal in a tile}.
 
 @menu
@@ -20550,7 +20551,7 @@ The total flux in this pixel (@mymath{T_i}) can thus be 
written as:
 @cindex Cosmic ray removal
 @noindent
 By definition, @mymath{D_i} is detected and it can be assumed that it is 
correctly estimated (deblended) and subtracted; we can thus set @mymath{D_i=0}.
-There are also methods to detect and remove cosmic rays, for example the 
method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001).
+There are also methods to detect and remove cosmic rays, for example, the 
method described in van Dokkum (2001)@footnote{van Dokkum, P. G. (2001).
 Publications of the Astronomical Society of the Pacific. 113, 1420.}, or by 
comparing multiple exposures.
 This allows us to set @mymath{C_i=0}.
 Note that in practice, @mymath{D_i} and @mymath{U_i} are correlated, because 
they both directly depend on the detection algorithm and its input parameters.
@@ -20563,7 +20564,7 @@ With these limitations in mind, the observed Sky value 
for this pixel (@mymath{S
 @noindent
 Therefore, as the detection process (algorithm and input parameters) becomes 
more accurate, or @mymath{U_i\to0}, the Sky value will tend to the background 
value or @mymath{S_i\to B_i}.
 Hence, we see that while @mymath{B_i} is an inherent property of the data 
(pixel in an image), @mymath{S_i} depends on the detection process.
-Over a group of pixels, for example in an image or part of an image, this 
equation translates to the average of undetected pixels 
(Sky@mymath{=\sum{S_i}}).
+Over a group of pixels, for example, in an image or part of an image, this 
equation translates to the average of undetected pixels 
(Sky@mymath{=\sum{S_i}}).
 With this definition of Sky, the object flux in the data can be calculated, 
per pixel, with
 
 @dispmath{ T_{i}=S_{i}+O_{i} \quad\rightarrow\quad
@@ -20629,9 +20630,9 @@ Therefore all such approaches that try to approximate 
the sky value prior to det
 @subsubsection Quantifying signal in a tile
 
 In order to define detection thresholds on the image, or calibrate it for 
measurements (subtract the signal of the background sky and define errors), we 
need some basic measurements.
-For example the quantile threshold in NoiseChisel (@option{--qthresh} option), 
or the mean of the undetected regions (Sky) and the Sky standard deviation (Sky 
STD) which are the output of NoiseChisel and Statistics.
+For example, the quantile threshold in NoiseChisel (@option{--qthresh} 
option), or the mean of the undetected regions (Sky) and the Sky standard 
deviation (Sky STD) which are the output of NoiseChisel and Statistics.
 But astronomical images will contain a lot of stars and galaxies that will 
bias those measurements if not properly accounted for.
-Quantifying where signal is present is thus a very important step in the usage 
of a dataset: for example if the Sky level is over-estimated, your target 
object's brightness will be under-estimated.
+Quantifying where signal is present is thus a very important step in the usage 
of a dataset: for example, if the Sky level is over-estimated, your target 
object's brightness will be under-estimated.
 
 @cindex Data
 @cindex Noise
@@ -20644,7 +20645,7 @@ In astronomical images, signal is mostly photons coming 
from a star or galaxy, and
 Noise is mainly due to counting errors in the detector electronics upon data 
collection.
 @emph{Data} is defined as the combination of signal and noise (so a noisy 
image of a galaxy is one @emph{data}set).
 
-When a dataset does not have any signal (for example you take an image with a 
closed shutter, producing an image that only contains noise), the mean, median 
and mode of the distribution are equal within statistical errors.
+When a dataset does not have any signal (for example, you take an image with a 
closed shutter, producing an image that only contains noise), the mean, median 
and mode of the distribution are equal within statistical errors.
 Signal from emitting objects, like astronomical targets, always has a positive 
value and will never become negative, see Figure 1 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
 Therefore, when signal is added to the data (you take an image with an open 
shutter pointing to a galaxy for example), the mean, median and mode of the 
dataset shift to the positive, creating a positively skewed distribution.
 The shift of the mean is the largest.
@@ -20660,12 +20661,12 @@ This definition of skewness through the quantile of 
the mean is further introduc
 
 @cindex Signal-to-noise ratio
 However, in an astronomical image, some of the pixels will contain more signal 
than the rest, so we cannot simply check @mymath{q_{mean}} on the whole dataset.
-For example if we only look at the patch of pixels that are placed under the 
central parts of the brightest stars in the field of view, @mymath{q_{mean}} 
will be very high.
-The signal in other parts of the image will be weaker, and in some parts it 
will be much smaller than the noise (for example 1/100-th of the noise level).
+For example, if we only look at the patch of pixels that are placed under the 
central parts of the brightest stars in the field of view, @mymath{q_{mean}} 
will be very high.
+The signal in other parts of the image will be weaker, and in some parts it 
will be much smaller than the noise (for example, 1/100-th of the noise level).
 When the signal-to-noise ratio is very small, we can generally assume no 
signal (because it is effectively impossible to measure it) and @mymath{q_{mean}} 
will be approximately 0.5.
 
 To address this problem, we break the image into a grid of tiles@footnote{The 
options to customize the tessellation are discussed in @ref{Processing 
options}.} (see @ref{Tessellation}).
-For example a tile can be a square box of size @mymath{30\times30} pixels.
+For example, a tile can be a square box of size @mymath{30\times30} pixels.
 By measuring @mymath{q_{mean}} on each tile, we can find which tiles 
contain significant signal and ignore them.
 Technically, if a tile's @mymath{|q_{mean}-0.5|} is larger than the value 
given to the @option{--meanmedqdiff} option, that tile will be ignored for the 
next steps.
 You can read this option as ``mean-median-quantile-difference''.
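
For instance, a sketch of a stricter-than-usual check may look like the 
command below (the file name and value are only illustrative; @option{--sky} 
is used here just to trigger the tile-based measurement):

@example
$ aststatistics image.fits --sky --meanmedqdiff=0.005
@end example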
@@ -20681,7 +20682,7 @@ Therefore, to obtain an even better measure of the 
presence of signal in a tile,
 There is one final hurdle: raw astronomical datasets are commonly peppered 
with cosmic rays.
 Images of cosmic rays are not smoothed by the atmosphere or telescope 
aperture, so they have sharp boundaries.
 Also, since they do not occupy too many pixels, they do not affect the mode 
and median calculation.
-But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa (2015)}.
+But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example, see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa (2015)}.
 The effect of outliers like cosmic rays on the mean and standard deviation can 
be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a 
complete explanation.
 
 Therefore, after asserting that the mean and median are approximately equal in 
a tile (see @ref{Tessellation}), the Sky and its STD are measured on each tile 
after @mymath{\sigma}-clipping with the @option{--sigmaclip} option (see 
@ref{Sigma clipping}).
@@ -20691,7 +20692,7 @@ But we need a measurement over each tile (and thus 
pixel).
 We will therefore use interpolation to assign a value to the NaN tiles.
 
 However, prior to interpolating over the failed tiles, another point should be 
considered: large and extended galaxies, or bright stars, have wings which sink 
into the noise very gradually.
-In some cases, the gradient over these wings can be on scales that is larger 
than the tiles (for example the pixel value changes by @mymath{0.1\sigma} after 
100 pixels, but the tile has a width of 30 pixels).
+In some cases, the gradient over these wings can be on scales larger 
than the tiles (for example, the pixel value changes by @mymath{0.1\sigma} 
after 100 pixels, but the tile has a width of 30 pixels).
 
 In such cases, the @mymath{q_{mean}} test will be successful, even though 
there is signal.
 Recall that @mymath{q_{mean}} is a measure of skewness.
@@ -20739,7 +20740,7 @@ You can set @mymath{N_{ngb}} through the 
@option{--interpnumngb} option.
 Once all the tiles are given a value, a smoothing step is implemented to 
remove the sharp value contrast that can happen on the edges of tiles.
 The size of the smoothing box is set with the @option{--smoothwidth} option.
 
-As mentioned above, the process above is used for any of the basic 
measurements (for example identifying the quantile-based thresholds in 
NoiseChisel, or the Sky value in Statistics).
+As mentioned above, this process is used for any of the basic 
measurements (for example, identifying the quantile-based thresholds in 
NoiseChisel, or the Sky value in Statistics).
 You can use the check-image feature of NoiseChisel or Statistics to visually 
inspect each of these steps (all the options that start with 
@option{--check}).
 For example, as mentioned in the @ref{NoiseChisel optimization} tutorial, when 
given a dataset from a new instrument (with differing noise properties), we 
highly recommend using @option{--checkqthresh} in your first call and visually 
inspect how the parameters above affect the final quantile threshold (e.g., 
have the wings of bright sources leaked into the threshold?).
 The same goes for the @option{--checksky} option of Statistics or NoiseChisel.
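
For instance, sketches of such check calls may look like the commands below 
(the file name is hypothetical):

@example
$ astnoisechisel image.fits --checkqthresh
$ aststatistics image.fits --sky --checksky
@end example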
@@ -20851,7 +20852,7 @@ Histogram:
 @end example
 
 Gnuastro's Statistics is a very general purpose program, so to be able to 
easily understand this diversity in its operations (and how to possibly run 
them together), we will divide the operations into two types: those that do 
not respect the position of the elements and those that do (by tessellating the 
input on a tile grid, see @ref{Tessellation}).
-The former treat the whole dataset as one and can re-arrange all the elements 
(for example sort them), but the former do their processing on each tile 
independently.
+The former treat the whole dataset as one and can re-arrange all the elements 
(for example, sort them), but the latter do their processing on each tile 
independently.
 First, we will review the operations that work on the whole dataset.
 
 @cindex AWK
@@ -21058,7 +21059,7 @@ Now you can use Table with the @option{--range} option 
to only print the rows th
 $ asttable table.fits --range=COLUMN,$r
 @end example
 
-To save the resulting table (that is clean of outliers) in another file (for 
example named @file{cleaned.fits}, it can also have a @file{.txt} suffix), just 
add @option{--output=cleaned.fits} to the command above.
+To save the resulting table (that is clean of outliers) in another file (for 
example, named @file{cleaned.fits}, it can also have a @file{.txt} suffix), 
just add @option{--output=cleaned.fits} to the command above.
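
For instance, continuing the command above (with the same hypothetical 
@code{COLUMN} name and shell variable @code{r}), the full call may look like 
this:

@example
$ asttable table.fits --range=COLUMN,$r --output=cleaned.fits
@end example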
 
 
 @item --sigclip-mean
@@ -21198,13 +21199,13 @@ For a histogram, the sum of all bins will become one 
and for a cumulative freque
 
 @item --maxbinone
 Divide all the histogram values by the maximum bin value so it becomes one and 
the rest are similarly scaled.
-In some situations (for example if you want to plot the histogram and 
cumulative frequency plot in one plot) this can be very useful.
+In some situations (for example, if you want to plot the histogram and 
cumulative frequency plot in one plot) this can be very useful.
 
 @item --onebinstart=FLT
 Make sure that one bin starts with the value given to this option.
 In practice, this will shift the bins used to find the histogram and 
cumulative frequency plot such that one bin's lower interval becomes this value.
 
-For example when a histogram range includes negative and positive values and 
zero has a special significance in your analysis, then zero might fall 
somewhere in one bin.
+For example, when a histogram range includes negative and positive values and 
zero has a special significance in your analysis, then zero might fall 
somewhere in one bin.
 As a result, that bin will have counts of both positive and negative values.
 By setting @option{--onebinstart=0}, you can make sure that one bin will only 
count negative values in the vicinity of zero and the next bin will only count 
positive ones in that vicinity.
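
For instance, a sketch with a hypothetical table column may look like this 
(assuming the histogram is requested with @option{--histogram}):

@example
$ aststatistics table.fits -cCOLUMN --histogram --onebinstart=0
@end example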
 
@@ -21259,7 +21260,7 @@ The second input column is assumed to be the measured 
value (on the vertical axi
 @item
 The third input column is only for fittings with a weight.
 It is assumed to be the ``weight'' of the measurement column.
-The nature of the ``weight'' can be set with the @option{--fitweight} option, 
for example if you have the standard deviation of the error in @mymath{Y}, you 
can use @option{--fitweight=std} (which is the default, so unless the default 
value has been changed, you will not need to set this).
+The nature of the ``weight'' can be set with the @option{--fitweight} option, 
for example, if you have the standard deviation of the error in @mymath{Y}, you 
can use @option{--fitweight=std} (which is the default, so unless the default 
value has been changed, you will not need to set this).
 @end enumerate
 
 If three columns are given to a model without weight, or two columns are given 
to a model that requires weights, Statistics will abort and inform you.
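
For instance, a sketch of a weighted linear fit may look like the command 
below (the column names are hypothetical and the value given to 
@option{--fit} is an assumption for illustration):

@example
$ aststatistics table.fits -cX,Y,YERR --fit=linear-weighted
@end example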
@@ -21274,8 +21275,8 @@ The output of the fitting can be in three modes listed 
below.
 For a complete example, see the tutorial in @ref{Least squares fitting}.
 @table @asis
 @item Human friendly format
-By default (for example the commands above) the output is an elaborate 
description of the model parameters.
-For example @mymath{c_0} and @mymath{c_1} in the linear model 
(@mymath{Y=c_0+c_1X}).
+By default (for example, the commands above) the output is an elaborate 
description of the model parameters.
+For example, @mymath{c_0} and @mymath{c_1} in the linear model 
(@mymath{Y=c_0+c_1X}).
 Their covariance matrix and the reduced @mymath{\chi^2} of the fit are also 
printed on the output.
 @item Raw numbers
 If you don't need the human friendly components of the output (which are 
annoying when you want to parse the outputs in some scenarios), you can use 
the @option{--quiet} option.
@@ -21320,7 +21321,7 @@ The nature of the ``weight'' column (when a weight is 
necessary for the model).
 It can take one of the following values:
 @table @code
 @item std
-Standard deviation of each @mymath{Y} axis measurement: this is the usual 
``error'' associated with a measurement (for example in @ref{MakeCatalog}) and 
is the default value to this option.
+Standard deviation of each @mymath{Y} axis measurement: this is the usual 
``error'' associated with a measurement (for example, in @ref{MakeCatalog}) and 
is the default value to this option.
 @item var
 Variance of each @mymath{Y} axis measurement.
 Assuming a Gaussian distribution with standard deviation @mymath{\sigma}, the 
variance is @mymath{\sigma^2}.
@@ -21434,7 +21435,7 @@ Note that currently, this is a very crude/simple 
implementation, please let us k
 
 All the options described until now were from the first class of operations 
discussed above: those that treat the whole dataset as one.
 However, it often happens that the relative position of the dataset elements 
over the dataset is significant.
-For example you do not want one median value for the whole input image, you 
want to know how the median changes over the image.
+For example, you do not want one median value for the whole input image, you 
want to know how the median changes over the image.
 For such operations, the input has to be tessellated (see @ref{Tessellation}).
 Thus this class of options cannot currently be called along with the options 
above in one run of Statistics.
 
@@ -21443,7 +21444,7 @@ Thus this class of options cannot currently be called 
along with the options abo
 @item -t
 @itemx --ontile
 Do the respective single-valued calculation over one tile of the input 
dataset, not the whole dataset.
-This option must be called with at least one of the single valued options 
discussed above (for example @option{--mean} or @option{--quantile}).
+This option must be called with at least one of the single valued options 
discussed above (for example, @option{--mean} or @option{--quantile}).
 The output will be a file in the same format as the input.
 If the @option{--oneelempertile} option is called, then one element/pixel will 
be used for each tile (see @ref{Processing options}).
 Otherwise, the output will have the same size as the input, but each element 
will have the value corresponding to that tile's value.
@@ -21474,12 +21475,12 @@ Kernel HDU to help in estimating the significance of 
signal in a tile, see
 @item --meanmedqdiff=FLT
 The maximum acceptable distance between the quantiles of the mean and median, 
see @ref{Quantifying signal in a tile}.
 The initial Sky and its standard deviation estimates are measured on tiles 
where the quantiles of their mean and median are less distant than the value 
given to this option.
-For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
+For example, @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --sclipparams=FLT,FLT
 The @mymath{\sigma}-clipping parameters, see @ref{Sigma clipping}.
 This option takes two values which are separated by a comma (@key{,}).
-Each value can either be written as a single number or as a fraction of two 
numbers (for example @code{3,1/10}).
+Each value can either be written as a single number or as a fraction of two 
numbers (for example, @code{3,1/10}).
 The first value to this option is the multiple of @mymath{\sigma} that will be 
clipped (@mymath{\alpha} in that section).
 The second value is the exit criterion.
 If it is less than 1, it is interpreted as a tolerance, and if it is larger 
than one, it is a specific number of iterations.
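
For instance, in the hedged sketches below, the first clips at 
@mymath{3\sigma} until the change is less than a tolerance of 0.1, while the 
second does exactly five clipping iterations:

@example
$ aststatistics image.fits --sky --sclipparams=3,0.1
$ aststatistics image.fits --sky --sclipparams=3,5
@end example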
@@ -21507,11 +21508,11 @@ It is thus good to smooth the interpolated image so 
strong discontinuities do no
 The smoothing is done through convolution (see @ref{Convolution process}) with 
a flat kernel, so the value to this option must be an odd number.
 
 @item --ignoreblankintiles
-Do Not set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
+Do not set the input's blank pixels to blank in the tiled outputs (for 
example, Sky and Sky standard deviation extensions of the output).
 This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} is not called.
 
 By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
-But in other scenarios this default behavior is not desired: for example if 
you have masked something in the input, but want the tiled output under that 
also.
+But in other scenarios this default behavior is not desired: for example, if 
you have masked something in the input, but want the tiled output under that 
also.
 
 @item --checksky
 Create a multi-extension FITS file showing the steps that were used to 
estimate the Sky value over the input, see @ref{Quantifying signal in a tile}.
@@ -21527,11 +21528,11 @@ The file will have two extensions for each step (one 
for the Sky and one for the
 @cindex Segmentation
 Once instrumental signatures are removed from the raw data (image) in the 
initial reduction process (see @ref{Data manipulation}), you are naturally 
eager to start answering the scientific questions that 
motivated the data collection in the first place.
-However, the raw dataset/image is just an array of values/pixels, that is all! 
These raw values cannot directly be used to answer your scientific questions: 
for example ``how many galaxies are there in the image?'', ``What is their 
brightness?'' and etc.
+However, the raw dataset/image is just an array of values/pixels, that is all! 
These raw values cannot directly be used to answer your scientific questions: 
for example, ``how many galaxies are there in the image?'', ``what is their 
brightness?'', etc.
 
 The first high-level step of your analysis will therefore be to classify, or 
label, the dataset elements (pixels) into two classes:
 1) Noise, where random effects are the major contributor to the value, and
-2) Signal, where non-random factors (for example light from a distant galaxy) 
are present.
+2) Signal, where non-random factors (for example, light from a distant galaxy) 
are present.
 This classification of the elements in a dataset is formally known as 
@emph{detection}.
 
 In an observational/experimental dataset, signal is always buried in noise: 
only mock/simulated datasets are free of noise.
@@ -21542,7 +21543,7 @@ Therefore when noise is significant, proper detection 
of your targets is a uniqu
 
 @cindex Erosion
 NoiseChisel is Gnuastro's program for detection of targets that do not have a 
sharp border (almost all astronomical objects).
-When the targets have sharp edges/borders (for example cells in biological 
imaging), a simple threshold is enough to separate them from noise and each 
other (if they are not touching).
+When the targets have sharp edges/borders (for example, cells in biological 
imaging), a simple threshold is enough to separate them from noise and each 
other (if they are not touching).
 To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program 
in a command like below (assuming the threshold is @code{100}, see 
@ref{Arithmetic}):
 
 @example
@@ -21566,7 +21567,7 @@ NoiseChisel's primary output is a binary detection map 
with the same size as the
 Pixels that do not harbor any detected signal (noise) are given a label (or 
value) of zero and those with a value of 1 have been identified as hosting 
signal.
 
 Segmentation is the process of classifying the signal into higher-level 
constructs.
-For example if you have two separate galaxies in one image, NoiseChisel will 
give a value of 1 to the pixels of both (each forming an ``island'' of touching 
foreground pixels).
+For example, if you have two separate galaxies in one image, NoiseChisel will 
give a value of 1 to the pixels of both (each forming an ``island'' of touching 
foreground pixels).
 After segmentation, the connected foreground pixels will get separate labels, 
enabling you to study them individually.
 NoiseChisel is only focused on detection (separating signal from noise); to 
@emph{segment} the signal (into separate galaxies for example), Gnuastro has a 
separate specialized program @ref{Segment}.
 NoiseChisel's output can be directly/readily fed into Segment.
@@ -21621,7 +21622,7 @@ For a more detailed list of changes in each Gnuastro 
version, please see the @fi
 @item
 An improved outlier rejection for identifying tiles without any signal has 
been implemented in the quantile-threshold phase:
 Prior to version 0.14, outliers were defined globally: the distribution of all 
tiles with an acceptable @option{--meanmedqdiff} was inspected and outliers 
were found and rejected.
-However, this caused problems when there are strong gradients over the image 
(for example an image prior to flat-fielding, or in the presence of a large 
foreground galaxy).
+However, this caused problems when there are strong gradients over the image 
(for example, an image prior to flat-fielding, or in the presence of a large 
foreground galaxy).
 In these cases, the faint wings of galaxies/stars could be mistakenly 
identified as Sky (leaving a footprint of the object on the Sky output) and 
wrongly subtracted.
 
 It was possible to play with the parameters to correct this for that 
particular dataset, but that was frustrating.
@@ -21662,7 +21663,7 @@ $ astnoisechisel --numchannels=4,1 --tilesize=100,100 
input.fits
 
 @cindex Gaussian
 @noindent
-If NoiseChisel is to do processing (for example you do not want to get help, 
or see the values to each input parameter), an input image should be provided 
with the recognized extensions (see @ref{Arguments}).
+If NoiseChisel is to do processing (for example, you do not want to get help, 
or see the values to each input parameter), an input image should be provided 
with the recognized extensions (see @ref{Arguments}).
 NoiseChisel shares a large set of common operations with other Gnuastro 
programs, mainly regarding input/output, general processing steps, and general 
operating modes.
 To help in a unified experience between all of Gnuastro's programs, these 
operations have the same command-line options, see @ref{Common options} for a 
full list/description (they are not repeated here).
 
@@ -21719,7 +21720,7 @@ $ astnoisechisel cube.fits                              
        \
 @cindex Alias (shell)
 @cindex Shell startup
 @cindex Startup, shell
-To further simplify the process, you can define a shell alias in any startup 
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
+To further simplify the process, you can define a shell alias in any startup 
file (for example, @file{~/.bashrc}, see @ref{Installation directory}).
 Assuming that you installed Gnuastro in @file{/usr/local}, you can add this 
line to the startup file (you may put it all in one line, it is broken into two 
lines here for fitting within page limits).
 
 @example
@@ -21779,7 +21780,7 @@ If not called, for a 2D image, a 2D Gaussian profile 
with a FWHM of 2 pixels tru
 This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi 
and Ichikawa [2015].
 
 For a 3D cube, when no file name is given to @option{--kernel}, a Gaussian 
with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the 
third dimension will be used.
-The reason for this particular configuration is that commonly in astronomical 
applications, 3D datasets do not have the same nature in all three dimensions, 
commonly the first two dimensions are spatial (RA and Dec) while the third is 
spectral (for example wavelength).
+The reason for this particular configuration is that commonly in astronomical 
applications, 3D datasets do not have the same nature in all three dimensions, 
commonly the first two dimensions are spatial (RA and Dec) while the third is 
spectral (for example, wavelength).
 The samplings are also different: in the default case, the spatial sampling is 
assumed to be larger than the spectral sampling, hence a wider FWHM in the 
spatial directions, see @ref{Sampling theorem}.
 
 You can use MakeProfiles to build a kernel with any of its recognized profile 
types and parameters.
@@ -21801,7 +21802,7 @@ option.
 Use this file as the convolved image and do not do convolution (ignore 
@option{--kernel}).
 NoiseChisel will just check that the size of the given dataset is the same as the 
input's size.
 If a wrong image (with the same size) is given to this option, the results 
(errors, bugs, etc) are unpredictable.
-So please use this option with care and in a highly controlled environment, 
for example in the scenario discussed below.
+So please use this option with care and in a highly controlled environment, 
for example, in the scenario discussed below.
 
 In almost all situations, as the input gets larger, the single most CPU (and 
time) consuming step in NoiseChisel (and other programs that need a convolved 
image) is convolution.
 Therefore minimizing the number of convolutions can save a significant amount 
of time in some scenarios.
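
For instance, assuming the option being described here is 
@option{--convolved}, a minimal sketch of this scenario may look like the 
commands below (file names are hypothetical, and the kernel given to Convolve 
must be the same one NoiseChisel would use):

@example
$ astconvolve image.fits --kernel=kernel.fits --output=conv.fits
$ astnoisechisel image.fits --convolved=conv.fits
@end example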
@@ -21877,7 +21878,7 @@ Recall that you can always see the full list of 
NoiseChisel's options with the @
 @itemx --meanmedqdiff=FLT
 The maximum acceptable distance between the quantiles of the mean and median 
in each tile, see @ref{Quantifying signal in a tile}.
 The quantile threshold estimates are measured on tiles where the quantiles of 
their mean and median are less distant than the value given to this option.
-For example @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
+For example, @option{--meanmedqdiff=0.01} means that only tiles where the 
mean's quantile is between 0.49 and 0.51 (recall that the median's quantile is 
0.5) will be used.
 
 @item --outliersclip=FLT,FLT
 @mymath{\sigma}-clipping parameters for the outlier rejection of the quantile 
threshold.
@@ -22073,7 +22074,7 @@ By default the output will have the same pixel size as 
the input, but with the @
 The @mymath{\sigma}-clipping parameters for measuring the initial and final 
Sky values from the undetected pixels, see @ref{Sigma clipping}.
 
 This option takes two values which are separated by a comma (@key{,}).
-Each value can either be written as a single number or as a fraction of two 
numbers (for example @code{3,1/10}).
+Each value can either be written as a single number or as a fraction of two 
numbers (for example, @code{3,1/10}).
 The first value to this option is the multiple of @mymath{\sigma} that will be 
clipped (@mymath{\alpha} in that section).
 The second value is the exit criterion.
 If it is less than 1, it is interpreted as a tolerance, and if it is larger 
than one, it is assumed to be the fixed number of iterations.
@@ -22097,7 +22098,7 @@ The stronger the connectivity, the more small regions 
will be discarded.
 
 @item --holengb=INT
 The connectivity (defined by the number of neighbors) to fill holes after 
applying @option{--dthresh} (above) to find pseudo-detections.
-For example in a 2D image it must be 4 (the neighbors that are most strongly 
connected) or 8 (all neighbors).
+For example, in a 2D image it must be 4 (the neighbors that are most strongly 
connected) or 8 (all neighbors).
 The stronger the connectivity, the more strongly the hole will be enclosed.
 So setting a value of 8 in a 2D image means that the walls of the hole are 
4-connected.
 If standard (near Sky level) values are given to @option{--dthresh}, setting 
@option{--holengb=4} might fill the complete dataset and thus not create 
enough pseudo-detections.
@@ -22120,13 +22121,13 @@ If it is plain text, a separate file will be made for 
each table (ending in @fil
 For more on @option{--tableformat} see @ref{Input output options}.
 
 You can use these to inspect the S/N values and their distribution (in 
combination with the @option{--checkdetection} option to see where the 
pseudo-detections are).
-You can use Gnuastro's @ref{Statistics} to make a histogram of the 
distribution or any other analysis you would like for better understanding of 
the distribution (for example through a histogram).
+You can use Gnuastro's @ref{Statistics} to make a histogram of the 
distribution, or do any other analysis you would like, for a better 
understanding of the distribution.
 
 @item --minnumfalse=INT
 The minimum number of `pseudo-detections' over the undetected regions to 
identify a Signal-to-Noise ratio threshold.
 The Signal to noise ratio (S/N) of false pseudo-detections in each tile is 
found using the quantile of the S/N distribution of the pseudo-detections over 
the undetected pixels in each mesh.
 If the number of S/N measurements is not large enough, the quantile will not 
be accurate (can have large scatter).
-For example if you set @option{--snquant=0.99} (or the top 1 percent), then it 
is best to have at least 100 S/N measurements.
+For example, if you set @option{--snquant=0.99} (or the top 1 percent), then 
it is best to have at least 100 S/N measurements.
 
 @item -c FLT
 @itemx --snquant=FLT
@@ -22163,7 +22164,7 @@ are 27 neighbors.
 The maximum hole size to fill during the final expansion of the true 
detections as described in @option{--detgrowquant}.
 This is necessary when the input contains many smaller objects and can be used 
to avoid marking blank sky regions as detections.
 
-For example multiple galaxies can be positioned such that they surround an 
empty region of sky.
+For example, multiple galaxies can be positioned such that they surround an 
empty region of sky.
 If all the holes are filled, the Sky region in between them will be taken as a 
detection, which is not desired.
 To avoid such cases, the integer given to this option must be smaller than the 
hole between such objects.
 However, we should caution that unless the ``hole'' is very large, the 
combined faint wings of the galaxies might actually be present in between them, 
so be very careful in not filling such holes.
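
For instance, assuming this description belongs to the 
@option{--detgrowmaxholesize} option (and with purely illustrative values), a 
sketch may look like this:

@example
$ astnoisechisel image.fits --detgrowquant=0.7 \
                 --detgrowmaxholesize=100
@end example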
@@ -22245,11 +22246,11 @@ To encourage easier experimentation with the option 
values, when you use any of
 With the @option{--continueaftercheck} option, you can disable this behavior and 
ask NoiseChisel to continue with the rest of the processing, even after the 
requested check files are complete.
 
 @item --ignoreblankintiles
-Do Not set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
+Do not set the input's blank pixels to blank in the tiled outputs (for 
example, Sky and Sky standard deviation extensions of the output).
 This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} is not called.
 
 By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
-But in other scenarios this default behavior is not desired: for example if 
you have masked something in the input, but want the tiled output under that 
also.
+But in other scenarios this default behavior is not desired: for example, if 
you have masked something in the input, but want the tiled output under that 
also.
 
 @item -l
 @itemx --label
@@ -22267,7 +22268,7 @@ Since the default output is just a binary dataset, an 
8-bit unsigned dataset is
 
 The binary output will also encourage users to segment the result separately 
prior to doing higher-level analysis.
 As an alternative to @option{--label}, if you have the binary detection image, 
you can use the @code{connected-components} operator in Gnuastro's Arithmetic 
program to identify regions that are connected with each other.
-For example with this command (assuming NoiseChisel's output is called 
@file{nc.fits}):
+For example, with this command (assuming NoiseChisel's output is called 
@file{nc.fits}):
 
 @example
 $ astarithmetic nc.fits 2 connected-components -hDETECTIONS
@@ -22322,8 +22323,8 @@ See @ref{NoiseChisel optimization for storage} for an 
example on a real dataset.
 @node Segment, MakeCatalog, NoiseChisel, Data analysis
 @section Segment
 
-Once signal is separated from noise (for example with @ref{NoiseChisel}), you 
have a binary dataset: each pixel is either signal (1) or noise (0).
-Signal (for example every galaxy in your image) has been ``detected'', but all 
detections have a label of 1.
+Once signal is separated from noise (for example, with @ref{NoiseChisel}), you 
have a binary dataset: each pixel is either signal (1) or noise (0).
+Signal (for example, every galaxy in your image) has been ``detected'', but 
all detections have a label of 1.
 Therefore while we know which pixels contain signal, we still cannot find out 
how many galaxies they contain or which detected pixels correspond to which 
galaxy.
 At the lowest (most generic) level, detection is a kind of segmentation 
(segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
 Here, we will define segmentation only on signal: to separate sub-structure 
within the detections.
@@ -22346,7 +22347,7 @@ like below:
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-However, in most astronomical situations our targets are not nicely separated 
or have a sharp boundary/edge (for a threshold to suffice): they touch (for 
example merging galaxies), or are simply in the same line-of-sight (which is 
much more common).
+However, in most astronomical situations our targets are not nicely separated 
or have a sharp boundary/edge (for a threshold to suffice): they touch (for 
example, merging galaxies), or are simply in the same line-of-sight (which is 
much more common).
 This causes their images to overlap.
 
 In particular, when you do your detection with NoiseChisel, you will detect 
signal to very low surface brightness limits: deep into the faint wings of 
galaxies or bright stars (which can extend very far and irregularly from their 
center).
@@ -22375,7 +22376,7 @@ To start learning about Segment, especially in relation 
to detection (@ref{Noise
 If you have used Segment within your research, please run it with 
@option{--cite} to list the papers you should cite and how to acknowledge its 
funding sources.
 
 Those papers cannot be updated any more but the software will evolve.
-For example Segment became a separate program (from NoiseChisel) in 2018 
(after those papers were published).
+For example, Segment became a separate program (from NoiseChisel) in 2018 
(after those papers were published).
 Therefore this book is the definitive reference.
 @c To help in the transition from those papers to the software you are using, 
see @ref{Segment changes after publication}.
 Finally, in @ref{Invoking astsegment}, we will discuss Segment's inputs, 
outputs and configuration options.
@@ -22401,7 +22402,7 @@ Finally, in @ref{Invoking astsegment}, we will discuss 
Segment's inputs, outputs
 @subsection Invoking Segment
 
 Segment will identify substructure within the detected regions of an input 
image.
-Segment's output labels can be directly used for measurements (for example 
with @ref{MakeCatalog}).
+Segment's output labels can be directly used for measurements (for example, 
with @ref{MakeCatalog}).
 The executable name is @file{astsegment} with the following general template
 
 @example
@@ -22424,14 +22425,14 @@ $ astsegment input.fits --snquant=0.9 
--checksegmentation
 
 ## Use the fixed value of 0.01 for the input's Sky standard deviation
 ## (in the units of the input), and assume all the pixels are a
-## detection (for example a large structure extending over the whole
+## detection (for example, a large structure extending over the whole
 ## image), and only keep clumps with S/N>10 as true clumps.
 $ astsegment in.fits --std=0.01 --detection=all --clumpsnthresh=10
 @end example
 
 @cindex Gaussian
 @noindent
-If Segment is to do processing (for example you do not want to get help, or 
see the values of each option), at least one input dataset is necessary along 
with detection and error information, either as separate datasets (per-pixel) 
or fixed values, see @ref{Segment input}.
+If Segment is to do processing (for example, you do not want to get help, or 
see the values of each option), at least one input dataset is necessary along 
with detection and error information, either as separate datasets (per-pixel) 
or fixed values, see @ref{Segment input}.
 Segment shares a large set of common operations with other Gnuastro programs, 
mainly regarding input/output, general processing steps, and general operating 
modes.
 To help in a unified experience between all of Gnuastro's programs, these 
common operations have the same names and are defined in @ref{Common options}.
 
@@ -22453,7 +22454,7 @@ Finally, in @ref{Segment output}, we will discuss 
options that affect Segment's
 @node Segment input, Segmentation options, Invoking astsegment, Invoking 
astsegment
 @subsubsection Segment input
 
-Besides the input dataset (for example astronomical image), Segment also needs 
to know the Sky standard deviation and the regions of the dataset that it 
should segment.
+Besides the input dataset (for example, astronomical image), Segment also 
needs to know the Sky standard deviation and the regions of the dataset that it 
should segment.
 The values dataset is assumed to be Sky subtracted by default.
 If it is not, you can ask Segment to subtract the Sky internally by calling 
@option{--sky}.
 For the rest of this discussion, we will assume it is already sky subtracted.
@@ -22461,12 +22462,12 @@ For the rest of this discussion, we will assume it is 
already sky subtracted.
 The Sky and its standard deviation can be a single value (to be used for the 
whole dataset) or a separate dataset (for a separate value per pixel).
 If a dataset is used for the Sky and its standard deviation, they must either 
be the size of the input image, or have a single value per tile (generated with 
@option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).
 
-The detected regions/pixels can be specified as a detection map (for example 
see @ref{NoiseChisel output}).
+The detected regions/pixels can be specified as a detection map (for example, 
see @ref{NoiseChisel output}).
 If @option{--detection=all}, Segment will not read any detection map and 
assume the whole input is a single detection.
-For example when the dataset is fully covered by a large nearby 
galaxy/globular cluster.
+For example, when the dataset is fully covered by a large nearby 
galaxy/globular cluster.
 
 When datasets are to be used for any of the inputs, Segment will assume they 
are multiple extensions of a single file by default (when @option{--std} or 
@option{--detection} are not called).
-For example NoiseChisel's default output @ref{NoiseChisel output}.
+For example, NoiseChisel's default output (see @ref{NoiseChisel output}).
 When the Sky-subtracted values are in one file, and the detection and Sky 
standard deviation are in another, you just need to use @option{--detection}: 
in the absence of @option{--std}, Segment will look for both the detection 
labels and Sky standard deviation in the file given to @option{--detection}.
 Ultimately, if all three are in separate files, you need to call both 
@option{--detection} and @option{--std}.
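
As a rough sketch (the file names here are hypothetical), the three scenarios above translate into calls like these:

@example
## Values, detections and Sky STD in one file (such as
## NoiseChisel's default output):
$ astsegment nc-output.fits

## Values in one file, detections and Sky STD in a second file:
$ astsegment values.fits --detection=detections.fits

## All three in separate files:
$ astsegment values.fits --detection=det.fits --std=std.fits
@end example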
 
@@ -22541,11 +22542,11 @@ If your detection map only has integer values, but it 
is stored in a floating po
 $ astarithmetic float.fits int32 --output=int.fits
 @end example
 
-It may happen that the whole input dataset is covered by signal, for example 
when working on parts of the Andromeda galaxy, or nearby globular clusters 
(that cover the whole field of view).
+It may happen that the whole input dataset is covered by signal, for example, 
when working on parts of the Andromeda galaxy, or nearby globular clusters 
(that cover the whole field of view).
 In such cases, segmentation is necessary over the complete dataset, not just 
specific regions (detections).
 By default, Segment will first use the undetected regions as a reference to find the proper signal-to-noise ratio of ``true'' clumps (given a purity level specified with @option{--snquant}).
 Therefore, in such scenarios you also need to manually give a ``true'' clump 
signal-to-noise ratio with the @option{--clumpsnthresh} option to disable 
looking into the undetected regions, see @ref{Segmentation options}.
-In such cases, is possible to make a detection map that only has the value 
@code{1} for all pixels (for example using @ref{Arithmetic}), but for 
convenience, you can also use @option{--detection=all}.
+In such cases, it is possible to make a detection map that only has the value @code{1} for all pixels (for example, using @ref{Arithmetic}), but for convenience, you can also use @option{--detection=all}.
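
As a minimal sketch of the Arithmetic-based alternative (assuming the input image is a hypothetical @file{in.fits}, and using Arithmetic's @code{isblank} and @code{not} operators), such a map can be built with the command below; it is 1 on every usable (non-blank) pixel:

@example
$ astarithmetic in.fits isblank not uint8 --output=all-det.fits
@end example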
 
 @item --dhdu
 The HDU/extension containing the detection map given to @option{--detection}.
@@ -22616,7 +22617,7 @@ Please see the descriptions there for more.
 @item --minima
 Build the clumps based on the local minima, not maxima.
 By default, clumps are built starting from local maxima (see Figure 8 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}).
-Therefore, this option can be useful when you are searching for true local 
minima (for example absorption features).
+Therefore, this option can be useful when you are searching for true local 
minima (for example, absorption features).
 
 @item -m INT
 @itemx --snminarea=INT
@@ -22725,13 +22726,13 @@ $ astfits image_segmented.fits -h0 | grep -i snquant
 @cindex DS9
 @cindex SAO DS9
 By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's 
output will also contain the (technically redundant) input dataset and the sky 
standard deviation dataset (if it was not a constant number).
-This can help in visually inspecting the result when viewing the images as a 
``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing FITS 
file contents with DS9 or TOPCAT}).
+This can help in visually inspecting the result when viewing the images as a ``Multi-extension data cube'' in SAO DS9, for example (see @ref{Viewing FITS file contents with DS9 or TOPCAT}).
 You can simply flip through the extensions and see the same region of the 
image and its corresponding clumps/object labels.
 It also makes it easy to feed the output (as one file) into MakeCatalog when you intend to make a catalog afterwards (see @ref{MakeCatalog}).
-To remove these redundant extensions from the output (for example when 
designing a pipeline), you can use @option{--rawoutput}.
+To remove these redundant extensions from the output (for example, when 
designing a pipeline), you can use @option{--rawoutput}.
 
 The @code{OBJECTS} and @code{CLUMPS} extensions can be used as input into 
@ref{MakeCatalog} to generate a catalog for higher-level analysis.
-If you want to treat each clump separately, you can give a very large value 
(or even a NaN, which will always fail) to the @option{--gthresh} option (for 
example @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation 
options}.
+If you want to treat each clump separately, you can give a very large value 
(or even a NaN, which will always fail) to the @option{--gthresh} option (for 
example, @code{--gthresh=1e10} or @code{--gthresh=nan}), see @ref{Segmentation 
options}.
 
 For a complete definition of clumps and objects, please see Section 3.2 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and 
@ref{Segmentation options}.
 The clumps are ``true'' local maxima (minima if @option{--minima} is called) 
and their surrounding pixels until a local minimum/maximum (caused by noise 
fluctuations, or another ``true'' clump).
@@ -22808,11 +22809,11 @@ This process has been described for the GNOME GUI 
(most common GUI in GNU/Linux
 @node MakeCatalog, Match, Segment, Data analysis
 @section MakeCatalog
 
-At the lowest level, a dataset (for example an image) is just a collection of 
values, placed after each other in any number of dimensions (for example an 
image is a 2D dataset).
+At the lowest level, a dataset (for example, an image) is just a collection of 
values, placed after each other in any number of dimensions (for example, an 
image is a 2D dataset).
 Each data-element (pixel) just has two properties: its position (relative to 
the rest) and its value.
 In higher-level analysis, an entire dataset (an image for example) is rarely 
treated as a singular entity@footnote{You can derive the over-all properties of 
a complete dataset (1D table column, 2D image, or 3D data-cube) treated as a 
single entity with Gnuastro's Statistics program (see @ref{Statistics}).}.
 You usually want to know/measure the properties of the (separate) 
scientifically interesting targets that are embedded in it.
-For example the magnitudes, positions and elliptical properties of the 
galaxies that are in the image.
+For example, the magnitudes, positions and elliptical properties of the galaxies that are in the image.
 
 MakeCatalog is Gnuastro's program for localized measurements over a dataset.
 In other words, MakeCatalog is Gnuastro's program to convert low-level 
datasets (like images), to high level catalogs.
@@ -22841,11 +22842,11 @@ Following Gnuastro's modularity principle, there are 
separate and highly special
 
 These programs will/can return labeled dataset(s) to be fed into MakeCatalog.
 A labeled dataset for measurement has the same size/dimensions as the input, 
but with integer valued pixels that have the label/counter for each sub-set of 
pixels that must be measured together.
-For example all the pixels covering one galaxy in an image, get the same label.
+For example, all the pixels covering one galaxy in an image get the same label.
 
 The requested measurements are then done on similarly labeled pixels.
 The final result is a catalog where each row corresponds to the measurements 
on pixels with a specific label.
-For example the flux weighted average position of all the pixels with a label 
of 42 will be written into the 42nd row of the output catalog/table's central 
position column@footnote{See @ref{Measuring elliptical parameters} for a 
discussion on this and the derivation of positional parameters, which includes 
the center.}.
+For example, the flux weighted average position of all the pixels with a label of 42 will be written into the 42nd row of the output catalog/table's central position column@footnote{See @ref{Measuring elliptical parameters} for a discussion on this and the derivation of positional parameters, which includes the center.}.
 Similarly, the sum of all these pixels will be the 42nd row in the brightness 
column, etc.
 Pixels with labels equal to, or smaller than, zero will be ignored by 
MakeCatalog.
 In other words, the number of rows in MakeCatalog's output is already known 
before running it (the maximum value of the labeled dataset).
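
For example, assuming the labels are in a hypothetical @file{labels.fits} (and using the @code{maxvalue} operator of @ref{Arithmetic}), the number of rows of the final catalog can be checked before running MakeCatalog:

@example
$ astarithmetic labels.fits maxvalue --quiet
@end example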
@@ -22868,7 +22869,7 @@ For those who feel MakeCatalog's existing 
measurements/columns are not enough an
 @node Detection and catalog production, Brightness flux magnitude, 
MakeCatalog, MakeCatalog
 @subsection Detection and catalog production
 
-Most existing common tools in low-level astronomical data-analysis (for 
example 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
+Most existing common tools in low-level astronomical data-analysis (for 
example, 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
 However, in light of Gnuastro's modularized approach (modeled on the Unix 
system) detection is separated from measurements and catalog production.
 This modularity is therefore new to many experienced astronomers and deserves 
a short review here.
 Further discussion on the benefits of this methodology can be seen in 
@url{https://arxiv.org/abs/1611.06387v1, Akhlaghi [2016]}.
@@ -22933,9 +22934,9 @@ We will continue the discussion assuming the pixels are 
in units of energy/time.
 @cindex Brightness
 @item Brightness
 The @emph{brightness} of an object is defined as its total detected energy per 
time.
-In the case of an imaged source, this is simply the sum of the pixels that are 
associated with that detection by our detection tool (for example 
@ref{NoiseChisel}@footnote{If further processing is done, for example the Kron 
or Petrosian radii are calculated, then the detected area is not sufficient and 
the total area that was within the respective radius must be used.}).
+In the case of an imaged source, this is simply the sum of the pixels that are 
associated with that detection by our detection tool (for example, 
@ref{NoiseChisel}@footnote{If further processing is done, for example, the Kron 
or Petrosian radii are calculated, then the detected area is not sufficient and 
the total area that was within the respective radius must be used.}).
 The @emph{flux} of an object is defined in units of 
energy/time/collecting-area.
-For an astronomical target, the flux is therefore defined as its brightness 
divided by the area used to collect the light from the source: or the telescope 
aperture (for example in units of @mymath{cm^2}).
+For an astronomical target, the flux is therefore defined as its brightness divided by the area used to collect the light from the source, i.e., the telescope aperture (for example, in units of @mymath{cm^2}).
 Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can 
define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
 
 Therefore, while flux and luminosity are intrinsic properties of the object, 
brightness depends on our detecting tools (hardware and software).
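
As a small sketch of the luminosity relation above (with purely hypothetical numbers: a flux of @mymath{10^{-13}} erg/s/cm@mymath{^2} and a distance of 1 Mpc, or roughly @mymath{3.086\times10^{24}} cm), @mymath{L=4{\pi}r^2f} can be evaluated with AWK:

@example
$ echo 1e-13 3.086e24 \
       | awk '@{pi=atan2(0,-1); print 4 * pi * $2^2 * $1@}'
@end example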
@@ -22994,7 +22995,7 @@ Because the image zero point corresponds to a pixel 
value of @mymath{1}, the @my
 Therefore you simply have to multiply your image by @mymath{B_z} to convert it 
to @mymath{\mu{Jy}}.
 Do not forget that this only applies when your zero point was also estimated in the AB magnitude system.
 On the command-line, you can estimate this value for a certain zero point with 
AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
-For example let's assume you are using an SDSS image with a zero point of 22.5:
+For example, let's assume you are using an SDSS image with a zero point of 22.5:
 
 @example
 bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
@@ -23038,7 +23039,7 @@ Let's derive the equation below:
 @dispmath{m = -2.5\log_{10}(C/t) + Z = -2.5\log_{10}(C) + 2.5\log_{10}(t) + Z}
 Therefore, defining an exposure-time-dependent zero point as @mymath{Z(t)}, we 
can directly correlate a certain object's magnitude with counts after an 
exposure of @mymath{t} seconds:
 @dispmath{m = -2.5\log_{10}(C) + Z(t) \quad\rm{where}\quad Z(t)=Z + 
2.5\log_{10}(t)}
-This solution is useful in programs like @ref{MakeCatalog} or 
@ref{MakeProfiles}, when you cannot (or do not want to: because of the extra 
storage/speed costs) manipulate the values image (for example divide it by the 
exposure time to use a counts/sec zero point).
+This solution is useful in programs like @ref{MakeCatalog} or 
@ref{MakeProfiles}, when you cannot (or do not want to: because of the extra 
storage/speed costs) manipulate the values image (for example, divide it by the 
exposure time to use a counts/sec zero point).
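
For instance (a sketch with hypothetical numbers), for a counts/sec zero point of @mymath{Z=25.0} and an exposure of @mymath{t=120} seconds, @mymath{Z(t)} can be computed with AWK (recalling that AWK's @code{log} is the natural logarithm):

@example
$ echo 25.0 120 | awk '@{print $1 + 2.5*log($2)/log(10)@}'
@end example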
 
 @item Surface brightness
 @cindex Steradian
@@ -23052,7 +23053,7 @@ The solid angle is expressed in units of 
arcsec@mymath{^2} because astronomical
 Recall that the steradian is the dimension-less SI unit of a solid angle and 1 
steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial 
sphere.
 
 Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
-For example when the brightness is measured over an area of A 
arcsec@mymath{^2}, then the surface brightness becomes:
+For example, when the brightness is measured over an area of A arcsec@mymath{^2}, then the surface brightness becomes:
 
 @dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
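
As a small sketch with hypothetical values (a brightness of 0.01 in image units, an area of 25 arcsec@mymath{^2} and a zero point of 22.5), the equation above can be evaluated with AWK:

@example
$ echo 0.01 25 22.5 | awk '@{print -2.5*log($1/$2)/log(10) + $3@}'
@end example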
 
@@ -23084,7 +23085,7 @@ And see @ref{FITS images in a publication} for a fully 
working tutorial on how t
 @noindent
 @strong{Do not warp or convolve magnitude or surface brightness images:} Warping an image involves calculating new pixel values (of the new pixel grid) from the old pixel values.
 Convolution is also a process of finding the weighted mean of pixel values.
-During these processes, many arithmetic operations are done on the original 
pixel values, for example addition or multiplication.
+During these processes, many arithmetic operations are done on the original 
pixel values, for example, addition or multiplication.
 However, @mymath{\log_{10}(a+b)\ne \log_{10}(a)+\log_{10}(b)}.
 Therefore, after calculating a magnitude or surface brightness image, do not apply any such operations on it!
 If you need to warp or convolve the image, do it @emph{before} the conversion.
@@ -23213,7 +23214,7 @@ The standard deviation (@mymath{\sigma}) of that 
distribution can be used to qua
 @dispmath{M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad 
[mag/target]}
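
For instance (a sketch with hypothetical numbers), for @mymath{n=5}, @mymath{\sigma_m=0.02} (in image units) and a zero point of 22.5, the upper limit magnitude can be evaluated with AWK:

@example
$ echo 5 0.02 22.5 | awk '@{print -2.5*log($1*$2)/log(10) + $3@}'
@end example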
 
 @cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example with a diameter of @mymath{N} arc-seconds) and there was 
not much processing involved (to make a deep stack).
+Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example, with a diameter of @mymath{N} arc-seconds) and there 
was not much processing involved (to make a deep stack).
 Hence, the upper limit was synonymous with the surface brightness limit 
discussed above: one value for the whole image.
 The problem with this simplified approach is that the number of pixels in the 
aperture directly affects the final distribution and thus magnitude.
 Also the image correlated noise might actually create certain patterns, so the 
shape of the object can also affect the final result.
@@ -23345,7 +23346,7 @@ The elliptical parameters are: the (semi-)major axis, 
the (semi-)minor axis and
 The derivations below follow the SExtractor manual derivations with some added 
explanations for easier reading.
 
 @cindex Moments
-Let's begin with one dimension for simplicity: Assume we have a set of 
@mymath{N} values @mymath{B_i} (for example showing the spatial distribution of 
a target's brightness), each at position @mymath{x_i}.
+Let's begin with one dimension for simplicity: Assume we have a set of 
@mymath{N} values @mymath{B_i} (for example, showing the spatial distribution 
of a target's brightness), each at position @mymath{x_i}.
 The simplest parameter we can define is the geometric center of the object 
(@mymath{x_g}) (ignoring the brightness values): @mymath{x_g=(\sum_ix_i)/N}.
 @emph{Moments} are defined to incorporate both the value (brightness) and 
position of the data.
 The first moment can be written as:
@@ -23582,7 +23583,7 @@ Finally, in @ref{MakeCatalog output} the output file(s) 
created by MakeCatalog a
 MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
 This dataset maps/labels pixels to a specific target (row number in the final 
catalog) and is thus the only necessary input dataset to produce a minimal 
catalog in any situation.
 Because it only has labels/counters, it must have an integer type (see 
@ref{Numeric data types}), see below if your labels are in a floating point 
container.
-When the requested measurements only need this dataset (for example 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog will not 
read any more datasets.
+When the requested measurements only need this dataset (for example, 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog will not 
read any more datasets.
 
 Low-level measurements that only use the labeled image are rarely sufficient 
for any high-level science case.
 Therefore necessary input datasets depend on the requested columns in each run.
@@ -23606,7 +23607,7 @@ Since clumps are defined within detected regions (they 
exist over signal, not no
 There are separate options to explicitly request a file name and HDU/extension 
for each of the required input datasets as fully described below (with the 
@option{--*file} format).
 When each dataset is in a separate file, these options are necessary.
 However, one great advantage of the FITS file format (that is heavily used in 
astronomy) is that it allows the storage of multiple datasets in one file.
-So in most situations (for example if you are using the outputs of 
@ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in 
one file.
+So in most situations (for example, if you are using the outputs of 
@ref{NoiseChisel} or @ref{Segment}), all the necessary input datasets can be in 
one file.
 
 When none of the @option{--*file} options are given, MakeCatalog will assume 
the necessary input datasets are in the file given as its argument (without any 
option).
 When the Sky or Sky standard deviation datasets are necessary and the only 
@option{--*file} option called is @option{--valuesfile}, MakeCatalog will 
search for these datasets (with the default/given HDUs) in the file given to 
@option{--valuesfile} (before looking into the main argument file).
 This can slightly slow down the run.
 
 Note that @code{NUMLABS} is automatically written by Segment in its outputs, 
so if you are feeding Segment's clump labels, you can benefit from the improved 
speed.
 Otherwise, if you are creating the clumps label dataset manually, it may be 
good to include the @code{NUMLABS} keyword in its header and also be sure that 
there is no gap in the clump labels.
-For example if an object has three clumps, they are labeled as 1, 2, 3.
+For example, if an object has three clumps, they are labeled as 1, 2, 3.
 If they are labeled as 1, 3, 4, or any other combination of three positive integers that are not consecutive, you might get unexpected behavior.
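
As a minimal sketch (assuming your manually built clump labels are in a hypothetical @file{clumps.fits}, HDU 1, with 123 labels in total), the keyword can be written with Gnuastro's Fits program:

@example
$ astfits clumps.fits -h1 --write=NUMLABS,123,"Total number of labels"
@end example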
 
 It may happen that your labeled objects image was created with a program that 
only outputs floating point files.
@@ -23650,7 +23651,7 @@ See @ref{Segment output} for a description of the 
expected format.
 @item -v FITS
 @itemx --valuesfile=FITS
 The file name of the (sky-subtracted) values dataset.
-When any of the columns need values to associate with the input labels (for 
example to measure the brightness/magnitude of a galaxy), MakeCatalog will look 
into a ``values'' for the respective pixel values.
+When any of the columns need values to associate with the input labels (for example, to measure the brightness/magnitude of a galaxy), MakeCatalog will look into a ``values'' dataset for the respective pixel values.
 In most common processing, this is the actual astronomical image that the 
labels were defined, or detected, over.
 The HDU/extension of this dataset in the given file can be specified with 
@option{--valueshdu}.
 If this option is not called, MakeCatalog will look for the given extension in 
the main input file.
@@ -23695,7 +23696,7 @@ The dataset given to @option{--instd} (and 
@option{--stdhdu} has the Sky varianc
 
 @item --forcereadstd
 Read the input STD image even if it is not required by any of the requested 
columns.
-This is because some of the output catalog's metadata may need it, for example 
to calculate the dataset's surface brightness limit (see @ref{Quantifying 
measurement limits}, configured with @option{--sfmagarea} and 
@option{--sfmagnsigma} in @ref{MakeCatalog output}).
+This is because some of the output catalog's metadata may need it, for 
example, to calculate the dataset's surface brightness limit (see 
@ref{Quantifying measurement limits}, configured with @option{--sfmagarea} and 
@option{--sfmagnsigma} in @ref{MakeCatalog output}).
 
 Furthermore, if the input STD image does not have the @code{MEDSTD} keyword 
(that is meant to contain the representative standard deviation of the full 
image), with this option, the median will be calculated and used for the 
surface brightness limit.
 
@@ -23704,7 +23705,7 @@ Furthermore, if the input STD image does not have the 
@code{MEDSTD} keyword (tha
 The zero point magnitude for the input image, see @ref{Brightness flux 
magnitude}.
 
 @item --sigmaclip FLT,FLT
-The sigma-clipping parameters when any of the sigma-clipping related columns 
are requested (for example @option{--sigclip-median} or 
@option{--sigclip-number}).
+The sigma-clipping parameters when any of the sigma-clipping related columns 
are requested (for example, @option{--sigclip-median} or 
@option{--sigclip-number}).
 
 This option takes two values: the first is the multiple of @mymath{\sigma}, and the second is the termination criterion.
 If the latter is larger than 1, it is read as an integer number and will be 
the number of times to clip.
@@ -23712,14 +23713,14 @@ If it is smaller than 1, it is interpreted as the 
tolerance level to stop clippi
 See @ref{Sigma clipping} for a complete explanation.
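
For example (a sketch, assuming a Segment output in a hypothetical @file{seg.fits}), 3@mymath{\sigma} clipping that stops when the relative change in the standard deviation is below 0.2 can be requested like this:

@example
$ astmkcatalog seg.fits --sigmaclip=3,0.2 \
               --sigclip-median --sigclip-number
@end example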
 
 @item --fracmax=FLT[,FLT]
-The fractions (one or two) of maximum value in objects or clumps to be used in 
the related columns, for example @option{--fracmaxarea1}, 
@option{--fracmaxsum1} or @option{--fracmaxradius1}, see @ref{MakeCatalog 
measurements}.
+The fractions (one or two) of maximum value in objects or clumps to be used in 
the related columns, for example, @option{--fracmaxarea1}, 
@option{--fracmaxsum1} or @option{--fracmaxradius1}, see @ref{MakeCatalog 
measurements}.
 For the maximum value, see the description of @option{--maximum} column below.
 The value(s) of this option must be larger than 0 and smaller than 1 (they are 
a fraction).
 When only @option{--fracmaxarea1} or @option{--fracmaxsum1} is requested, one 
value must be given to this option, but if @option{--fracmaxarea2} or 
@option{--fracmaxsum2} are also requested, two values must be given to this 
option.
-The values can be written as simple floating point numbers, or as fractions, 
for example @code{0.25,0.75} and @code{0.25,3/4} are the same.
+The values can be written as simple floating point numbers, or as fractions; for example, @code{0.25,0.75} and @code{0.25,3/4} are the same.
 
 @item --spatialresolution=FLT
-The error in measuring spatial properties (for example the area) in units of 
pixels.
+The error in measuring spatial properties (for example, the area) in units of 
pixels.
 You can think of this as the FWHM of the dataset's PSF and is used in 
measurements like the error in surface brightness (@option{--sberror}, see 
@ref{MakeCatalog measurements}).
 Ideally, images are taken in the optimal Nyquist sampling @ref{Sampling 
theorem}, so the default value for this option is 2.
 But in practice, real images may be over-sampled (usually ground-based images, where you will need to increase the default value) or under-sampled (some space-based images, where you will need to decrease the default value).
@@ -23820,7 +23821,7 @@ The total number of rows is thus unknown, but you can 
be sure that the number of
 @subsubsection MakeCatalog measurements
 
 The final group of options particular to MakeCatalog are those that specify 
which measurements/columns should be written into the final output table.
-The current measurements in MakeCatalog are those which only produce one final 
value for each label (for example its total brightness: a single number).
+The current measurements in MakeCatalog are those which only produce one final 
value for each label (for example, its total brightness: a single number).
 All the different labels' measurements can be written as one column in a final table/catalog that contains other columns for other similar single-number measurements.
 
@@ -24010,7 +24011,7 @@ The brightness (sum of all pixel values), see 
@ref{Brightness flux magnitude}.
 For clumps, the ambient brightness (flux of river pixels around the clump 
multiplied by the area of the clump) is removed, see @option{--riverave}.
 So the sum of all the clumps brightness in the clump catalog will be smaller 
than the total clump brightness in the @option{--clumpbrightness} column of the 
objects catalog.
 
-If no usable pixels are present over the clump or object (for example they are 
all blank), the returned value will be NaN (note that zero is meaningful).
+If no usable pixels are present over the clump or object (for example, they 
are all blank), the returned value will be NaN (note that zero is meaningful).
 
 @item --brightnesserr
 The (@mymath{1\sigma}) error in measuring the brightness of a label (objects 
or clumps).
@@ -24022,14 +24023,14 @@ If you want to ignore those pixels in the error 
measurement, set them to zero (w
 @item --clumpbrightness
 [Objects] The total brightness of the clumps within an object.
 This is simply the sum of the pixels associated with clumps in the object.
-If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
+If no usable pixels are present over the clump or object (for example, they 
are all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --brightnessnoriver
 [Clumps] The Sky (not river) subtracted clump brightness.
 By definition, for the clumps, the average brightness of the rivers 
surrounding it are subtracted from it for a first order accounting for 
contamination by neighbors.
 In cases where you will be calculating the flux brightness difference later 
(one example below) the contamination will be (mostly) removed at that stage, 
which is why this column was added.
 
-If no usable pixels are present over the clump or object (for example they are 
all blank), the stored value will be NaN (note that zero is meaningful).
+If no usable pixels are present over the clump or object (for example, they 
are all blank), the stored value will be NaN (note that zero is meaningful).
 
 @item --mean
 The mean sky subtracted value of pixels within the object or clump.
@@ -24047,7 +24048,7 @@ For clumps, the average river flux is subtracted from 
the sky subtracted median.
 The maximum value of pixels within the object or clump.
 When the label (object or clump) is larger than three pixels, the maximum is actually derived from the mean of the brightest three pixels, not the largest pixel value of the same label.
 This is because noise fluctuations can be very strong in the extreme values of 
the objects/clumps due to Poisson noise (which gets stronger as the mean gets 
higher).
-Simply using the maximum pixel value will create a strong scatter in results 
that depend on the maximum (for example the @option{--fwhm} option also uses 
this value internally).
+Simply using the maximum pixel value will create a strong scatter in results 
that depend on the maximum (for example, the @option{--fwhm} option also uses 
this value internally).
 
 @item --sigclip-number
 The number of elements/pixels in the dataset after sigma-clipping the object 
or clump.
@@ -24202,7 +24203,7 @@ This is because objects can have more than one clump 
(peak with true signal) and
 Also, because of its non-parametric nature, this FWHM does not account for the PSF, and it will be strongly affected by noise if the object is faint/diffuse.
 So when half the maximum value (which can be requested using the @option{--maximum} column) is too close to the local noise level (which can be requested using the @option{--skystd} column), the value returned in this column is meaningless (it is just noise peaks which are randomly distributed over the area).
 You can therefore use the @option{--maximum} and @option{--skystd} columns to 
remove, or flag, unreliable FWHMs.
-For example if a labeled region's maximum is less than 2 times the sky 
standard deviation, the value will certainly be unreliable (half of that is 
@mymath{1\sigma}!).
+For example, if a labeled region's maximum is less than 2 times the sky standard deviation, the value will certainly be unreliable (half of that is @mymath{1\sigma}!).
 For a more reliable value, this fraction should be around 4 (so half the 
maximum is 2@mymath{\sigma}).
 
 @item --halfmaxarea
@@ -24371,7 +24372,7 @@ the same physical object.
 Output will contain one row for all integers between 1 and the largest label in the input (irrespective of their existence in the input image).
 By default, MakeCatalog's output will only contain rows with integers that 
actually corresponded to at least one pixel in the input dataset.
 
-For example if the input's only labeled pixel values are 11 and 13, 
MakeCatalog's default output will only have two rows.
+For example, if the input's only labeled pixel values are 11 and 13, MakeCatalog's default output will only have two rows.
 If you use this option, it will have 13 rows and all the columns corresponding 
to integer identifiers that did not correspond to any pixel will be 0 or NaN 
(depending on context).
 @end table
 
@@ -24399,7 +24400,7 @@ For more, see the description of @option{--spectrum} in 
@ref{MakeCatalog measure
 @cindex Limit, Surface brightness
 When possible, MakeCatalog will also measure the full input's noise level 
(also known as surface brightness limit, see @ref{Quantifying measurement 
limits}).
 Since these measurements are related to the noise and not any particular 
labeled object, they are stored as keywords in the output table.
-Furthermore, they are only possible when a standard deviation image has been 
loaded (done automatically for any column measurement that involves noise, for 
example @option{--sn}, @option{--magnitudeerr} or @option{--skystd}).
+Furthermore, they are only possible when a standard deviation image has been 
loaded (done automatically for any column measurement that involves noise, for 
example, @option{--sn}, @option{--magnitudeerr} or @option{--skystd}).
 But if you just want the surface brightness limit and no noise-related column, 
you can use @option{--forcereadstd}.
 All these keywords start with @code{SBL} (for ``surface brightness limit'') 
and are described below:
 
@@ -24442,7 +24443,7 @@ Therefore, if the output is a text file, two output 
files will be created, endin
 @item --noclumpsort
 Do not sort the clumps catalog based on object ID (only relevant with @option{--clumpscat}).
 This option will benefit the performance@footnote{The performance boost due to 
@option{--noclumpsort} can only be felt when there are a huge number of objects.
-Therefore, by default the output is sorted to avoid miss-understandings or 
bugs in the user's scripts when the user forgets to sort the outputs.} of 
MakeCatalog when it is run on multiple threads @emph{and} the position of the 
rows in the clumps catalog is irrelevant (for example you just want the 
number-counts).
+Therefore, by default the output is sorted to avoid misunderstandings or bugs in the user's scripts when the user forgets to sort the outputs.} of MakeCatalog when it is run on multiple threads @emph{and} the position of the rows in the clumps catalog is irrelevant (for example, you just want the number-counts).
 
 MakeCatalog does all its measurements on each @emph{object} independently and 
in parallel.
 As a result, while it is writing the measurements on each object's clumps, it 
does not know how many clumps there were in previous objects.
@@ -24457,13 +24458,13 @@ $ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
 
 @item --sfmagnsigma=FLT
 Value to multiply with the median standard deviation (from a @command{MEDSTD} 
keyword in the Sky standard deviation image) for estimating the surface 
brightness limit.
-Note that the surface brightness limit is only reported when a standard 
deviation image is read, in other words a column using it is requested (for 
example @option{--sn}) or @option{--forcereadstd} is called.
+Note that the surface brightness limit is only reported when a standard deviation image is read; in other words, when a column using it is requested (for example, @option{--sn}) or @option{--forcereadstd} is called.
 
 This value is a per-pixel value, not per object/clump, and is not found over an area or aperture, like the @mymath{5\sigma} values that are commonly reported as a measure of depth or the upper-limit measurements (see @ref{Quantifying measurement limits}).
 
 @item --sfmagarea=FLT
 Area (in arc-seconds squared) to convert the per-pixel estimation of 
@option{--sfmagnsigma} in the comments section of the output tables.
-Note that the surface brightness limit is only reported when a standard 
deviation image is read, in other words a column using it is requested (for 
example @option{--sn}) or @option{--forcereadstd} is called.
+Note that the surface brightness limit is only reported when a standard deviation image is read; in other words, when a column using it is requested (for example, @option{--sn}) or @option{--forcereadstd} is called.
 
 Note that this is just a unit conversion using the World Coordinate System 
(WCS) information in the input's header.
 It does not actually do any measurements on this area.
@@ -24493,7 +24494,7 @@ For random measurements on any area, please use the 
upper-limit columns of MakeC
 @section Match
 
 Data can come from different telescopes, filters, software and even different configurations for a single software.
-As a result, one of the primary things to do after generating catalogs from 
each of these sources (for example with @ref{MakeCatalog}), is to find which 
sources in one catalog correspond to which in the other(s).
+As a result, one of the primary things to do after generating catalogs from 
each of these sources (for example, with @ref{MakeCatalog}), is to find which 
sources in one catalog correspond to which in the other(s).
 In other words, to `match' the two catalogs with each other.
 
 Gnuastro's Match program is in charge of such operations.
@@ -24518,7 +24519,7 @@ This basic parsing algorithm is very computationally 
expensive:
 If an elliptical aperture is necessary, it can even get more complicated, see 
@ref{Defining an ellipse and ellipsoid}.
 Such operations are not simple, and will consume many cycles of your CPU!
 As a result, this basic algorithm will become terribly slow as your datasets 
grow in size.
-For example when N or M exceed hundreds of thousands (which is common in the 
current days with datasets like the European Space Agency's Gaia mission).
+For example, when N or M exceed hundreds of thousands (which is common these days with datasets like the European Space Agency's Gaia mission).
 Therefore, that basic parsing algorithm will take too much time, and more efficient ways to @emph{find the nearest neighbor} need to be found.
 Gnuastro's Match currently has the following algorithms for finding the nearest neighbor:
 
@@ -24538,7 +24539,7 @@ Therefore, with a single parsing of both 
simultaneously, for each A-point, we ca
 This method has some caveats:
 1) It requires sorting, which can again be slow on large numbers.
 2) It can only be done on a single CPU thread! So it cannot benefit from the 
modern CPUs with many threads.
-3) There is no way to preserve intermediate information for future matches, 
for example this can greatly help when one of the matched datasets is always 
the same.
+3) There is no way to preserve intermediate information for future matches; for example, preserved intermediate information can greatly help when one of the matched datasets is always the same.
 To use this sorting method in Match, use @option{--kdtree=disable}.
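
For example (a sketch, with hypothetical catalogs that have @code{RA} and @code{DEC} columns):

@example
$ astmatch a.fits b.fits --ccol1=RA,DEC --ccol2=RA,DEC \
           --aperture=1/3600 --kdtree=disable
@end example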
 
 @item k-d tree based
@@ -24589,7 +24590,7 @@ The good match (@mymath{B_k} with @mymath{A_i} will 
therefore be lost, and repla
 
 In a single-dimensional match, this bias depends on the sorting of your two 
datasets (leading to different matches if you shuffle your datasets).
 But it will get more complex as you add dimensionality.
-For example catalogs derived from 2D images or 3D cubes, where you have 2 and 
3 different coordinates for each point.
+For example, catalogs derived from 2D images or 3D cubes, where you have 2 and 3 different coordinates for each point.
 
 To address this problem, in Gnuastro (the Match program, or the matching 
functions of the library) similar to above, we first parse over the elements of 
B.
 But we will not associate the first nearest-neighbor with a match!
@@ -24687,7 +24688,7 @@ The input tables can be plain text tables or FITS 
tables, for more see @ref{Tabl
 But other ways of feeding inputs are also supported:
 @itemize
 @item
-The @emph{first} catalog can also come from the standard input (for example a 
pipe that feeds the output of a previous command to Match, see @ref{Standard 
input});
+The @emph{first} catalog can also come from the standard input (for example, a 
pipe that feeds the output of a previous command to Match, see @ref{Standard 
input});
 @item
 When you only want to match one point with another catalog, you can use the 
@option{--coord} option to avoid creating a file for the @emph{second} input 
catalog.
 @end itemize
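
For example (a sketch, assuming a hypothetical @file{catalog.fits} with @code{RA} and @code{DEC} columns), a single point can be matched against a catalog like this:

@example
$ astmatch catalog.fits --ccol1=RA,DEC --coord=1.2345,6.7890 \
           --aperture=1/3600
@end example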
@@ -24775,10 +24776,10 @@ The first character of each string specifies the 
input catalog: @option{a} for t
 The rest of the characters of the string will be directly used to identify the 
proper column(s) in the respective table.
 See @ref{Selecting table columns} for how columns can be specified in Gnuastro.
 
-For example the output of @option{--outcols=a1,bRA,bDEC} will have three 
columns: the first column of the first input, along with the @option{RA} and 
@option{DEC} columns of the second input.
+For example, the output of @option{--outcols=a1,bRA,bDEC} will have three columns: the first column of the first input, along with the @option{RA} and @option{DEC} columns of the second input.
 
 If the string after @option{a} or @option{b} is @option{_all}, then all the 
columns of the respective input file will be written in the output.
-For example the command below will print all the input columns from the first 
catalog along with the 5th column from the second:
+For example, the command below will print all the input columns from the first catalog along with the 5th column from the second:
 
 @example
 $ astmatch a.fits b.fits --outcols=a_all,b5
@@ -24794,11 +24795,11 @@ Combined with regular expressions in large tables, 
this can be a very powerful a
 When @option{--coord} is given, no second catalog will be read.
 The second catalog will be created internally based on the values given to 
@option{--coord}.
 So column names are not defined and you can only request integer column 
numbers that are less than the number of coordinates given to @option{--coord}.
-For example if you want to find the row matching RA of 1.2345 and Dec of 
6.7890, then you should use @option{--coord=1.2345,6.7890}.
+For example, if you want to find the row matching RA of 1.2345 and Dec of 6.7890, then you should use @option{--coord=1.2345,6.7890}.
 But when using @option{--outcols}, you cannot give @code{bRA}, or @code{b25}.
 
 @item With @option{--notmatched}
-Only the column names/numbers should be given (for example 
@option{--outcols=RA,DEC,MAGNITUDE}).
+Only the column names/numbers should be given (for example, 
@option{--outcols=RA,DEC,MAGNITUDE}).
 It is assumed that both input tables have the requested column(s) and that the 
numerical data types of each column in each input (with same name) is the same 
as the corresponding column in the other.
 Therefore if one input has a @code{MAGNITUDE} column with a 32-bit floating 
point type, but the @code{MAGNITUDE} column of the other is 64-bit floating 
point, Match will crash with an error.
 The metadata of the columns will come from the first input.
@@ -24851,13 +24852,13 @@ When this option is called, the output changes in the 
following ways:
 1) when @option{--outcols} is specified, for the second input, it can only 
accept integer numbers that are less than the number of values given to this 
option, see description of that option for more.
 2) By default (when @option{--outcols} is not used), only the matching row of 
the first table will be output (a single file), not two separate files (one for 
each table).
 
-This option is good when you have a (large) catalog and only want to match a 
single coordinate to it (for example to find the nearest catalog entry to your 
desired point).
+This option is good when you have a (large) catalog and only want to match a 
single coordinate to it (for example, to find the nearest catalog entry to your 
desired point).
 With this option, you can write the coordinates on the command-line and thus 
avoid the need to make a single-row file.
 
 @item -a FLT[,FLT[,FLT]]
 @itemx --aperture=FLT[,FLT[,FLT]]
 Parameters of the aperture for matching.
-The values given to this option can be fractions, for example when the 
position columns are in units of degrees, @option{1/3600} can be used to ask 
for one arc-second.
+The values given to this option can be fractions, for example, when the 
position columns are in units of degrees, @option{1/3600} can be used to ask 
for one arc-second.
 The interpretation of the values depends on the requested dimensions (determined from @option{--ccol1} and @option{--ccol2}) and how many values are given to this option.
 
 When multiple objects are found within the aperture, the match is defined
@@ -24877,12 +24878,12 @@ To simply the usage, you can determine the shape 
based on the number of free par
 
 @table @asis
 @item 1 number
-For example @option{--aperture=2}.
+For example, @option{--aperture=2}.
 The aperture will be a circle of the given radius.
 The value will be in the same units as the columns in @option{--ccol1} and @option{--ccol2}.
 
 @item 2 numbers
-For example @option{--aperture=3,4e-10}.
+For example, @option{--aperture=3,4e-10}.
 The aperture will be an ellipse (if the two numbers are different) with the 
respective value along each dimension.
 The numbers are in units of the first and second axis.
 In the example above, the semi-axis value along the first axis will be 3 (in 
units of the first coordinate) and along the second axis will be 
@mymath{4\times10^{-10}} (in units of the second coordinate).
@@ -24890,7 +24891,7 @@ Such values can happen if you are comparing catalogs of 
a spectra for example.
 If more than one object exists in the aperture, the nearest will be found 
along the major axis as described in @ref{Defining an ellipse and ellipsoid}.
 
 @item 3 numbers
-For example @option{--aperture=2,0.6,30}.
+for example, @option{--aperture=2,0.6,30}.
 The aperture will be an ellipse (if the second value is not 1).
 The first number is the semi-major axis, the second is the axis ratio and the 
third is the position angle (in degrees).
 If multiple matches are found within the ellipse, the distance (to find the 
nearest) is calculated along the major axis in the elliptical space, see 
@ref{Defining an ellipse and ellipsoid}.
@@ -24902,19 +24903,19 @@ To simplifythe usage, the shape can be determined 
based on the number of values
 
 @table @asis
 @item 1 number
-For example @option{--aperture=3}.
+For example, @option{--aperture=3}.
 The matching volume will be a sphere of the given radius.
 The value is in the same units as the input coordinates.
 
 @item 3 numbers
-For example @option{--aperture=4,5,6e-10}.
+For example, @option{--aperture=4,5,6e-10}.
 The aperture will be a general ellipsoid with the respective extent along each 
dimension.
 The numbers must be in the same units as each axis.
 This is very similar to the two number case of 2D inputs.
 See there for more.
 
 @item 6 numbers
-For example @option{--aperture=4,0.5,0.6,10,20,30}.
+For example, @option{--aperture=4,0.5,0.6,10,20,30}.
 The numbers represent the full general ellipsoid definition (in any 
orientation).
 For the definition of a general ellipsoid, see @ref{Defining an ellipse and 
ellipsoid}.
 The first number is the semi-major axis.
@@ -25110,7 +25111,7 @@ When we take an image of it, it will `spread' over an 
area.
 To quantify that spread, we can define a `function'.
 This is how the ``point spread function'' or the PSF of an image is defined.
 
-This `spread' can have various causes, for example in ground-based astronomy, 
due to the atmosphere.
+This `spread' can have various causes, for example, in ground-based astronomy, 
due to the atmosphere.
 In practice we can never surpass the `spread' due to the diffraction of the 
telescope aperture (even in Space!).
 Various other effects can also be quantified through a PSF.
 For example, the simple fact that we are sampling in a discrete space, namely 
the pixels, also produces a very small `spread' in the image.
@@ -25252,7 +25253,7 @@ MacArthur et al.@footnote{MacArthur, L. A., S. 
Courteau, and J. A. Holtzman (200
 @cindex Sampling
 A pixel is the ultimate level of accuracy with which we can gather data; we cannot get any more accurate in one image. This is known as sampling in signal processing.
 However, the mathematical profiles which describe our models have infinite 
accuracy.
-Over a large fraction of the area of astrophysically interesting profiles (for 
example galaxies or PSFs), the variation of the profile over the area of one 
pixel is not too significant.
+Over a large fraction of the area of astrophysically interesting profiles (for 
example, galaxies or PSFs), the variation of the profile over the area of one 
pixel is not too significant.
 In such cases, the elliptical radius (@mymath{r_{el}}) of the center of the pixel can be assigned as the final value of the pixel (see @ref{Defining an ellipse and ellipsoid}).
 
 @cindex Integration over pixel
@@ -25420,7 +25421,7 @@ Finally @ref{MakeProfiles output dataset} and 
@ref{MakeProfiles log file} discus
 Besides these, MakeProfiles also supports all the common Gnuastro program options that are discussed in @ref{Common options}, so please flip through them as well for a more comfortable usage.
 
 When building 3D profiles, there are more degrees of freedom.
-Hence, more columns are necessary and all the values related to dimensions 
(for example size of dataset in each dimension and the WCS properties) must 
also have 3 values.
+Hence, more columns are necessary and all the values related to dimensions 
(for example, size of dataset in each dimension and the WCS properties) must 
also have 3 values.
 To allow having an independent set of default values for creating 3D profiles, 
MakeProfiles also installs a @file{astmkprof-3d.conf} configuration file (see 
@ref{Configuration files}).
 You can use this for default 3D profile values.
 For example, if you installed Gnuastro with the prefix @file{/usr/local} (the 
default location, see @ref{Installation directory}), you can benefit from this 
configuration file by running MakeProfiles like the example below.
@@ -25434,7 +25435,7 @@ $ astmkprof --config=/usr/local/etc/astmkprof-3d.conf 
catalog.txt
 @cindex Alias, shell
 @cindex Shell startup
 @cindex Startup, shell
-To further simplify the process, you can define a shell alias in any startup 
file (for example @file{~/.bashrc}, see @ref{Installation directory}).
+To further simplify the process, you can define a shell alias in any startup 
file (for example, @file{~/.bashrc}, see @ref{Installation directory}).
 Assuming that you installed Gnuastro in @file{/usr/local}, you can add this 
line to the startup file (you may put it all in one line, it is broken into two 
lines here for fitting within page limits).
 
 @example
@@ -25462,7 +25463,7 @@ The catalog containing information about each profile 
can be in the FITS ASCII,
 The latter can also be provided using standard input (see @ref{Standard 
input}).
 Its columns can be ordered in any desired manner.
 You can specify which columns belong to which parameters using the set of 
options discussed below.
-For example through the @option{--rcol} and @option{--tcol} options, you can 
specify the column that contains the radial parameter for each profile and its 
truncation respectively.
+For example, through the @option{--rcol} and @option{--tcol} options, you can specify the column that contains the radial parameter for each profile and its truncation respectively.
 See @ref{Selecting table columns} for a thorough discussion on the values to 
these options.
 
 The value for the profile center in the catalog (the @option{--ccol} option) 
can be a floating point number so the profile center can be on any sub-pixel 
position.
@@ -25490,11 +25491,11 @@ The list of options directly related to the input 
catalog columns is shown below
 @item --ccol=STR/INT
 Center coordinate column for each dimension.
 This option must be called two times to define the center coordinates in an 
image.
-For example @option{--ccol=RA} and @option{--ccol=DEC} (along with 
@option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns 
named @option{RA} and @option{DEC} for the Right Ascension and Declination of 
the profile centers.
+For example, @option{--ccol=RA} and @option{--ccol=DEC} (along with @option{--mode=wcs}) will inform MakeProfiles to look into the catalog columns named @option{RA} and @option{DEC} for the Right Ascension and Declination of the profile centers.
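
A minimal sketch (with a hypothetical @file{catalog.fits}):

@example
$ astmkprof catalog.fits --ccol=RA --ccol=DEC --mode=wcs
@end example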
 
 @item --fcol=INT/STR
 The functional form of the profile with one of the values below depending on 
the desired profile.
-The column can contain either the numeric codes (for example `@code{1}') or 
string characters (for example `@code{sersic}').
+The column can contain either the numeric codes (for example, `@code{1}') or 
string characters (for example, `@code{sersic}').
 The numeric codes are easier to use in scripts which generate catalogs with 
hundreds or thousands of profiles.
 
 The string format can be easier when the catalog is to be written/checked by 
hand/eye before running MakeProfiles.
@@ -25601,7 +25602,7 @@ If @option{--tunitinp} is given, this value is 
interpreted in units of pixels (p
 
 The profile parameters that differ between each created profile are specified 
through the columns in the input catalog and described in @ref{MakeProfiles 
catalog}.
 Besides those, there are general settings for some profiles that do not differ between one profile and another; they are a property of the general process.
-For example how many random points to use in the monte-carlo integration, this 
value is fixed for all the profiles.
+For example, how many random points to use in the Monte Carlo integration; this value is fixed for all the profiles.
 The options described in this section are for configuring such properties.
 
 @table @option
@@ -25644,7 +25645,7 @@ Another useful application of this option is to create 
labeled elliptical or cir
 To do this, set the value in the magnitude column to the label you want for 
this profile.
 This labeled image can then be used in combination with NoiseChisel's output 
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see 
@ref{MakeCatalog}).
 
-Alternatively, if you want to mark regions of the image (for example with an 
elliptical circumference) and you do not want to use NaN values (as explained 
above) for some technical reason, you can get the minimum or maximum value in 
the image @footnote{
+Alternatively, if you want to mark regions of the image (for example, with an 
elliptical circumference) and you do not want to use NaN values (as explained 
above) for some technical reason, you can get the minimum or maximum value in 
the image @footnote{
 The minimum will give a better result, because the maximum can be too high 
compared to most pixels in the image, making it harder to display.}
 using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude 
column along with this option for all the profiles.
 
@@ -25706,7 +25707,7 @@ The interval's maximum radius.
 The value to be used for pixels within the given interval (including 
NaN/blank).
 @end table
 
-For example let's assume you have the radial profile below in a file called 
@file{radial.txt}.
+For example, let's assume you have the radial profile below in a file called @file{radial.txt}.
 The first column is the larger interval radius (in units of pixels) and the 
second column is the value in that interval:
 
 @example
@@ -25727,7 +25728,7 @@ asttable radial.txt -c'arith $1 1 -' -c1,2 -ocustom.fits
 @end example
 
 @noindent
-In case the intervals are different from 1 (for example 0.5), change the 
@code{$1 1 -} to @code{$1 0.5 -}.
+In case the intervals are different from 1 (for example, 0.5), change the 
@code{$1 1 -} to @code{$1 0.5 -}.
 On a side-note, Gnuastro has features to extract the radial profile of an 
object from the image, see @ref{Generate radial profile}.
 
 By default, once a 2D image is constructed for the radial profile, it will be 
scaled such that its total magnitude corresponds to the value in the magnitude 
column (@option{--mcol}) of the main input catalog.
@@ -25742,7 +25743,7 @@ Multiple files can be given to this option (separated 
by a comma), and this opti
 If the HDUs of the images are different, you can use @option{--customimghdu} (described below).
 
 Through the ``radius'' column, MakeProfiles will know which one of the images 
given to this option should be used in each row.
-For example let's assume your input catalog (@file{cat.fits}) has the 
following contents (output of first command below), and you call MakeProfiles 
like the second command below to insert four profiles into the background 
@file{back.fits} image.
+For example, let's assume your input catalog (@file{cat.fits}) has the following contents (output of first command below), and you call MakeProfiles like the second command below to insert four profiles into the background @file{back.fits} image.
 
 The first profile below is Sersic (with an @option{--fcol}, or 4-th column, 
code of @code{1}).
 So MakeProfiles builds the pixels of the first profile, and all column values 
are meaningful.
@@ -25800,7 +25801,7 @@ But replace the values.
 
 By default, when two profiles overlap, the final pixel value is the sum of all 
the profiles that overlap on that pixel.
 This is the expected situation when dealing with physical object profiles like 
galaxies or stars/PSF.
-However, when MakeProfiles is used to build integer labeled images (for 
example in @ref{Aperture photometry}), this is not the expected situation: the 
sum of two labels will be a new label.
+However, when MakeProfiles is used to build integer labeled images (for 
example, in @ref{Aperture photometry}), this is not the expected situation: the 
sum of two labels will be a new label.
 With this option, the pixels are not added but the largest (maximum) value 
over that pixel is used.
 Because the maximum operator is independent of the order of values, the output 
is also thread-safe.
 
@@ -25808,7 +25809,7 @@ Because the maximum operator is independent of the 
order of values, the output i
 
 @node MakeProfiles output dataset, MakeProfiles log file, MakeProfiles profile 
settings, Invoking astmkprof
 @subsubsection MakeProfiles output dataset
-MakeProfiles takes an input catalog uses basic properties that are defined 
there to build a dataset, for example a 2D image containing the profiles in the 
catalog.
+MakeProfiles takes an input catalog and uses the basic properties that are defined there to build a dataset, for example, a 2D image containing the profiles in the catalog.
 In @ref{MakeProfiles catalog} and @ref{MakeProfiles profile settings}, the 
catalog and profile settings were discussed.
 The options of this section allow you to configure the output dataset (or the canvas that will host the built profiles).
 
@@ -25834,7 +25835,7 @@ The header data unit (HDU) of the file given to 
@option{--background}.
 When an input image is specified (with the @option{--background} option), this option will set all its pixels to 0.0 immediately after reading it into memory.
 Effectively, this will allow you to use all its properties (described under 
the @option{--background} option), without having to worry about the pixel 
values.
 
-@option{--clearcanvas} can come in handy in many situations, for example if 
you want to create a labeled image (segmentation map) for creating a catalog 
(see @ref{MakeCatalog}).
+@option{--clearcanvas} can come in handy in many situations, for example, if 
you want to create a labeled image (segmentation map) for creating a catalog 
(see @ref{MakeCatalog}).
 In other cases, you might have modeled the objects in an image and want to 
create them on the same frame, but without the original pixel values.
 
 @item -E STR/INT,FLT[,FLT,[...]]
@@ -25847,7 +25848,7 @@ The Gaussian function needs two parameters: radial and 
truncation radius.
 The point function does not need any parameters and flat and circumference 
profiles just need one parameter (truncation radius).
 
 The PSF or kernel is a unique (and highly constrained) type of profile: the 
sum of its pixels must be one, its center must be the center of the central 
pixel (in an image with an odd number of pixels on each side), and commonly it 
is circular, so its axis ratio and position angle are one and zero respectively.
-Kernels are commonly necessary for various data analysis and data manipulation 
steps (for example see @ref{Convolve}, and @ref{NoiseChisel}.
+Kernels are commonly necessary for various data analysis and data manipulation 
steps (for example, see @ref{Convolve} and @ref{NoiseChisel}).
 Because of this it is inconvenient to define a catalog with one row and many 
zero valued columns (for all the non-necessary parameters).
 Hence, with this option, it is possible to create a kernel with MakeProfiles 
without the need to create a catalog.
 Here are some examples:
@@ -25862,10 +25863,10 @@ the FWHM.
 @end table
 
 This option may also be used to create a 3D kernel.
-To do that, two small modifications are necessary: add a @code{-3d} (or 
@code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number 
(axis-ratio along the third dimension) to the end of the parameters for all 
profiles except @code{point}.
+To do that, two small modifications are necessary: add a @code{-3d} (or 
@code{-3D}) to the profile name (for example, @code{moffat-3d}) and add a 
number (axis-ratio along the third dimension) to the end of the parameters for 
all profiles except @code{point}.
 The main reason behind providing an axis ratio in the third dimension is that 
in 3D astronomical datasets, commonly the third dimension does not have the 
same nature (units/sampling) as the first and second.
 
-For example in IFU (optical) or Radio datacubes, the first and second 
dimensions are commonly spatial/angular positions (like RA and Dec) but the 
third dimension is wavelength or frequency (in units of Angstroms for Herz).
+For example, in IFU (optical) or Radio datacubes, the first and second 
dimensions are commonly spatial/angular positions (like RA and Dec) but the 
third dimension is wavelength or frequency (in units of Angstroms or Hertz).
 Because of this different nature (which also affects the processing), it may 
be necessary for the kernel to have a different extent in that direction.
 
 If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will 
be a spheroid.
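For instance, the command below is a rough sketch of building a 3D Gaussian
kernel: @code{2} and @code{3} stand for the two Gaussian parameters mentioned
above and @code{0.5} for the assumed axis ratio along the third dimension
(all values are illustrative):

@example
$ astmkprof --kernel=gaussian-3d,2,3,0.5 --output=kernel.fits
@end example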
@@ -25891,7 +25892,7 @@ Just call the @option{--individual} and 
@option{--nomerged} options to make sure
 @itemx --mergedsize=INT,INT
 The number of pixels along each axis of the output, in FITS order.
 This is before over-sampling.
-For example if you call MakeProfiles with @option{--mergedsize=100,150 
--oversample=5} (assuming no shift due for later convolution), then the final 
image size along the first axis will be 500 by 750 pixels.
+For example, if you call MakeProfiles with @option{--mergedsize=100,150 
--oversample=5} (assuming no shift due to later convolution), then the final 
image size will be 500 by 750 pixels along the first and second axes 
respectively.
 Fractions are acceptable as values for each dimension, however, they must 
reduce to an integer, so @option{--mergedsize=150/3,300/3} is acceptable but 
@option{--mergedsize=150/4,300/4} is not.
 
 When viewing a FITS image in DS9, the first FITS dimension is in the 
horizontal direction and the second is vertical.
@@ -25918,7 +25919,7 @@ To have a final PSF in your image, make a point profile 
where you want the PSF a
 @cindex Build individual profiles
 If this option is called, each profile is created in a separate FITS file 
within the same directory as the output, with the row number of the profile 
(starting from zero) in the name.
 The file for each row's profile will be in the same directory as the final 
combined image of all the profiles and will have the final image's name as a 
suffix.
-So for example if the final combined image is named 
@file{./out/fromcatalog.fits}, then the first profile that will be created with 
this option will be named @file{./out/0_fromcatalog.fits}.
+So for example, if the final combined image is named 
@file{./out/fromcatalog.fits}, then the first profile that will be created with 
this option will be named @file{./out/0_fromcatalog.fits}.
 
 Since each image only has one full profile out to the truncation radius, the 
profile is centered, and so only the sub-pixel position of the profile center 
is important for the outputs of this option.
 The output will have an odd number of pixels.
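As a hypothetical demonstration (assuming @file{cat.fits} has two rows and the
@file{./out} directory already exists), a run and its resulting files might
look like this:

@example
$ astmkprof cat.fits --individual --output=./out/fromcatalog.fits
$ ls ./out
0_fromcatalog.fits  1_fromcatalog.fits  fromcatalog.fits
@end example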
@@ -25943,7 +25944,7 @@ See Section 8 of 
@url{https://doi.org/10.1051/0004-6361/201015362, Pence et al [
 The description in the FITS standard (link above) only touches the tip of the 
iceberg.
 To learn more please see @url{https://doi.org/10.1051/0004-6361:20021326, 
Greisen and Calabretta [2002]}, 
@url{https://doi.org/10.1051/0004-6361:20021327, Calabretta and Greisen 
[2002]}, @url{https://doi.org/10.1051/0004-6361:20053818, Greisen et al. 
[2006]}, and 
@url{http://www.atnf.csiro.au/people/mcalabre/WCS/dcs_20040422.pdf, Calabretta 
et al.}}.
 
-If you look into the headers of a FITS image with WCS for example you will see 
all these names but in uppercase and with numbers to represent the dimensions, 
for example @code{CRPIX1} and @code{PC2_1}.
+If you look into the headers of a FITS image with WCS, for example, you will 
see all these names but in uppercase and with numbers to represent the 
dimensions, for example, @code{CRPIX1} and @code{PC2_1}.
 You can see the FITS headers with Gnuastro's @ref{Fits} program using a 
command like this: @command{$ astfits -p image.fits}.
 
 If the values given to any of these options do not correspond to the number 
of dimensions in the output dataset, then no WCS information will be added.
@@ -25973,12 +25974,12 @@ Fractions are acceptable for the values of this 
option.
 The PC matrix of the WCS rotation, see the FITS standard (link above) to 
better understand the PC matrix.
 
 @item --cunit=STR,STR
-The units of each WCS axis, for example @code{deg}.
+The units of each WCS axis, for example, @code{deg}.
 Note that these values are part of the FITS standard (link above).
 MakeProfiles will not complain if you use non-standard values, but later usage 
of them might cause trouble.
 
 @item --ctype=STR,STR
-The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
+The type of each WCS axis, for example, @code{RA---TAN} and @code{DEC--TAN}.
 Note that these values are part of the FITS standard (link above).
 MakeProfiles will not complain if you use non-standard values, but later usage 
of them might cause trouble.
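For instance, a minimal sketch that sets both of these options for an
equatorial tangential projection in degrees:

@example
$ astmkprof cat.fits --ctype=RA---TAN,DEC--TAN --cunit=deg,deg
@end example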
 
@@ -26025,7 +26026,7 @@ If an individual image was created, this column will 
have a value of @code{1}, o
 @section MakeNoise
 
 @cindex Noise
-Real data are always buried in noise, therefore to finalize a simulation of 
real data (for example to test our observational algorithms) it is essential to 
add noise to the mock profiles created with MakeProfiles, see 
@ref{MakeProfiles}.
+Real data are always buried in noise; therefore, to finalize a simulation of 
real data (for example, to test our observational algorithms) it is essential 
to add noise to the mock profiles created with MakeProfiles, see 
@ref{MakeProfiles}.
 Below, the general principles and concepts to help understand how noise is 
quantified are discussed.
 MakeNoise options and arguments are then discussed in @ref{Invoking astmknoise}.
 
@@ -26066,7 +26067,7 @@ With the very accurate electronics used in today's 
detectors, photon counting no
 To understand this noise (error in counting) and its effect on the images of 
astronomical targets, let's start by reviewing how a distribution produced by 
counting can be modeled as a parametric function.
 
 Counting is an inherently discrete operation, which can only produce positive 
integer outputs (including zero).
-For example we cannot count @mymath{3.2} or @mymath{-2} of anything.
+For example, we cannot count @mymath{3.2} or @mymath{-2} of anything.
 We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
 The distribution of values, as a result of counting efforts is formally known 
as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson 
distribution}.
 It is associated to Sim@'eon Denis Poisson, because he discussed it while 
working on the number of wrongful convictions in court cases in his 1837 
book@footnote{[From Wikipedia] Poisson's result was also derived in a previous 
study by Abraham de Moivre in 1711.
@@ -26188,14 +26189,14 @@ Quoting from the GNU Scientific Library manual, ``If 
you do not own it, you shou
 @cindex Pseudo-random numbers
 @cindex Numbers, pseudo-random
 Using only software, we can only produce what is called a pseudo-random 
sequence of numbers.
-A true random number generator is a hardware (let's assume we have made sure 
it has no systematic biases), for example throwing dice or flipping coins 
(which have remained from the ancient times).
+A true random number generator is a hardware device (let's assume we have made 
sure it has no systematic biases), for example, throwing dice or flipping coins 
(which have remained with us from ancient times).
 More modern hardware methods use atmospheric noise, thermal noise or other 
types of external electromagnetic or quantum phenomena.
 All pseudo-random number generators (software) require a seed to be the basis 
of the generation.
 The advantage of having a seed is that if you specify the same seed for 
multiple runs, you will get an identical sequence of random numbers which 
allows you to reproduce the same final noised image.
 
 @cindex Environment variables
 @cindex GNU Scientific Library
-The programs in GNU Astronomy Utilities (for example MakeNoise or 
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
+The programs in GNU Astronomy Utilities (for example, MakeNoise or 
MakeProfiles) use the GNU Scientific Library (GSL) to generate random numbers.
 GSL allows the user to set the random number generator through environment 
variables, see @ref{Installation directory} for an introduction to environment 
variables.
 In the chapter titled ``Random Number Generation'' they have fully explained 
the various random number generators that are available (there are a lot of 
them!).
 Through the two environment variables @code{GSL_RNG_TYPE} and 
@code{GSL_RNG_SEED} you can specify the generator and its seed respectively.
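For example, the commands below are a sketch of fixing the generator and seed
before a run of MakeNoise (@code{ranlxs2} is one of the generator names GSL
recognizes; @option{--envseed} tells the program to use the environment's
seed, and the file names are hypothetical):

@example
$ export GSL_RNG_TYPE=ranlxs2
$ export GSL_RNG_SEED=1
$ astmknoise --envseed mock.fits --output=noised.fits
@end example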
@@ -26359,7 +26360,7 @@ This option will be most useful if the input images 
were of integer types.
 @node High-level calculations, Installed scripts, Data modeling, Top
 @chapter High-level calculations
 
-After the reduction of raw data (for example with the programs in @ref{Data 
manipulation}) you will have reduced images/data ready for processing/analyzing 
(for example with the programs in @ref{Data analysis}).
+After the reduction of raw data (for example, with the programs in @ref{Data 
manipulation}) you will have reduced images/data ready for processing/analyzing 
(for example, with the programs in @ref{Data analysis}).
 But the processed/analyzed data (or catalogs) are still not enough to derive 
any scientific result.
 Even higher-level analysis is still needed to convert the observed magnitudes, 
sizes or volumes into physical quantities that we associate with each catalog 
entry or detected object, which is the purpose of the tools in this section.
 
@@ -26392,7 +26393,7 @@ There are many books thoroughly deriving and proving 
all the equations with all
 @node Distance on a 2D curved space, Extending distance concepts to 3D, 
CosmicCalculator, CosmicCalculator
 @subsection Distance on a 2D curved space
 
-The observations to date (for example the Planck 2015 results), have not 
measured@footnote{The observations are interpreted under the assumption of 
uniform curvature.
+The observations to date (for example, the Planck 2015 results) have not 
measured@footnote{The observations are interpreted under the assumption of 
uniform curvature.
 For a relativistic alternative to dark energy (and maybe also some part of 
dark matter), non-uniform curvature may even be more critical, but that is 
beyond the scope of this brief explanation.} the presence of significant 
curvature in the universe.
 However to be generic (and allow its measurement if it does in fact exist), it 
is very important to create a framework that allows non-zero uniform curvature.
 However, this section is not intended to be a fully thorough and 
mathematically complete derivation of these concepts.
@@ -26620,7 +26621,7 @@ $ astcosmiccal -l0.7 -m0.3 -z2.1
 $ astcosmiccal --obsline=lyalpha,4000 --listlinesatz
 @end example
 
-The input parameters (for example current matter density, etc) can be given as 
command-line options or in the configuration files, see @ref{Configuration 
files}.
+The input parameters (for example, current matter density, etc) can be given 
as command-line options or in the configuration files, see @ref{Configuration 
files}.
 For a definition of the different parameters, please see the sections prior to 
this.
 If no redshift is given, CosmicCalculator will just print its input parameters 
and abort.
 For a full list of the input options, please see @ref{CosmicCalculator input 
options}.
@@ -26687,7 +26688,7 @@ Wavelengths are assumed to be in Angstroms.
 The first argument identifies the line.
 It can be one of the standard names below, or any rest-frame wavelength in 
Angstroms.
 The second argument is the observed wavelength of that line.
-For example @option{--obsline=lyalpha,6000} is the same as 
@option{--obsline=1215.64,6000}.
+For example, @option{--obsline=lyalpha,6000} is the same as 
@option{--obsline=1215.64,6000}.
 
 The pre-defined names are listed below, sorted from red (longer wavelength) to 
blue (shorter wavelength).
 You can get this list on the command-line with the @option{--listlines} 
option.
@@ -26862,7 +26863,7 @@ You can get this list on the command-line with the 
@option{--listlines}.
 @subsubsection CosmicCalculator basic cosmology calculations
 By default, when no specific calculations are requested, CosmicCalculator will 
print a complete set of all its calculations (one line for each calculation, 
see @ref{Invoking astcosmiccal}).
 The full list of calculations can be useful when you do not want any specific 
value, but just a general view.
-In other contexts (for example in a batch script or during a discussion), you 
know exactly what you want and do not want to be distracted by all the extra 
information.
+In other contexts (for example, in a batch script or during a discussion), you 
know exactly what you want and do not want to be distracted by all the extra 
information.
 
 You can use any number of the options described below in any order.
 When any of these options are requested, CosmicCalculator's output will just 
be a single line with a single space between the (possibly) multiple values.
@@ -26885,7 +26886,7 @@ vol=$(astcosmiccal --redshift=$z --volume)
 In a script, this operation might be necessary for a large number of objects 
(for example, the galaxies in a catalog).
 So the fact that all the other default calculations are ignored will also help 
you get to your result faster.
 
-If you are indeed dealing with many (for example thousands) of redshifts, 
using CosmicCalculator is not the best/fastest solution.
+If you are indeed dealing with many redshifts (for example, thousands), 
using CosmicCalculator is not the best/fastest solution.
 This is because it has to go through all the configuration files and 
preparations for each invocation.
 To get the best efficiency (least overhead), we recommend using Gnuastro's 
cosmology library (see @ref{Cosmology library}).
 CosmicCalculator also calls the library functions defined there for its 
calculations, so you get the same result with no overhead.
@@ -26909,7 +26910,7 @@ In case you have forgot the units, you can use the 
@option{--help} option which
 @itemx --usedredshift
 The redshift that was used in this run.
 In many cases this is the main input parameter to CosmicCalculator, but it is 
useful in others.
-For example in combination with @option{--obsline} (where you give an observed 
and rest-frame wavelength and would like to know the redshift) or with 
@option{--velocity} (where you specify the velocity instead of redshift).
+For example, in combination with @option{--obsline} (where you give an 
observed and rest-frame wavelength and would like to know the redshift) or with 
@option{--velocity} (where you specify the velocity instead of redshift).
 Another example is when you run CosmicCalculator in a loop, while changing the 
redshift and you want to keep the redshift value with the resulting calculation.
 
 @item -Y
@@ -26983,7 +26984,7 @@ At different redshifts, observed spectral lines are 
shifted compared to their re
 Although this relation is very simple and can be done for one line in the head 
(or a simple calculator!), it slowly becomes tiring when dealing with a lot of 
lines or redshifts, or some precision is necessary.
 The options in this section are thus provided to greatly simplify usage of 
this simple equation, and also help by storing a list of pre-defined 
spectral line wavelengths.
 
-For example if you want to know the wavelength of the @mymath{H\alpha} line 
(at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000 
Angstroms, you can call CosmicCalculator like the first example below.
+For example, if you want to know the wavelength of the @mymath{H\alpha} line 
(at 6562.8 Angstroms in rest frame), when @mymath{Ly\alpha} is at 8000 
Angstroms, you can call CosmicCalculator like the first example below.
 And if you want the wavelength of all pre-defined spectral lines at this 
redshift, you can use the second command.
 
 @example
@@ -27001,7 +27002,7 @@ List the pre-defined rest frame spectral line 
wavelengths and their names on sta
 When this option is given, other operations on the command-line will be 
ignored.
 This is convenient when you forget the specific name of the spectral line used 
within Gnuastro, or when you forget the exact wavelength of a certain line.
 
-These names can be used with the options that deal with spectral lines, for 
example @option{--obsline} and @option{--lineatz} (@ref{CosmicCalculator basic 
cosmology calculations}).
+These names can be used with the options that deal with spectral lines, for 
example, @option{--obsline} and @option{--lineatz} (@ref{CosmicCalculator basic 
cosmology calculations}).
 
 The format of the output list is a two-column table, with Gnuastro's text 
table format (see @ref{Gnuastro text table format}).
 Therefore, if you are only looking for lines in a specific range, you can pipe 
the output into Gnuastro's table program and use its @option{--range} option on 
the @code{wavelength} (first) column.
@@ -27013,7 +27014,7 @@ $ astcosmiccal --listlines \
 @end example
 
 @noindent
-And if you want to use the list later and have it as a table in a file, you 
can easily add the @option{--output} (or @option{-o}) option to the 
@command{asttable} command, and specify the filename, for example 
@option{--output=lines.fits} or @option{--output=lines.txt}.
+And if you want to use the list later and have it as a table in a file, you 
can easily add the @option{--output} (or @option{-o}) option to the 
@command{asttable} command, and specify the filename, for example, 
@option{--output=lines.fits} or @option{--output=lines.txt}.
 
 @item --listlinesatz
 Similar to @option{--listlines} (above), but the printed wavelength is not in 
the rest frame, but redshifted to the given redshift.
@@ -27041,7 +27042,7 @@ In the latter (when a number is given), the returned 
value is the same units of
 
 Gnuastro's programs (introduced in previous chapters) are designed to be 
highly modular and thus contain lower-level operations on the data.
 However, certain higher-level operations are also shared between many 
contexts.
-For example a sequence of calls to multiple Gnuastro programs, or a special 
way of running a program and treating the output.
+For example, a sequence of calls to multiple Gnuastro programs, or a special 
way of running a program and treating the output.
 To facilitate such higher-level data analysis, Gnuastro also installs some 
scripts on your system with the (@code{astscript-}) prefix (in contrast to the 
other programs that only have the @code{ast} prefix).
 
 @cindex GNU Bash
@@ -27055,7 +27056,7 @@ Compilation optimizes the steps into the low-level 
hardware CPU instructions/lan
 Because compiled programs do not need an interpreter like Bash on every run, 
they are much faster and more independent than scripts.
 To read the source code of the programs, look into the @file{bin/progname} 
directory of Gnuastro's source (@ref{Downloading the source}).
 If you would like to read more about why C was chosen for the programs, please 
see @ref{Why C}.}.
-For example with this command (just replace @code{nano} with your favorite 
text editor, like @command{emacs} or @command{vim}):
+For example, with this command (just replace @code{nano} with your favorite 
text editor, like @command{emacs} or @command{vim}):
 
 @example
 $ nano $(which astscript-NAME)
@@ -27086,8 +27087,8 @@ The output of @option{--help} is not configurable like 
the programs (see @ref{--
 @item
 @cindex GNU AWK
 @cindex GNU SED
-The scripts will commonly use your installed shell and other basic 
command-line tools (for example AWK or SED).
-Different systems have different versions and implementations of these basic 
tools (for example GNU/Linux systems use GNU Bash, GNU AWK and GNU SED which 
are far more advanced and up to date then the minimalist AWK and SED of most 
other systems).
+The scripts will commonly use your installed shell and other basic 
command-line tools (for example, AWK or SED).
+Different systems have different versions and implementations of these basic 
tools (for example, GNU/Linux systems use GNU Bash, GNU AWK and GNU SED, which 
are far more advanced and up to date than the minimalist AWK and SED of most 
other systems).
 Therefore, unexpected errors in these tools might come up when you run these 
scripts on non-GNU/Linux operating systems.
 If you do confront such strange errors, please submit a bug report so we fix 
it as soon as possible (see @ref{Report a bug}).
 
@@ -27113,7 +27114,7 @@ However, the FITS standard's date format 
(@code{YYYY-MM-DDThh:mm:ss.ddd}) is bas
 Dates that are stored in this format are complicated for automatic processing: 
a night starts in the final hours of one calendar day, and extends to the early 
hours of the next calendar day.
 As a result, to identify datasets from one night, we commonly need to search 
for two dates.
 However, calendar peculiarities can make this identification very difficult.
-For example when an observation is done on the night separating two months 
(like the night starting on March 31st and going into April 1st), or two years 
(like the night starting on December 31st 2018 and going into January 1st, 
2019).
+For example, when an observation is done on the night separating two months 
(like the night starting on March 31st and going into April 1st), or two years 
(like the night starting on December 31st 2018 and going into January 1st, 
2019).
 To account for such situations, it is necessary to keep track of how many days 
are in a month, and leap years, etc.
 
 @cindex Unix epoch time
@@ -27131,8 +27132,8 @@ Here are some examples
 If you need to copy the files, but only need a single extension (not the whole 
file), you can add a step just before the making of the symbolic links, or 
copies, and change it to only copy a certain extension of the FITS file using 
the Fits program's @option{--copy} option, see @ref{HDU information and 
manipulation}.
 
 @item
-If you need to classify the files with finer detail (for example the purpose 
of the dataset), you can add a step just before the making of the symbolic 
links, or copies, to specify a file-name prefix based on other certain keyword 
values in the files.
-For example when the FITS files have a keyword to specify if the dataset is a 
science, bias, or flat-field image.
+If you need to classify the files with finer detail (for example, the purpose 
of the dataset), you can add a step just before the making of the symbolic 
links, or copies, to specify a file-name prefix based on other certain keyword 
values in the files.
+For example, when the FITS files have a keyword to specify if the dataset is a 
science, bias, or flat-field image.
 You can read it and add a @code{sci-}, @code{bias-}, or @code{flat-} to the 
created file (after the @option{--prefix}) automatically.
 
 For example, let's assume the observing mode is stored in the hypothetical 
@code{MODE} keyword, which can have three values of @code{BIAS-IMAGE}, 
@code{SCIENCE-IMAGE} and @code{FLAT-EXP}.
@@ -27207,7 +27208,7 @@ $ ls
 img-n1-1.fits img-n1-2.fits img-n2-1.fits ...
 @end example
 
-The outputs can be placed in a different (already existing) directory by 
including that directory's name in the @option{--prefix} value, for example 
@option{--prefix=sorted/img-} will put them all under the @file{sorted} 
directory.
+The outputs can be placed in a different (already existing) directory by 
including that directory's name in the @option{--prefix} value, for example, 
@option{--prefix=sorted/img-} will put them all under the @file{sorted} 
directory.
 
 This script can be configured like all Gnuastro's programs (through 
command-line options, see @ref{Common options}), with some minor differences 
that are described in @ref{Installed scripts}.
 The particular options to this script are listed below:
@@ -27226,7 +27227,7 @@ The keyword name that contains the FITS date format to 
classify/sort by.
 @itemx --hour=FLT
 The hour that defines the next ``night''.
 By default, all times before 11:00 a.m. are considered to belong to the 
previous calendar night.
-If a sub-hour value is necessary, it should be given in units of hours, for 
example @option{--hour=9.5} corresponds to 9:30a.m.
+If a sub-hour value is necessary, it should be given in units of hours, for 
example, @option{--hour=9.5} corresponds to 9:30 a.m.
 
 @cartouche
 @noindent
@@ -27240,7 +27241,7 @@ It is possible to take this into account by setting the 
@option{--hour} option t
 
 For example, consider a set of images taken in Auckland (New Zealand, UTC+12) 
during different nights.
 If you want to classify these images by night, you have to know at which time 
(in UTC time) the Sun rises (or any other separator/definition of a different 
night).
-For example if your observing night finishes before 9:00a.m in Auckland, you 
can use @option{--hour=21}.
+For example, if your observing night finishes before 9:00 a.m. in Auckland, 
you can use @option{--hour=21}.
 This is because in Auckland the local time of 9:00 corresponds to 21:00 UTC.
 @end cartouche
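For instance, a minimal sketch for the Auckland scenario above (assuming the
dates are in the standard @code{DATE-OBS} keyword):

@example
$ astscript-sort-by-night --key=DATE-OBS --hour=21 *.fits
@end example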
 
@@ -27280,7 +27281,7 @@ See the description of @option{--link} for how it is 
used.
 For example, with @option{--prefix=img-}, all the created file names in the 
current directory will start with @code{img-}, making outputs like 
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
 
 @option{--prefix} can also be used to store the links/copies in another 
directory relative to the directory this script is being run (it must already 
exist).
-For example @code{--prefix=/path/to/processing/img-} will put all the 
links/copies in the @file{/path/to/processing} directory, and the files (in 
that directory) will all start with @file{img-}.
+For example, @code{--prefix=/path/to/processing/img-} will put all the 
links/copies in the @file{/path/to/processing} directory, and the files (in 
that directory) will all start with @file{img-}.
 
 @item --stdintimeout=INT
 Number of microseconds to wait for standard input within this script.
@@ -27471,13 +27472,13 @@ By default, it is @option{--positionangle=0}, which 
means that the semi-major ax
 Limit the profile to the given azimuthal angle range (two numbers given to 
this option, in degrees, from 0 to 360) from the major axis (defined by 
@option{--positionangle}).
 The radial profile will therefore be created on a wedge-like shape, not the 
full circle/ellipse.
 
-If the first angle is @emph{smaller} than the second (for example 
@option{--azimuth=10,80}), the region between, or @emph{inside}, the two angles 
will be used.
-Otherwise (for example @option{--azimuth=80,10}), the region @emph{outside} 
the two angles will be used.
-The latter case can be useful when you want to ignore part of the 2D shape 
(for example due to a bright star that can be contaminating it).
+If the first angle is @emph{smaller} than the second (for example, 
@option{--azimuth=10,80}), the region between, or @emph{inside}, the two angles 
will be used.
+Otherwise (for example, @option{--azimuth=80,10}), the region @emph{outside} 
the two angles will be used.
+The latter case can be useful when you want to ignore part of the 2D shape 
(for example, due to a bright star that can be contaminating it).
 
 You can visually see the shape of the region used by running this script with 
@option{--keeptmp} and viewing the @file{values.fits} and @file{apertures.fits} 
files of the temporary directory with a FITS image viewer like @ref{SAO DS9}.
 You can use @ref{Viewing FITS file contents with DS9 or TOPCAT} to open them 
together in one instance of DS9, with both frames matched and locked (for easy 
comparison in case you want to zoom-in or out).
-For example see the commands below (based on your target object, just change 
the image name, center, position angle and etc):
+For example, see the commands below (based on your target object, just change 
the image name, center, position angle, etc.):
 
 @example
 ## Generate the radial profile
@@ -27501,7 +27502,7 @@ For a full list of MakeCatalog's measurements, please 
run @command{astmkcatalog
 Multiple values can be given to this option, each separated by a comma.
 This option can also be called multiple times.
 
-Some measurements by MakeCatalog require a per-pixel sky standard deviation 
(for example magnitude error or S/N).
+Some measurements by MakeCatalog require a per-pixel sky standard deviation 
(for example, magnitude error or S/N).
 Therefore when asking for such measurements, use the @option{--instd} option 
(described below) to specify the per-pixel sky standard deviation over each 
pixel.
 For other measurements like the magnitude or surface brightness, MakeCatalog 
will need a Zeropoint, which you can set with the @option{--zeropoint} option.
 
@@ -27536,7 +27537,7 @@ When this option is called, it is passed directly to 
Crop, therefore the zero-va
 @item -v INT
 @itemx --oversample=INT
 Oversample the input dataset by the factor given to this option.
-Therefore if you set @option{--rmax=20} for example and 
@option{--oversample=5}, your output will have 100 rows (without 
@option{--oversample} it will only have 20 rows).
+Therefore, if you set @option{--rmax=20} and @option{--oversample=5} for 
example, your output will have 100 rows (without @option{--oversample} it 
will only have 20 rows).
 Unless the object is heavily undersampled (the pixels are larger than the 
actual object), this method provides a much more accurate result and there are 
a sufficient number of pixels to get the profile accurately.
 
 @item -u INT
@@ -27549,7 +27550,7 @@ This option is good to measure over a larger number of 
pixels to improve the mea
 @item -i FLT/STR
 @itemx --instd=FLT/STR
 Sky standard deviation as a single number (FLT) or as the filename (STR) 
containing the image with the std value for each pixel (the HDU within the file 
should be given to the @option{--stdhdu} option mentioned below).
-This is only necessary when the requested measurement (value given to 
@option{--measure}) by MakeCatalog needs the Standard deviation (for example 
the signal-to-noise ratio or magnitude error).
+This is only necessary when the requested measurement (value given to 
@option{--measure}) by MakeCatalog needs the Standard deviation (for example, 
the signal-to-noise ratio or magnitude error).
 If your measurements do not require a standard deviation, it is best to ignore 
this option (because it will slow down the script).
 
 @item -d INT/STR
@@ -27595,8 +27596,8 @@ For example, to check that the profiles generated for 
obtaining the radial profi
 @node SAO DS9 region files from table, Viewing FITS file contents with DS9 or 
TOPCAT, Generate radial profile, Installed scripts
 @section SAO DS9 region files from table
 
-Once your desired catalog (containing the positions of some objects) is 
created (for example with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}) it 
often happens that you want to see your selected objects on an image for a 
feeling of the spatial properties of your objects.
-For example you want to see their positions relative to each other.
+Once your desired catalog (containing the positions of some objects) is 
created (for example, with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}) it 
often happens that you want to see your selected objects on an image for a 
feeling of the spatial properties of your objects.
+For example, you want to see their positions relative to each other.
 
 In this section we describe a simple installed script that is provided within 
Gnuastro for converting your given columns to an SAO DS9 region file to help in 
this process.
 SAO DS9@footnote{@url{http://ds9.si.edu}} is one of the most common FITS image 
visualization tools in astronomy and is free software.
@@ -27663,7 +27664,7 @@ The mode can be specified with the @option{--mode} 
option, described below.
 @itemx --namecol=STR
 The column containing the name (or label) of each region.
 The type of the column (numeric or a character-based string) is irrelevant: 
you can use both types of columns as a name or label for the region.
-This feature is useful when you need to recognize each region with a certain 
ID or property (for example magnitude or redshift).
+This feature is useful when you need to recognize each region with a certain 
ID or property (for example, magnitude or redshift).
 
 @item -m wcs|img
 @itemx --mode=wcs|img
@@ -27716,7 +27717,7 @@ ds9 image.fits -zscale -regions ds9.reg
 @end example
 
 You can customize all aspects of SAO DS9 with its command-line options, 
therefore the value of this option can be as long and complicated as you like.
-For example if you also want the image to fit into the window, this option 
will be: @command{--command="ds9 image.fits -zscale -zoom to fit"}.
+For example, if you also want the image to fit into the window, this option 
will be: @command{--command="ds9 image.fits -zscale -zoom to fit"}.
 You can see the SAO DS9 command-line descriptions by clicking on the ``Help'' 
menu and selecting ``Reference Manual''.
 In the opened window, click on ``Command Line Options''.
 @end table
@@ -27727,12 +27728,12 @@ In the opened window, click on ``Command Line 
Options''.
 @node Viewing FITS file contents with DS9 or TOPCAT, PSF construction and 
subtraction, SAO DS9 region files from table, Installed scripts
 @section Viewing FITS file contents with DS9 or TOPCAT
 
-@cindex Multiextension FITS
-@cindex Opening multiextension FITS
+@cindex Multi-Extension FITS
+@cindex Opening multi-extension FITS
 The FITS definition allows for multiple extensions (or HDUs) inside one FITS 
file.
 Each HDU can have a completely independent dataset inside of it.
 One HDU can be a table, another can be an image and another can be another 
independent image.
-For example each image HDU can be one CCD of a multi-CCD camera, or in 
processed images one can be the deep science image and the next can be its 
weight map, alternatively, one HDU can be an image, and another can be the 
catalog/table of objects within it.
+For example, each image HDU can be one CCD of a multi-CCD camera, or in 
processed images, one can be the deep science image and the next can be its 
weight map; alternatively, one HDU can be an image, and another can be the 
catalog/table of objects within it.
 
 The most common software for viewing FITS images is SAO DS9 (see @ref{SAO 
DS9}) and for plotting tables, TOPCAT is the most commonly used tool in 
astronomy (see @ref{TOPCAT}).
 After installing them (as described in the respective appendix linked in the 
previous sentence), you can open any number of FITS images or tables with DS9 
or TOPCAT with the commands below:
@@ -27743,10 +27744,10 @@ $ topcat table-a.fits table-b.fits
 @end example
 
 But usually the default mode is not enough.
-For example in DS9, the window can be too small (not covering the height of 
your monitor), you probably want to match and lock multiple images, you have a 
favorite color map that you prefer to use, or you may want to open a 
multi-extension FITS file as a cube.
+For example, in DS9, the window can be too small (not covering the height of 
your monitor), you probably want to match and lock multiple images, you have a 
favorite color map that you prefer to use, or you may want to open a 
multi-extension FITS file as a cube.
 
 Using the simple commands above, you need to manually do all these in the DS9 
window once it opens and this can take several tens of seconds (which is enough 
to distract you from what you wanted to inspect).
-For example if you have a multi-extension file containing 2D images, one way 
to load and switch between each 2D extension is to take the following steps in 
the SAO DS9 window: @clicksequence{``File''@click{}``Open Other''@click{}``Open 
Multi Ext Cube''} and then choose the Multi extension FITS file in your 
computer's file structure.
+For example, if you have a multi-extension file containing 2D images, one way 
to load and switch between each 2D extension is to take the following steps in 
the SAO DS9 window: @clicksequence{``File''@click{}``Open Other''@click{}``Open 
Multi Ext Cube''} and then choose the multi-extension FITS file in your 
computer's file structure.
 
 @cindex @option{-mecube} (DS9)
 The method above is a little tedious to do every time you want to view a 
multi-extension FITS file.
@@ -27759,7 +27760,7 @@ This allows you to flip through the extensions easily 
while keeping all the sett
 Just to avoid confusion, note that SAO DS9 does not follow the GNU style of 
separating long and short options as explained in @ref{Arguments and options}.
 In the GNU style, this `long' (multi-character) option should have been called 
like @option{--mecube}, but SAO DS9 follows its own conventions.
 
-For example try running @command{$ds9 -mecube foo.fits} to see the effect (for 
example on the output of @ref{NoiseChisel}).
+For example, try running @command{$ ds9 -mecube foo.fits} to see the effect 
(for example, on the output of @ref{NoiseChisel}).
 If the file has multiple extensions, a small window will also be opened along 
with the main DS9 window.
 This small window allows you to slide through the image extensions of 
@file{foo.fits}.
 If @file{foo.fits} only consists of one extension, then SAO DS9 will open as 
usual.
@@ -27933,19 +27934,19 @@ Predicting all the possible optimizations for all the 
possible usage scenarios w
 @end itemize
 
 Therefore, following the modularity principle of software engineering, after 
several years of working on this, we have broken the full job into the smallest 
number of independent steps as separate scripts.
-All scripts are independent of each other, meaning this that you are free to 
use all of them as you wish (for example only some of them, using another 
program for a certain step, using them for other purposes, or running 
independent parts in parallel).
+All scripts are independent of each other, meaning that you are free to 
use all of them as you wish (for example, only some of them, using another 
program for a certain step, using them for other purposes, or running 
independent parts in parallel).
 
 For constructing the PSF from your dataset, the first step is to obtain a 
catalog of stars within it (you cannot use galaxies to build the PSF!).
 But you cannot blindly use all the stars either!
-For example we do not want contamination from other bright, and nearby objects.
+For example, we do not want contamination from other bright and nearby 
objects.
 The first script below is therefore designed for selecting only good star 
candidates in your image.
-It will use different criteria, for example good parallax (where available, to 
avoid confusion with galaxies), not being near to bright stars, axis ratio, etc.
+It will use different criteria, for example, good parallax (where available, 
to avoid confusion with galaxies), not being near bright stars, axis ratio, 
etc.
 For more on this script, see @ref{Invoking astscript-psf-select-stars}.
 
 Once the catalog of stars is constructed, another script is in charge of 
making appropriate stamps of the stars.
 Each stamp is a cropped image of the star with the desired size, normalization 
of the flux, and mask of the contaminant objects.
 For more on this script, see @ref{Invoking astscript-psf-stamp}.
-After obtaining a set of star stamps, they can be stacked for obtaining the 
combined PSF from many stars (for example with @ref{Stacking operators}).
+After obtaining a set of star stamps, they can be stacked for obtaining the 
combined PSF from many stars (for example, with @ref{Stacking operators}).
 
 In the combined PSF, the masked background objects of each star's image will 
be covered and the signal-to-noise ratio will increase, giving a very nice view 
of the ``clean'' PSF.
 However, it is usually necessary to obtain different regions of the same PSF 
from different stars.
@@ -28177,7 +28178,7 @@ A certain width (specified by @option{--widthinpix} in 
pixels).
 Centered at the coordinate specified by the option @option{--center} (it can 
be in image/pixel or WCS coordinates, see @option{--mode}).
 If no center is specified, then it is assumed that the object of interest is 
already in the center of the image.
 @item
-If the given coordinate has sub-pixel elements (for example pixel coordinates 
1.234,4.567), the pixel grid of the output will be warped so your given 
coordinate falls in the center of the central pixel of the final output.
+If the given coordinate has sub-pixel elements (for example, pixel coordinates 
1.234,4.567), the pixel grid of the output will be warped so your given 
coordinate falls in the center of the central pixel of the final output.
 This is very important for building the central parts of the PSF, but not too 
effective for the middle or outer parts (to speed up the program in such cases, 
you can disable it with the @option{--nocentering} option).
 @item
 Normalized by the value computed within the ring around the center (at a 
radial distance between the two radii specified by the option 
@option{--normradii}).
@@ -28219,7 +28220,7 @@ This parameter is used in @ref{Crop} to center and crop 
the image.
 The positions along each dimension must be separated by a comma (@key{,}).
 The units of the coordinates are read based on the value to the 
@option{--mode} option, see the examples above.
 
-The given coordinate for the central value can have sub-pixel elements (for 
example it falls on coordinate 123.4,567.8 of the input image pixel grid).
+The given coordinate for the central value can have sub-pixel elements (for 
example, it falls on coordinate 123.4,567.8 of the input image pixel grid).
 In such cases, after cropping, this script will use Gnuastro's @ref{Warp} to 
shift (or translate) the pixel grid by @mymath{-0.4} pixels along the 
horizontal and @mymath{1-0.8=0.2} pixels along the vertical.
 Finally the newly added pixels (due to the warping) will be trimmed to have 
your desired coordinate exactly in the center of the central pixel of the 
output.
 This is very important (critical!) when you are constructing the central part 
of the PSF.
@@ -28682,7 +28683,7 @@ For the details of this feature from GNU Make's own 
manual, see its @url{https:/
 Through this feature, Gnuastro provides additional Make functions that are 
useful in the context of data analysis.
 
 To use this feature, Gnuastro has to be built in shared library mode.
-Gnuastro's Make extensions will not work if you build Gnuastro without shared 
libraries (for example when you configure Gnuastro with 
@option{--disable-shared} or @option{--debug}).
+Gnuastro's Make extensions will not work if you build Gnuastro without shared 
libraries (for example, when you configure Gnuastro with 
@option{--disable-shared} or @option{--debug}).
 
 @menu
 * Loading the Gnuastro Make functions::  How to find and load Gnuastro's Make 
library.
@@ -28798,7 +28799,7 @@ Will select only the FITS files (from a list of many in 
@code{FITS_FILES}, non-F
 Only the HDU given in the @code{HDU} argument will be checked.
 According to the FITS standard, the keyword name is not case sensitive, but 
the keyword value is.
 
-For example if you have many FITS files in the @file{/datasets/images} 
directory, the minimal Makefile below will put those with a value of @code{BAR} 
or @code{BAZ} for the @code{FOO} keyword in HDU number @code{1} in the 
@code{selected} Make variable.
+For example, if you have many FITS files in the @file{/datasets/images} 
directory, the minimal Makefile below will put those with a value of @code{BAR} 
or @code{BAZ} for the @code{FOO} keyword in HDU number @code{1} in the 
@code{selected} Make variable.
 Notice how there is no comma between @code{BAR} and @code{BAZ}: you can 
specify any series of values.
 
 @verbatim
@@ -28823,7 +28824,7 @@ keyvalues := $(ast-fits-unique-keyvalues FOO, 1, 
$(files))
 
 This is useful when you do not know the full range of values a-priori.
 For example, let's assume that you are looking at a night's observations with 
a telescope and the purpose of the FITS image is written in the @code{OBJECT} 
keyword of the image (which we can assume is in HDU number 1).
-This keyword can have the name of the various science targets (for example 
@code{NGC123} and @code{M31}) and calibration targets (for example @code{BIAS} 
and @code{FLAT}).
+This keyword can have the name of the various science targets (for example, 
@code{NGC123} and @code{M31}) and calibration targets (for example, @code{BIAS} 
and @code{FLAT}).
 The list of science targets is different from project to project, such that in 
one night, you can observe multiple projects.
 But the calibration frames have unique names.
 Knowing the calibration keyword values, you can extract the science keyword 
values of the night with the command below (feeding the output of this function 
to Make's @code{filter-out} function).
@@ -28908,7 +28909,7 @@ The more modular the source code of a program or 
library, the easier maintaining
 @subsection Headers
 
 @cindex Pre-Processor
-C source code is read from top to bottom in the source file, therefore program 
components (for example variables, data structures and functions) should all be 
@emph{defined} or @emph{declared} closer to the top of the source file: before 
they are used.
+C source code is read from top to bottom in the source file, therefore program 
components (for example, variables, data structures and functions) should all 
be @emph{defined} or @emph{declared} closer to the top of the source file: 
before they are used.
 @emph{Defining} something in C or C++ is jargon for providing its full details.
 @emph{Declaring} it, on the other hand, is jargon for only providing the 
minimum information needed for the compiler to pass it temporarily and fill in 
the detailed definition later.
 
@@ -28942,7 +28943,7 @@ The declaration (address) is enough.
 Therefore by @emph{declaring} the functions once at the start of the source 
files, we do not have to worry about @emph{defining} them after they are used.
 
 Even for a simple real-world operation (not a simple summation like above!), 
you will soon need many functions (for example, some for reading/preparing the 
inputs, some for the processing, and some for preparing the output).
-Although it is technically possible, managing all the necessary functions in 
one file is not easy and is contrary to the modularity principle (see 
@ref{Review of library fundamentals}), for example the functions for preparing 
the input can be usable in your other projects with a different processing.
+Although it is technically possible, managing all the necessary functions in 
one file is not easy and is contrary to the modularity principle (see 
@ref{Review of library fundamentals}); for example, the functions for preparing 
the input can be usable in your other projects with a different processing.
 Therefore, as we will see later (in @ref{Linking}), the functions do not 
necessarily need to be defined in the source file where they are used.
 As long as their definitions are ultimately linked to the final executable, 
everything will be fine.
 For now, it is just important to remember that the functions that are called 
within one source file must be declared within the source file (declarations 
are mandatory), but not necessarily defined there.
@@ -28967,7 +28968,7 @@ Pre-processor macros (or macros for short) are replaced 
with their defined value
 Conventionally they are written only in capital letters to be easily 
recognized.
 It is just important to understand that the compiler does not see the macros, 
it sees their fixed values.
 So when a header specifies macros you can do your programming without worrying 
about the actual values.
-The standard C types (for example @code{int}, or @code{float}) are very 
low-level and basic.
+The standard C types (for example, @code{int}, or @code{float}) are very 
low-level and basic.
 We can collect multiple C types into a @emph{structure} for a higher-level way 
to keep and pass-along data.
 See @ref{Generic data container} for some examples of macros and data 
structures.
 
@@ -29014,11 +29015,11 @@ See @ref{Known issues} for an example of how to set 
this variable at configure t
 
 @cindex GNU build system
 As described in @ref{Installation directory}, you can select the top 
installation directory of a software using the GNU build system, when you 
@command{./configure} it.
-All the separate components will be put in their separate sub-directory under 
that, for example the programs, compiled libraries and library headers will go 
into @file{$prefix/bin} (replace @file{$prefix} with a directory), 
@file{$prefix/lib}, and @file{$prefix/include} respectively.
+All the separate components will be put in their separate sub-directory under 
that; for example, the programs, compiled libraries and library headers will go 
into @file{$prefix/bin} (replace @file{$prefix} with a directory), 
@file{$prefix/lib}, and @file{$prefix/include} respectively.
 For enhanced modularity, libraries that contain diverse collections of 
functions (like GSL, WCSLIB, and Gnuastro), put their header files in a 
sub-directory unique to themselves.
-For example all Gnuastro's header files are installed in 
@file{$prefix/include/gnuastro}.
-In your source code, you need to keep the library's sub-directory when 
including the headers from such libraries, for example @code{#include 
<gnuastro/fits.h>}@footnote{the top @file{$prefix/include} directory is usually 
known to the compiler}.
-Not all libraries need to follow this convention, for example CFITSIO only has 
one header (@file{fitsio.h}) which is directly installed in 
@file{$prefix/include}.
+For example, all Gnuastro's header files are installed in 
@file{$prefix/include/gnuastro}.
+In your source code, you need to keep the library's sub-directory when 
including the headers from such libraries, for example, @code{#include 
<gnuastro/fits.h>}@footnote{the top @file{$prefix/include} directory is usually 
known to the compiler}.
+Not all libraries need to follow this convention; for example, CFITSIO only 
has one header (@file{fitsio.h}) which is directly installed in 
@file{$prefix/include}.
 
 
 
@@ -29037,14 +29038,14 @@ Afterwards, the object files are @emph{linked} 
together to create an executable
 @cindex GNU Binutils
 The object files contain the full definition of the functions in the 
respective @file{.c} file along with a list of any other function (or generally 
``symbol'') that is referenced there.
 To get a list of those functions you can use the @command{nm} program which is 
part of GNU Binutils.
-For example from the top Gnuastro directory, run:
+For example, from the top Gnuastro directory, run:
 
 @example
 $ nm bin/arithmetic/arithmetic.o
 @end example
 
 @noindent
-This will print a list of all the functions (more generally, `symbols') that 
were called within @file{bin/arithmetic/arithmetic.c} along with some further 
information (for example a @code{T} in the second column shows that this 
function is actually defined here, @code{U} says that it is undefined here).
+This will print a list of all the functions (more generally, `symbols') that 
were called within @file{bin/arithmetic/arithmetic.c} along with some further 
information (for example, a @code{T} in the second column shows that this 
function is actually defined here, @code{U} says that it is undefined here).
 Try opening the @file{.c} file to check some of these functions for yourself. 
Run @command{info nm} for more information.
 
 @cindex Linking
@@ -29136,7 +29137,7 @@ $ ldd /usr/local/bin/astarithmetic
 @end example
 
 Library file names (in their installation directory) start with a @file{lib} 
and their ending (suffix) shows if they are static (@file{.a}) or dynamic 
(@file{.so}), as described below.
-The name of the library is in the middle of these two, for example 
@file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), 
and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's 
shared library, the numbers may be different).
+The name of the library is in the middle of these two, for example, 
@file{libgsl.a} or @file{libgnuastro.a} (GSL and Gnuastro's static libraries), 
and @file{libgsl.so.23.0.0} or @file{libgnuastro.so.4.0.0} (GSL and Gnuastro's 
shared library, the numbers may be different).
 
 @itemize
 @item
@@ -29174,7 +29175,7 @@ You can see these options and their usage in practice 
while building Gnuastro (a
 @table @option
 @item -L DIR
 Will tell the linker to look into @file{DIR} for the libraries.
-For example @file{-L/usr/local/lib}, or @file{-L/home/yourname/.local/lib}.
+For example, @file{-L/usr/local/lib} or @file{-L/home/yourname/.local/lib}.
 You can make multiple calls to this option, so the linker looks into several 
directories at compilation time.
 Note that the space between @key{L} and the directory is optional and commonly 
ignored (written as @option{-LDIR}).
 
@@ -29354,8 +29355,8 @@ Once your program grows and you break it up into 
multiple files (which are much
 @cindex GCC: GNU Compiler Collection
 @cindex GNU Compiler Collection (GCC)
 C compiler to use for the compilation; if not given, environment variables 
will be used as described in the next paragraph.
-If the compiler is in your systems's search path, you can simply give its 
name, for example @option{--cc=gcc}.
-If it is not in your system's search path, you can give its full path, for 
example @option{--cc=/path/to/your/custom/cc}.
+If the compiler is in your system's search path, you can simply give its 
name, for example, @option{--cc=gcc}.
+If it is not in your system's search path, you can give its full path, for 
example, @option{--cc=/path/to/your/custom/cc}.
 
 If this option has no value after parsing the command-line and all 
configuration files (see @ref{Configuration file precedence}), then 
BuildProgram will look into the following environment variables in the given 
order: @code{CC} and @code{GCC}.
 If they are also not defined, BuildProgram will ultimately default to the 
@command{gcc} command which is present in many systems (sometimes as a link to 
other compilers).
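For instance, with a hypothetical @file{myprogram.c}, the two ways of specifying the compiler described above would look like this:

@example
$ astbuildprog --cc=gcc myprogram.c
$ astbuildprog --cc=/path/to/your/custom/cc myprogram.c
@end example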
@@ -29612,7 +29613,7 @@ see @ref{Implementation of pthread_barrier} for more.
 @deffnx Macro GAL_CONFIG_SIZEOF_SIZE_T
 The size of (number of bytes in) the system's @code{long} and @code{size_t} 
types.
 Their values are commonly either 4 or 8 for 32-bit and 64-bit systems.
-You can also get this value with the expression `@code{sizeof size_t}' for 
example without having to include this header.
+You can also get this value without having to include this header, for 
example, with the expression `@code{sizeof size_t}'.
 @end deffn
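As a minimal sketch of the two ways of getting this value (assuming the installed header name @file{gnuastro/config.h}; it can be compiled, for example, with @ref{BuildProgram}):

@example
#include <stdio.h>
#include <gnuastro/config.h>

int
main(void)
@{
  /* Both lines report the width of 'size_t' in bytes: the first
     was fixed when Gnuastro was configured, the second is found
     at compilation time. */
  printf("GAL_CONFIG_SIZEOF_SIZE_T: %d\n", GAL_CONFIG_SIZEOF_SIZE_T);
  printf("sizeof size_t:            %zu\n", sizeof(size_t));
  return 0;
@}
@end example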
 
 @deffn Macro GAL_CONFIG_HAVE_LIBGIT2
@@ -29889,8 +29890,8 @@ are fixed width types, these types are portable to all 
systems and are
 defined in the standard C header @file{stdint.h}. You do not need to include
this header; it is included by any Gnuastro header that deals with the
 different types. However, the most commonly used types in a C (or C++)
-program (for example @code{int} or @code{long}) are not defined by their
-exact width (storage size), but by their minimum storage. So for example on
+program (for example, @code{int} or @code{long}) are not defined by their
+exact width (storage size), but by their minimum storage. So, for example, on
 some systems, @code{int} may be 2 bytes (16-bits, the minimum required by
 the standard) and on others it may be 4 bytes (32-bits, common in modern
 systems).
@@ -30021,7 +30022,7 @@ For strings, this function will return the size of 
@code{char *}.
 
 @deftypefun {char *} gal_type_name (uint8_t @code{type}, int @code{long_name})
 Return a string literal that contains the name of @code{type}.
-It can return both short and long formats of the type names (for example 
@code{f32} and @code{float32}).
+It can return both short and long formats of the type names (for example, 
@code{f32} and @code{float32}).
 If @code{long_name} is non-zero, the long format will be returned; otherwise, 
the short format will be returned.
 The output string is statically allocated, so it should not be freed.
 This function is the inverse of the @code{gal_type_from_name} function.
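For instance, a small sketch (assuming the installed header name @file{gnuastro/type.h}):

@example
#include <stdio.h>
#include <gnuastro/type.h>

int
main(void)
@{
  /* Short, then long, format of the 32-bit floating point type. */
  printf("%s\n", gal_type_name(GAL_TYPE_FLOAT32, 0)); /* f32     */
  printf("%s\n", gal_type_name(GAL_TYPE_FLOAT32, 1)); /* float32 */
  return 0;
@}
@end example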
@@ -30132,7 +30133,7 @@ Note that when we are dealing with a string type, 
@code{*out} should be interpre
 In other words, @code{out} should be @code{char ***}.
 
 This function can be used to fill in arrays of numbers from strings (in an 
already allocated data structure), or add nodes to a linked list (if the type 
is a list type).
-For an array, you have to pass the pointer to the @code{i}th element where you 
want the value to be stored, for example @code{&(array[i])}.
+For an array, you have to pass the pointer to the @code{i}th element where you 
want the value to be stored, for example, @code{&(array[i])}.
 
 If the string was successfully parsed to the requested type, this function 
will return @code{0} (zero); otherwise, it will return @code{1} (one).
 This output format will help you check the status of the conversion in a code 
like the example below where we will try reading a string as a single precision 
floating point number.
@@ -30156,17 +30157,17 @@ Read @code{string} into smallest type that can host 
the number, the allocated sp
 If @code{string} could not be read as a number, this function will return 
@code{NULL}.
 
 This function first calls the C library's @code{strtod} function to read 
@code{string} as a double-precision floating point number.
-When successful, it will check the value to put it in the smallest numerical 
data type that can handle it: for example @code{120} and @code{50000} will be 
read as a signed 8-bit integer and unsigned 16-bit integer types.
+When successful, it will check the value to put it in the smallest numerical 
data type that can handle it: for example, @code{120} and @code{50000} will be 
read as signed 8-bit and unsigned 16-bit integers, respectively.
 When reading as an integer, the C library's @code{strtol} function is used (in 
base-10) to parse the string again.
-This re-parsing as an integer is necessary because integers with many digits 
(for example the Unix epoch seconds) will not be accurately stored as a 
floating point and we cannot use the result of @code{strtod}.
+This re-parsing as an integer is necessary because integers with many digits 
(for example, the Unix epoch seconds) will not be accurately stored as a 
floating point and we cannot use the result of @code{strtod}.
 
 When @code{string} is successfully parsed as a number @emph{and} there is a 
@code{.} in @code{string}, it will force the number into floating point types.
-For example @code{"5"} is read as an integer, while @code{"5."} or 
@code{"5.0"}, or @code{"5.00"} will be read as a floating point 
(single-precision).
+For example, @code{"5"} is read as an integer, while @code{"5."}, 
@code{"5.0"}, or @code{"5.00"} will be read as a floating point 
(single-precision).
 
 For floating point types, this function will count the number of significant 
digits and determine if the given string is single or double precision as 
described in @ref{Numeric data types}.
 
 For integers, negative numbers will always be placed in signed types (as 
expected).
-If a positive integer falls below the maximum of a signed type of a certain 
width, it will be signed (for example @code{10} and @code{150} will be defined 
as a signed and unsigned 8-bit integer respectively).
+If a positive integer falls below the maximum of a signed type of a certain 
width, it will be signed (for example, @code{10} and @code{150} will be defined 
as a signed and an unsigned 8-bit integer, respectively).
 In other words, even though @code{10} can be unsigned, it will be read as a 
signed 8-bit integer.
 This is done to respect the C implicit type conversion in binary operators, 
where signed integers will be interpreted as unsigned when the other operand 
is an unsigned integer of the same width.
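As a small sketch of the rules above (assuming this is the @code{gal_type_string_to_number} function of @file{type.h}, returning the allocated number and writing its type into the second argument):

@example
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <gnuastro/type.h>

int
main(void)
@{
  uint8_t type;
  void *num=gal_type_string_to_number("5.0", &type);

  /* Because of the '.', this should be single-precision floating
     point; '5' alone would be parsed as a signed 8-bit integer. */
  if(num) @{ printf("%s\n", gal_type_name(type, 1)); free(num); @}
  return 0;
@}
@end example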
 
@@ -30234,7 +30235,7 @@ If @code{clear!=0}, then the allocated space is set to 
zero (cleared).
 This is effectively just a wrapper around C's @code{malloc} or @code{calloc} 
functions, but it takes Gnuastro's integer type codes and will also abort with 
a clear error if the allocation was not successful.
 The number of allocated bytes is the value given to @code{size} that is 
multiplied by the returned value of @code{gal_type_sizeof} for the given type.
 So if you want to allocate space for an array of strings you should pass the 
type @code{GAL_TYPE_STRING}.
-Otherwise, if you just want space for one string (for example 6 bytes for 
@code{hello}, including the string-termination character), you should set the 
type @code{GAL_TYPE_UINT8}.
+Otherwise, if you just want space for one string (for example, 6 bytes for 
@code{hello}, including the string-termination character), you should set the 
type @code{GAL_TYPE_UINT8}.
 
 @cindex C99
 When space cannot be allocated, this function will abort the program with a 
message containing the reason for the failure.
@@ -30279,9 +30280,9 @@ If @code{quietmmap} is non-zero, then a warning will be 
printed for the user to
 
 @node Library blank values, Library data container, Pointers, Gnuastro library
 @subsection Library blank values (@file{blank.h})
-When the position of an element in a dataset is important (for example a
+When the position of an element in a dataset is important (for example, a
 pixel in an image), a place-holder is necessary for the element if we do not
-have a value to fill it with (for example the CCD cannot read those
+have a value to fill it with (for example, the CCD cannot read those
 pixels). We cannot simply shift all the other pixels to fill in the one we
 have no value for. In other cases, it often occurs that the field of sky
 that you are studying is not a clean rectangle to nicely fit into the
@@ -30387,7 +30388,7 @@ The functions below can be used to work with blank 
pixels.
 
 @deftypefun void gal_blank_write (void @code{*pointer}, uint8_t @code{type})
 Write the blank value for the given @code{type} into the space that 
@code{pointer} points to.
-This can be used when the space is already allocated (for example one element 
in an array or a statically allocated variable).
+This can be used when the space is already allocated (for example, one element 
in an array or a statically allocated variable).
 @end deftypefun
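For instance, a minimal sketch that writes the 32-bit floating point blank value (NaN) into a statically allocated variable:

@example
#include <stdio.h>
#include <gnuastro/type.h>
#include <gnuastro/blank.h>

int
main(void)
@{
  float f;                                /* Already allocated.    */
  gal_blank_write(&f, GAL_TYPE_FLOAT32);  /* 'f' is now blank/NaN. */
  printf("float32 blank: %f\n", f);
  return 0;
@}
@end example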
 
 @deftypefun {void *} gal_blank_alloc_write (uint8_t @code{type})
@@ -30436,7 +30437,7 @@ the dataset in future calls. When 
@code{updateflags==0}, this function has
 no side-effects on the dataset: it will not toggle the flags.
 
 If you want to re-check a dataset with the blank-value-check flag already
-set (for example if you have made changes to it), then explicitly set the
+set (for example, if you have made changes to it), then explicitly set the
 @code{GAL_DATA_FLAG_BLANK_CH} bit to zero before calling this
 function. When there are no other flags, you can just set the flags to zero
 (@code{input->flag=0}), otherwise you can use this expression:
@@ -30506,14 +30507,14 @@ In any case (no matter which columns are checked for 
blanks), the selected rows
 @node Library data container, Dimensions, Library blank values, Gnuastro 
library
 @subsection Data container (@file{data.h})
 
-Astronomical datasets have various dimensions, for example 1D spectra or
+Astronomical datasets have various dimensions, for example, 1D spectra or
 table columns, 2D images, or 3D Integral field data cubes. Datasets can
 also have various numeric data types, depending on the operation/purpose,
-for example processed images are commonly stored in floating point format,
+for example, processed images are commonly stored in floating point format,
 but their mask images are integers (allowing bit-wise flags to identify
 certain classes of pixels to keep or mask, see @ref{Numeric data
 types}). Certain other information about a dataset is also commonly
-necessary, for example the units of the dataset, the name of the dataset
+necessary, for example, the units of the dataset, the name of the dataset
 and some comments. To deal with any generic dataset, Gnuastro defines the
 @code{gal_data_t} as input or output.
 
@@ -30686,7 +30687,7 @@ But upon initialization, all bits also get a value of 
zero.
 Therefore, a checker needs this flag to see if the value in 
@code{GAL_DATA_FLAG_HASBLANK} is reliable (dataset has actually been parsed for 
a blank value) or not.
 
 Also, if it is necessary to re-check the presence of flags, you just have
-to set this flag to zero and call @code{gal_blank_present} for example to
+to set this flag to zero and call @code{gal_blank_present}, for example, to
 parse the dataset and check for blank values. Note that for improved
 efficiency, when this flag is set, @code{gal_blank_present} will not
 actually parse the dataset, it will just use @code{GAL_DATA_FLAG_HASBLANK}.
@@ -30739,7 +30740,7 @@ be the column name. If it is set to @code{NULL} (by 
default), it will be
 ignored.
 
 @item char *unit
-The units of the dataset (for example @code{BUNIT} in the standard FITS
+The units of the dataset (for example, @code{BUNIT} in the standard FITS
 keywords) that will be read from or written to files/tables along with the
 dataset. If it is set to @code{NULL} (by default), it will be ignored.
 
@@ -30764,7 +30765,7 @@ output}. Based on C's @code{printf} standards.
 
 @item gal_data_t *next
 Through this pointer, you can link a @code{gal_data_t} with other
-related datasets, for example the different columns in a dataset each have
+related datasets, for example, the different columns in a dataset each have
 one @code{gal_data_t} associate with them and they are linked to each other
 using this element. There are several functions described below to
 facilitate using @code{gal_data_t} as a linked list. See @ref{Linked lists}
@@ -30884,7 +30885,7 @@ container}.
 
 @deftypefun {gal_data_t **} gal_data_array_ptr_calloc (size_t @code{size})
 Allocate an array of pointers to Gnuastro's generic data structure and 
initialize all pointers to @code{NULL}.
-This is useful when you want to allocate individual datasets later (for 
example with @code{gal_data_alloc}).
+This is useful when you want to allocate individual datasets later (for 
example, with @code{gal_data_alloc}).
 @end deftypefun
 
 @deftypefun void gal_data_array_ptr_free (gal_data_t @code{**dataptr}, size_t 
@code{size}, int @code{free_array});
@@ -30958,7 +30959,7 @@ there must also be one coordinate for each dimension.
 
 The functions and macros in this section provide you with the tools to
 convert an index into a coordinate and vice-versa along with several other
-issues for example issues with the neighbors of an element in a
+issues, for example, issues with the neighbors of an element in a
 multi-dimensional context.
 
 @deftypefun size_t gal_dimension_total_size (size_t @code{ndim}, size_t 
@code{*dsize})
@@ -30974,7 +30975,7 @@ dimensions of the two datasets are different.
 
 @deftypefun {size_t *} gal_dimension_increment (size_t @code{ndim}, size_t 
@code{*dsize})
 Return an allocated array that has the number of elements necessary to
-increment an index along every dimension. For example along the fastest
+increment an index along every dimension. For example, along the fastest
 dimension (last element in the @code{dsize} and returned arrays), the value
 is @code{1} (one).
 @end deftypefun
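As a concrete sketch (assuming the installed header name @file{gnuastro/dimension.h}): for a 2D dataset of 100 rows and 200 columns, a unit step along the slowest dimension needs 200 elements, while the fastest needs one.

@example
#include <stdlib.h>
#include <gnuastro/dimension.h>

int
main(void)
@{
  size_t dsize[2]=@{100, 200@};

  /* 'dinc' is allocated here and will contain @{200, 1@}. */
  size_t *dinc=gal_dimension_increment(2, dsize);

  free(dinc);
  return 0;
@}
@end example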
@@ -31136,7 +31137,7 @@ Most distant neighbors to consider.
 Depending on the number of dimensions, different neighbors may be defined for 
each element.
 This function-like macro distinguishes between these different neighbors with 
this argument.
 It has a value between @code{1} (one) and @code{ndim}.
-For example in a 2D dataset, 4-connected neighbors have a connectivity of 
@code{1} and 8-connected neighbors have a connectivity of @code{2}.
+For example, in a 2D dataset, 4-connected neighbors have a connectivity of 
@code{1} and 8-connected neighbors have a connectivity of @code{2}.
 Note that this is inclusive, so in this example, a connectivity of @code{2} 
will also include connectivity @code{1} neighbors.
 @item dinc
 An array keeping the length necessary to increment along each dimension.
@@ -31167,15 +31168,15 @@ This macro works fully within its own @code{@{@}} 
block and except for the @code
 An array is a contiguous region of memory that is very efficient and easy
 to use for recording and later accessing any random element as fast as any
 other. This makes arrays the primary data container when you have many
-elements (for example an image which has millions of pixels). One major
+elements (for example, an image which has millions of pixels). One major
 problem with an array is that the number of elements that go into it must
 be known in advance and adding or removing an element will require a re-set
-of all the other elements. For example if you want to remove the 3rd
+of all the other elements. For example, if you want to remove the 3rd
 element in a 1000 element array, all 997 subsequent elements have to be pulled
 back by one position; the reverse will happen if you need to add an
 element.
 
-In many contexts such situations never come up, for example you do not want
+In many contexts such situations never come up, for example, you do not want
 to shift all the pixels in an image by one or two pixels from some random
 position in the image: their positions have scientific value. But in other
 contexts you will find yourself frequently adding/removing an a-priori
@@ -31315,7 +31316,7 @@ Print the strings within each node of @code{*list} on 
the standard outputin the
 Each string is printed on one line.
 This function is mainly good for checking/debugging your program.
 For program outputs, it is best to make your own implementation with a better, 
more user-friendly, format.
-For example the following code snippet.
+For example, see the following code snippet.
 
 @example
 size_t i;
@@ -31408,7 +31409,7 @@ Print the integers within each node of @code{*list} on 
the standard output
 in the same order that they are stored. Each integer is printed on one
 line. This function is mainly good for checking/debugging your program. For
 program outputs, it is best to make your own implementation with a better,
-more user-friendly format. For example the following code snippet. You can
+more user-friendly format. For example, see the following code snippet. You can
 also modify it to print all values in one line, etc, depending on the
 context of your program.
 
@@ -31732,7 +31733,7 @@ Free every node in @code{list}.
 @subsubsection List of @code{void *}
 
 In C, @code{void *} is the most generic pointer. Usually pointers are
-associated with the type of content they point to. For example @code{int *}
+associated with the type of content they point to. For example, @code{int *}
 means a pointer to an integer. This ancillary information about the
 contents of the memory location is very useful for the compiler, catching
 bad errors and also documentation (it helps the reader see what the address
@@ -31802,7 +31803,7 @@ Free every node in @code{list}.
 
 Positions/sizes in a dataset are conventionally in the @code{size_t} type
 (see @ref{List of size_t}) and it sometimes occurs that you want to parse
-and read the values in a specific order. For example you want to start from
+and read the values in a specific order. For example, you want to start from
 one pixel and add pixels to the list based on their distance to that
 pixel, so that every time you pop an element from the list, you know it is
 the nearest that has not yet been studied. The @code{gal_list_osizet_t}
@@ -31931,7 +31932,7 @@ Free the doubly linked, ordered @code{sizet_t} list.
 Gnuastro's generic data container has a @code{next} element which enables
 it to be used as a singly-linked list (see @ref{Generic data
 container}). The ability to connect the different data containers offers
-great advantages. For example each column in a table in an independent
+great advantages. For example, each column in a table is an independent
 dataset: with its own name, units, numeric data type (see @ref{Numeric data
 types}). Another application is in tessellating an input dataset into
 separate tiles or only studying particular regions, or tiles, of a larger
@@ -32010,7 +32011,7 @@ writing them after the processing into an output file 
are some of the most
 common operations. The functions in this section are designed for such
 operations with the known file types. The functions here are thus just
 wrappers around the lower-level file type functions of this
-library, for example @ref{FITS files} or @ref{TIFF files}. If the file type
+library, for example, @ref{FITS files} or @ref{TIFF files}. If the file type
 of the input/output file is already known, you can use the functions in
 those sections respectively.
 
@@ -32134,15 +32135,15 @@ displayed in multiple formats:
 @table @asis
 @item Unsigned integer
 For unsigned integers, it is possible to choose from @code{_UDECIMAL}
-(unsigned decimal), @code{_OCTAL} (octal notation, for example @code{125}
+(unsigned decimal), @code{_OCTAL} (octal notation, for example, @code{125}
 in decimal will be displayed as @code{175}), and @code{_HEX} (hexadecimal
-notation, for example @code{125} in decimal will be displayed as
+notation, for example, @code{125} in decimal will be displayed as
 @code{7D}).
 
 @item Floating point
 For floating point, it is possible to display the number in @code{_FLOAT}
-(floating point, for example @code{1500.345}), @code{_EXP} (exponential,
-for example @code{1.500345e+03}), or @code{_GENERAL} which is the best of
+(floating point, for example, @code{1500.345}), @code{_EXP} (exponential,
+for example, @code{1.500345e+03}), or @code{_GENERAL} which is the best of
 the two for the given number.
 @end table
 @end deffn
@@ -32220,7 +32221,7 @@ For more on this macro, see @ref{Configuration 
information}).
 Multi-threaded table reading is not currently applicable to other table 
formats (only for FITS tables).
 
 The output is an individually allocated list of datasets (see @ref{List of 
gal_data_t}) with the same order of the @code{cols} list.
-Note that one column node in the @code{cols} list might give multiple columns 
(for example from regular expressions), in this case, the order of output 
columns that correspond to that one input, are in order of the table (which 
column was read first).
+Note that one column node in the @code{cols} list might give multiple columns 
(for example, from regular expressions); in this case, the order of the output 
columns that correspond to that one input follows the order of the table (which 
column was read first).
 So the first requested column is the first popped data structure and so on.
 
 If @code{colmatch!=NULL}, it is assumed to be an array that has at least the 
same number of elements as nodes in the @code{cols} list.
@@ -32402,7 +32403,7 @@ for image type identifiers and @code{datatype} for its 
internal identifiers
 be used to convert between CFITSIO and Gnuastro's type identifiers.
 
 One important issue to consider is that CFITSIO's types are not fixed width
-(for example @code{long} may be 32-bits or 64-bits on different
+(for example, @code{long} may be 32-bits or 64-bits on different
 systems). However, Gnuastro's types are defined by their width. These
 functions will use information on the host system to do the proper
conversion. To have portable (usable on different systems) code, it is thus
@@ -32618,8 +32619,8 @@ If @code{fitsdate} contains sub-second accuracy and 
@code{subsecstr!=NULL}, then
 So to avoid leaking memory, if a sub-second string is requested, it must be 
freed after calling this function.
 When a sub-second string does not exist (and it is requested), then a value of 
@code{NULL} and NaN will be written in @code{*subsecstr} and @code{*subsec} 
respectively.
 
-This is a very useful function for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
-The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
+This is a very useful function for operations on the FITS date values, for 
example, sorting FITS files by their dates, or finding the time difference 
between two FITS files.
+The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example, the number of days in different 
months, or leap years, etc).
 @end deftypefun
 
 @deftypefun void gal_fits_key_read_from_ptr (fitsfile @code{*fptr}, gal_data_t 
@code{*keysll}, int @code{readcomment}, int @code{readunit})
@@ -32993,7 +32994,7 @@ in @code{tableformat} (macros defined in @ref{Table 
input output}).
 
 This function is just for column information. Therefore it only stores
 meta-data like column name, units and comments. No actual data (contents of
-the columns for example the @code{array} or @code{dsize} elements) will be
+the columns, for example, the @code{array} or @code{dsize} elements) will be
 allocated by this function. This is a low-level function particular to
 reading tables in FITS format. To be generic, it is recommended to use
 @code{gal_table_info} which will allow getting information from a variety
@@ -33110,7 +33111,7 @@ Note that @code{filename} and @code{lines} are mutually 
exclusive and one of the
 
 This function is just for column information.
 Therefore it only stores meta-data like column name, units and comments.
-No actual data (contents of the columns for example the @code{array} or 
@code{dsize} elements) will be allocated by this function.
+No actual data (contents of the columns, for example, the @code{array} or 
@code{dsize} elements) will be allocated by this function.
 This is a low-level function particular to reading tables in plain text format.
 To be generic, it is recommended to use @code{gal_table_info} which will allow 
getting information from a variety of table formats based on the filename (see 
@ref{Table input output}).
 @end deftypefun
@@ -33141,7 +33142,7 @@ Note that @code{filename} and @code{lines} are mutually 
exclusive and one of the
 @deftypefun {gal_list_str_t *} gal_txt_stdin_read (long 
@code{timeout_microsec})
 @cindex Standard input
 Read the complete standard input and return a list of strings with each line 
(including the new-line character) as one node of that list.
-If the standard input is already filled (for example connected to another 
program's output with a pipe), then this function will parse the whole stream.
+If the standard input is already filled (for example, connected to another 
program's output with a pipe), then this function will parse the whole stream.
 
 If standard input is not pre-configured and the @emph{first line} is 
typed/written in the terminal before @code{timeout_microsec} micro-seconds, it 
will continue parsing until it reaches an end-of-file character (@key{CTRL-D} 
after a new-line on the keyboard) with no time limit.
 If nothing is entered before @code{timeout_microsec} micro-seconds, it will 
return @code{NULL}.
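As a minimal sketch (assuming the @code{v} and @code{next} elements of @code{gal_list_str_t}, see @ref{Linked lists}, and the installed header names below):

@example
#include <stdio.h>
#include <gnuastro/txt.h>
#include <gnuastro/list.h>

int
main(void)
@{
  gal_list_str_t *line, *lines;

  /* Wait up to 0.1 seconds for the first line of standard input. */
  lines=gal_txt_stdin_read(100000);

  /* Print each line (nodes already end in the new-line character). */
  for(line=lines; line!=NULL; line=line->next)
    printf("%s", line->v);

  gal_list_str_free(lines, 1);   /* Also free each node's string. */
  return 0;
@}
@end example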
@@ -33279,7 +33280,7 @@ For a more complete introduction, please see 
@ref{Recognized file formats}.
 To provide high quality graphics, the PostScript language is a vectorized 
format; therefore, pixels (elements of a ``rasterized'' format) are not defined 
in its context.
 
 To display rasterized images, PostScript does allow arrays of pixels.
-However, since the over-all EPS file may contain many vectorized elements (for 
example borders, text, or other lines over the text) and interpreting them is 
not trivial or necessary within Gnuastro's scope, Gnuastro only provides some 
functions to write a dataset (in the @code{gal_data_t} format, see @ref{Generic 
data container}) into EPS.
+However, since the over-all EPS file may contain many vectorized elements (for 
example, borders, text, or other lines over the text) and interpreting them is 
not trivial or necessary within Gnuastro's scope, Gnuastro only provides some 
functions to write a dataset (in the @code{gal_data_t} format, see @ref{Generic 
data container}) into EPS.
 
 @deffn  Macro GAL_EPS_MARK_COLNAME_TEXT
 @deffnx Macro GAL_EPS_MARK_COLNAME_FONT
@@ -33437,7 +33438,7 @@ However you still need to be cautious in the following 
scenarios below.
 Many users or operating systems may still use an older version.
 @item
 The @code{wcsprm} structure of WCSLIB is not thread-safe: you can't use the 
same pointer on multiple threads.
-For example if you use @code{gal_wcs_img_to_world} simultaneously on multiple 
threads, you shouldn't pass the same @code{wcsprm} structure pointer.
+For example, if you use @code{gal_wcs_img_to_world} simultaneously on multiple 
threads, you shouldn't pass the same @code{wcsprm} structure pointer.
 You can use @code{gal_wcs_copy} to keep and use separate copies of the main 
structure within each thread, and later free the copies with 
@code{gal_wcs_free}.
 @end itemize
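A minimal sketch of this copy-per-thread pattern (the worker function and passing the shared pointer through its @code{void *} argument are illustrative assumptions):

@example
#include <gnuastro/wcs.h>

/* Each thread runs this worker and only uses its own private copy
   of the shared WCS structure that is passed to it. */
void *
worker_on_thread(void *in)
@{
  struct wcsprm *wcs=gal_wcs_copy( (struct wcsprm *)in );

  /* ... call gal_wcs_img_to_world (and others) on 'wcs' here ... */

  gal_wcs_free(wcs);
  return NULL;
@}
@end example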
 
@@ -33555,7 +33556,7 @@ It will determine the format of the WCS when it is 
later written to file with @c
 So if you do not want to write the WCS into a file later, just give it a value 
of @code{0}.
 For more on the difference between these modes, see the description of 
@option{--wcslinearmatrix} in @ref{Input output options}.
 
-If you do not want to search the full FITS header for WCS-related FITS 
keywords (for example due to conflicting keywords), but only a specific range 
of the header keywords you can use the @code{hstartwcs} and @code{hendwcs} 
arguments to specify the keyword number range (counting from zero).
+If you do not want to search the full FITS header for WCS-related FITS 
keywords (for example, due to conflicting keywords), but only a specific range 
of the header keywords you can use the @code{hstartwcs} and @code{hendwcs} 
arguments to specify the keyword number range (counting from zero).
 If @code{hendwcs} is larger than @code{hstartwcs}, then only keywords in the 
given range will be checked.
 Hence, to ignore this feature (and search the full FITS header), give both 
these arguments the same value.
 
@@ -33584,7 +33585,7 @@ A complete working example is given in the description 
of @code{gal_wcs_create}.
 
 @deftypefun {char *} gal_wcs_dimension_name (struct wcsprm @code{*wcs}, size_t 
@code{dimension})
 Return an allocated string array (that should be freed later) containing the 
first part of the @code{CTYPEi} FITS keyword (which contains the dimension name 
in the FITS standard).
-For example if @code{CTYPE1} is @code{RA---TAN}, the string that function 
returns will be @code{RA}.
+For example, if @code{CTYPE1} is @code{RA---TAN}, the string that this 
function returns will be @code{RA}.
 Recall that the second component of @code{CTYPEi} contains the type of 
projection.
 @end deftypefun
 
@@ -33680,7 +33681,7 @@ Read the given WCS structure and return its coordinate 
system as one of Gnuastro
 @deftypefun {struct wcsprm *} gal_wcs_coordsys_convert (struct wcsprm 
@code{*inwcs}, int @code{coordsysid})
 Return a newly allocated WCS structure with the @code{coordsysid} coordinate 
system identifier.
 The Gnuastro WCS coordinate system identifiers are defined in the 
@code{GAL_WCS_COORDSYS_*} macros mentioned above.
-Since the returned dataset is newly allocated, if you do not need the original 
dataset after this, use the WCSLIB library function @code{wcsfree} to free the 
input, for example @code{wcsfree(inwcs)}.
+Since the returned dataset is newly allocated, if you do not need the original 
dataset after this, use the WCSLIB library function @code{wcsfree} to free the 
input, for example, @code{wcsfree(inwcs)}.
 @end deftypefun
 
 @deftypefun int gal_wcs_distortion_from_string (char @code{*distortion})
@@ -33767,7 +33768,7 @@ Convert the linked list of world coordinates in 
@code{coords} to a linked list o
 @code{coords} must be a linked list of data structures of float64 (`double') 
type, see @ref{Linked lists} and @ref{List of gal_data_t}.
 The top (first popped/read) node of the linked list must be the first WCS 
coordinate (RA in an image usually) etc.
 Similarly, the top node of the output will be the first image coordinate (in 
the FITS standard).
-In case WCSLIB fails to convert any of the coordinates (for example the RA of 
one coordinate is given as 400!), the respective element in the output will be 
written as NaN.
+In case WCSLIB fails to convert any of the coordinates (for example, the RA of 
one coordinate is given as 400!), the respective element in the output will be 
written as NaN.
 
 If @code{inplace} is zero, then the output will be a newly allocated list and 
the input list will be untouched.
 However, if @code{inplace} is non-zero, the output values will be written into 
the input's already allocated array and the returned pointer will be the same 
pointer to @code{coords} (in other words, you can ignore the returned value).
@@ -33789,7 +33790,7 @@ See the description of @code{gal_wcs_world_to_img} for 
more details.
 When the dataset's type and other information are already known, any 
programming language (including C) provides some very good tools for various 
operations (including arithmetic operations like addition) on the dataset with 
a simple loop.
 However, as an author of a program, making assumptions about the type of data, 
its dimensions and other basic characteristics will come with a large 
processing burden.
 
-For example if you always read your data as double precision floating points 
for a simple operation like addition with an integer constant, you will be 
wasting a lot of CPU and memory when the input dataset is @code{int32} type for 
example (see @ref{Numeric data types}).
+For example, if you always read your data as double precision floating points 
for a simple operation like addition with an integer constant, you will be 
wasting a lot of CPU and memory when the input dataset is @code{int32} type, 
for example (see @ref{Numeric data types}).
 This overhead may be small for small images, but as you scale your process up 
and work with hundreds/thousands of files that can be very large, this overhead 
will take a significant portion of the processing power.
 The functions and macros in this section are designed precisely for this 
purpose: to allow you to do any of the defined operations on any dataset with 
no overhead (in the native type of the dataset).
 
@@ -33809,7 +33810,7 @@ So first we will review the constants for the 
recognized flags and operators and
 @cindex Bitwise Or
 Bit-wise flags to pass onto @code{gal_arithmetic} (see below).
 To pass multiple flags, use the bitwise-or operator.
-For example if you pass @code{GAL_ARITHMETIC_FLAG_INPLACE | 
GAL_ARITHMETIC_FLAG_NUMOK}, then the operation will be done in-place (without 
allocating a new array), and a single number will also be acceptable (that will 
be applied to all the pixels).
+For example, if you pass @code{GAL_ARITHMETIC_FLAG_INPLACE | 
GAL_ARITHMETIC_FLAG_NUMOK}, then the operation will be done in-place (without 
allocating a new array), and a single number will also be acceptable (that will 
be applied to all the pixels).
 Each flag is described below:
 
 @table @code
@@ -33822,15 +33823,15 @@ Hence the inputs are no longer usable after 
@code{gal_arithmetic}.
 
 @item GAL_ARITHMETIC_FLAG_NUMOK
 It is acceptable to use a number and an array together.
-For example if you want to add all the pixels in an image with a single number 
you can pass this flag to avoid having to allocate a constant array the size of 
the image (with all the pixels having the same number).
+For example, if you want to add all the pixels in an image with a single 
number, you can pass this flag to avoid having to allocate a constant array the 
size of the image (with all the pixels having the same number).
 
 @item GAL_ARITHMETIC_FLAG_ENVSEED
-Use the pre-defined environment variable for setting the random number 
generator seed when an operator needs it (for example @code{mknoise-sigma}).
+Use the pre-defined environment variable for setting the random number 
generator seed when an operator needs it (for example, @code{mknoise-sigma}).
 For more on random number generation in Gnuastro see @ref{Generating random 
numbers}.
 
 @item GAL_ARITHMETIC_FLAG_QUIET
 Do not print any warnings or messages for operators that may benefit from it.
-For example by default the @code{mknoise-sigma} operator prints the random 
number generator function and seed that it used (in case the user wants to 
reproduce this result later).
+For example, by default the @code{mknoise-sigma} operator prints the random 
number generator function and seed that it used (in case the user wants to 
reproduce this result later).
 By activating this bit flag in the call, that extra information is not printed 
on the command-line.
 
 @item GAL_ARITHMETIC_FLAGS_BASIC
@@ -34139,7 +34140,7 @@ Each operand has a type of `@code{gal_data_t *}' (see 
last paragraph with exampl
 
 If the operator can work on multiple threads, the number of threads can be 
specified with @code{numthreads}.
 When the operator is single-threaded, @code{numthreads} will be ignored.
-Special conditions can also be specified with the @code{flag} operator (a 
bit-flag with bits described above, for example 
@code{GAL_ARITHMETIC_FLAG_INPLACE} or @code{GAL_ARITHMETIC_FLAG_FREE}).
+Special conditions can also be specified with the @code{flag} argument (a 
bit-flag with bits described above, for example, 
@code{GAL_ARITHMETIC_FLAG_INPLACE} or @code{GAL_ARITHMETIC_FLAG_FREE}).
 
 @code{gal_arithmetic} is a multi-argument function (like C's @code{printf}).
 In other words, the number of necessary arguments is not fixed and depends on 
the value of @code{operator}.
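For instance, here is a sketch of adding a single number to every pixel of an image (@code{image} and @code{number} are assumed to be already-read @code{gal_data_t *} operands):

@example
gal_data_t *sum;

/* One thread; the NUMOK flag allows mixing an image and a number. */
sum=gal_arithmetic(GAL_ARITHMETIC_OP_PLUS, 1,
                   GAL_ARITHMETIC_FLAG_NUMOK, image, number);
@end example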
@@ -34201,7 +34202,7 @@ Note that @code{string} must only contain the single 
operator name and nothing e
 
 @deftypefun {char *} gal_arithmetic_operator_string (int @code{operator})
 Return the human-readable standard string that corresponds to the given 
operator.
-For example when the input is @code{GAL_ARITHMETIC_OP_PLUS} or 
@code{GAL_ARITHMETIC_OP_MEAN}, the strings @code{+} or @code{mean} will be 
returned.
+For example, when the input is @code{GAL_ARITHMETIC_OP_PLUS} or 
@code{GAL_ARITHMETIC_OP_MEAN}, the strings @code{+} or @code{mean} will be 
returned.
 @end deftypefun
 
 @deftypefun {gal_data_t *} gal_arithmetic_load_col (char @code{*str}, int 
@code{searchin}, int @code{ignorecase}, size_t @code{minmapsize}, int 
@code{quietmmap})
@@ -34393,7 +34394,7 @@ number of dimensions is irrelevant, it will be parsed 
contiguously.
 
 @item tilesll
 The list of input tiles (see @ref{List of gal_data_t}). Internally, it
-might be stored as an array (for example the output of
+might be stored as an array (for example, the output of
 @code{gal_tile_series_from_minmax} described above), but this function
 does not care; it will parse the @code{next} elements to go to the next
 tile. This function will not pop-from or free the @code{tilesll}, it will
@@ -34731,7 +34732,7 @@ On the right, you can see how the tiles are stored in 
memory (and shown if you s
 @end example
 
 As a result, if your values are stored in the same order as the tiles, and you
-want them in over-all memory (for example to save as a FITS file), you need
+want them in over-all memory (for example, to save as a FITS file), you need
 to permute the values:
 
 @example
@@ -34854,7 +34855,7 @@ When there is an overlap, the coordinates of the first 
and last pixels of the ov
 @subsection Polygons (@file{polygon.h})
 
 Polygons are commonly necessary in image processing.
-For example in Crop they are used for cutting out non-rectangular regions of a 
image (see @ref{Crop}), and in Warp, for mapping different pixel grids over 
each other (see @ref{Warp}).
+For example, in Crop they are used for cutting out non-rectangular regions of 
an image (see @ref{Crop}), and in Warp, for mapping different pixel grids over 
each other (see @ref{Warp}).
 
 @cindex Convex polygons
 @cindex Concave polygons
@@ -34884,7 +34885,7 @@ The largest number of vertices a polygon can have in 
this library.
 @deffn Macro GAL_POLYGON_ROUND_ERR
 @cindex Round-off error
 We have to consider floating point round-off errors when dealing with polygons.
-For example we will take @code{A} as the maximum of @code{A} and @code{B} when 
@code{A>B-GAL_POLYGON_ROUND_ERR}.
+For example, we will take @code{A} as the maximum of @code{A} and @code{B} 
when @code{A>B-GAL_POLYGON_ROUND_ERR}.
 @end deffn
 
 @deftypefun void gal_polygon_vertices_sort_convex (double @code{*in}, size_t 
@code{n}, size_t @code{*ordinds})
@@ -34897,7 +34898,7 @@ The input (@code{in}) is an array containing the 
coordinates (two values) of eac
 So @code{in} should have @code{2*n} elements.
 The output (@code{ordinds}) is an array with @code{n} elements specifying the 
indices in order.
 This array must have been allocated before calling this function.
-The indexes are output for more generic usage, for example in a homographic 
transform (necessary in warping an image, see @ref{Linear warping basics}), the 
necessary order of vertices is the same for all the pixels.
+The indexes are output for more generic usage, for example, in a homographic 
transform (necessary in warping an image, see @ref{Linear warping basics}), the 
necessary order of vertices is the same for all the pixels.
 In other words, only the positions of the vertices change, not the way they 
need to be ordered.
 Therefore, this function would only be necessary once.
 
@@ -34953,7 +34954,7 @@ one of the edges of a polygon, this will return 
@code{0}.
 Returns  @code{1} if the sorted polygon has a counter-clockwise orientation 
and @code{0} otherwise.
 This function uses the concept of ``winding'', which defines the relative 
order in which the vertices of a polygon are listed to determine the 
orientation of vertices.
 For complex polygons (where edges, or sides, intersect), the most significant 
orientation is returned.
-In a complex polygon, when the alternative windings are equal (for example an 
@code{8}-shape) it will return @code{1} (as if it was counter-clockwise).
+In a complex polygon, when the alternative windings are equal (for example, an 
@code{8}-shape), it will return @code{1} (as if it were counter-clockwise).
 Note that the polygon vertices have to be sorted before calling this function.
 @end deftypefun
 
@@ -35051,7 +35052,7 @@ below.
 @deffn {Global variable} {gal_qsort_index_single}
 @cindex Thread safety
 @cindex Multi-threaded operation
-Pointer to an array (for example @code{float *} or @code{int *}) to use as
+Pointer to an array (for example, @code{float *} or @code{int *}) to use as
 a reference in @code{gal_qsort_index_single_TYPE_d} or
 @code{gal_qsort_index_single_TYPE_i}, see the explanation of these
 functions for more. Note that if @emph{more than one} array is to be sorted
@@ -35081,7 +35082,7 @@ struct gal_qsort_index_multi
 When passed to @code{qsort}, this function will sort a @code{TYPE} array in
 decreasing order (first element will be the largest). Please replace
 @code{TYPE} (in the function name) with one of the @ref{Numeric data
-types}, for example @code{gal_qsort_int32_d}, or
+types}, for example, @code{gal_qsort_int32_d}, or
 @code{gal_qsort_float64_d}.
 @end deftypefun
 
@@ -35089,7 +35090,7 @@ types}, for example @code{gal_qsort_int32_d}, or
 When passed to @code{qsort}, this function will sort a @code{TYPE} array in
 increasing order (first element will be the smallest). Please replace
 @code{TYPE} (in the function name) with one of the @ref{Numeric data
-types}, for example @code{gal_qsort_int32_i}, or
+types}, for example, @code{gal_qsort_int32_i}, or
 @code{gal_qsort_float64_i}.
 @end deftypefun
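For instance, a small sketch with the C library's @code{qsort} (assuming the installed header name @file{gnuastro/qsort.h}):

@example
#include <stdlib.h>
#include <gnuastro/qsort.h>

int
main(void)
@{
  float arr[]=@{3.1f, 1.2f, 9.9f, 4.5f@};

  /* Increasing order: 'arr' becomes @{1.2, 3.1, 4.5, 9.9@}. */
  qsort(arr, 4, sizeof *arr, gal_qsort_float32_i);
  return 0;
@}
@end example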
 
@@ -35284,7 +35285,7 @@ If it is not possible for the other branch to have a 
better nearest neighbor, th
 As an example, let's use the k-d tree that was created in the example of 
@code{gal_kdtree_create} (above) and find the nearest row to a given coordinate 
(@code{point}).
 This will be a very common scenario, especially in large and multi-dimensional 
datasets where the k-d tree creation can take long and you do not want to 
re-create the k-d tree every time.
 In the @code{gal_kdtree_create} example output, we also wrote the k-d tree 
root index as a FITS keyword (@code{KDTROOT}), so after loading the two table 
data (input coordinates and k-d tree), we will read the root from the FITS 
keyword.
-This is a very simple example, but the scalability is clear: for example it is 
trivial to parallelize (see @ref{Library demo - multi-threaded operation}).
+This is a very simple example, but the scalability is clear: for example, it 
is trivial to parallelize (see @ref{Library demo - multi-threaded operation}).
 
 @example
 #include <stdio.h>
@@ -35418,7 +35419,7 @@ have any type), see above for the definition of 
permutation.
 @cindex Coordinate matching
 Matching is often necessary when two measurements of the same points have been 
done using different instruments (or hardware), different software or different 
configurations of the same software.
 In other words, you have two catalogs or tables, and each has N columns 
containing the N-dimensional ``coordinate'' values of each point.
-Each table can have other columns too, for example one can have brightness 
measurements in one filter, and another can have morphology measurements or etc.
+Each table can have other columns too, for example, one can have brightness 
measurements in one filter, and another can have morphology measurements, etc.
 
 The matching functions here will use the coordinate columns of the two tables 
to find a permutation for each, and the total number of matched rows 
(@mymath{N_{match}}).
 This will enable you to match by the positions if you like.
@@ -35458,7 +35459,7 @@ When the aperture is an ellipse, distances between the 
points are also calculate
 
 @strong{Output permutations ignore internal sorting}: the output permutations 
will correspond to the initial inputs.
 Therefore, even when @code{inplace!=0} (and this function re-arranges the 
inputs in place), the output permutation will correspond to original (possibly 
non-sorted) inputs. The reason for this is that you rarely want to permute the 
actual positional columns after the match.
-Usually, you also have other columns (for example the brightness, morphology 
and etc) and you want to find how they differ between the objects that match.
+Usually, you also have other columns (for example, the brightness or 
morphology) and you want to find how they differ between the objects that match.
 Once you have the permutations, they can be applied to those other columns 
(see @ref{Permutations}) and the higher-level processing can continue.
 So if you do not need the coordinate columns for the rest of your analysis, it 
is better to set @code{inplace=1}.
 
@@ -35573,7 +35574,7 @@ necessary, this function is much more efficient than 
calling
 Return the standard deviation from the values that can be obtained in a single 
pass through the distribution: @code{sum}: the sum of the elements, 
@code{sump2}: the sum of the power-of-2 of each element, and @code{num}: the 
number of elements.
 
 This is a low-level function that is only useful after the distribution of 
values has been parsed (and the three input arguments are calculated).
-It is the lower-level function that is used in functions like 
@code{gal_statistics_std}, or other components of Gnuastro that measure the 
standard deviation (for example MakeCatalog's @option{--std} column).
+It is the lower-level function that is used in functions like 
@code{gal_statistics_std}, or other components of Gnuastro that measure the 
standard deviation (for example, MakeCatalog's @option{--std} column).
 @end deftypefun
 
 @cindex Median
@@ -35684,7 +35685,7 @@ can be very useful when repeated checks are necessary). 
When
 will not toggle the flags.
 
 If you want to re-check a dataset with the blank-value-check flag already
-set (for example if you have made changes to it), then explicitly set the
+set (for example, if you have made changes to it), then explicitly set the
 @code{GAL_DATA_FLAG_SORT_CH} bit to zero before calling this function. When
 there are no other flags, you can simply set the flags to zero (with
 @code{input->flag=0}), otherwise you can use this expression:
@@ -35844,7 +35845,7 @@ Find the first positive outlier (if 
@code{pos1_neg0!=0}) in the @code{input} dis
 When @code{pos1_neg0==0}, the same algorithm goes to the start of the dataset.
 The returned dataset contains a single element: the first positive outlier.
 It is one of the dataset's elements, in the same type as the input.
-If the process fails for any reason (for example no outlier was found), a 
@code{NULL} pointer will be returned.
+If the process fails for any reason (for example, no outlier was found), a 
@code{NULL} pointer will be returned.
 
 All (possibly existing) blank elements are first removed from the input 
dataset, then it is sorted.
 A sliding window of @code{window_size} elements is parsed over the dataset.
@@ -36107,7 +36108,7 @@ Being a list, helps in easily printing the output 
columns to a table (see @ref{T
 Binary datasets only have two (usable) values: 0 (also known as background)
 or 1 (also known as foreground). They are created after some binary
 classification is applied to the dataset. The most common is thresholding:
-for example in an image, pixels with a value above the threshold are given
+for example, in an image, pixels with a value above the threshold are given
 a value of 1 and those with a value less than the threshold are assigned a
 value of 0.
 
@@ -36125,12 +36126,12 @@ The classification is done by the distance from 
element center to the
 neighbor's center. The nearest immediate neighbors have a connectivity of
 1, the second nearest class of neighbors have a connectivity of 2 and so
 on. In total, the largest possible connectivity for data with @code{ndim}
-dimensions is @code{ndim}. For example in a 2D dataset, 4-connected
+dimensions is @code{ndim}. For example, in a 2D dataset, 4-connected
 neighbors (that share an edge and have a distance of 1 pixel) have a
 connectivity of 1. The other 4 neighbors that only share a vertex (with a
 distance of @mymath{\sqrt{2}} pixels) have a connectivity of
 2. Conventionally, the class of connectivity-2 neighbors also includes the
-connectivity 1 neighbors, so for example we call them 8-connected neighbors
+connectivity 1 neighbors, so, for example, we call them 8-connected neighbors
 in 2D datasets.
 
 Ideally, one bit is sufficient for each element of a binary
@@ -36276,7 +36277,7 @@ Although the adjacency matrix as used here is 
symmetric, currently this function
 Find the number of connected labels and new labels based on an adjacency list.
 The output of this function is identical to that of 
@code{gal_binary_connected_adjacency_matrix}.
 But the major difference is that it uses a list of connected labels to each 
label instead of a square adjacency matrix.
-This is done because when the number of labels becomes very large (for example 
on the scale of 100,000), the adjacency matrix can consume more than 10GB of 
RAM!
+This is done because when the number of labels becomes very large (for 
example, on the scale of 100,000), the adjacency matrix can consume more than 
10GB of RAM!
 
 The input list has the following format: it is an array of pointers to 
@code{gal_list_sizet_t *} (or @code{gal_list_sizet_t **}).
 The array has @code{number} elements and each @code{listarr[i]} is a linked 
list of @code{gal_list_sizet_t *}.
@@ -36347,7 +36348,7 @@ However, when your objects/targets are touching, the 
simple connected
 components algorithm is not enough and a still higher-level labeling
 mechanism is necessary. This brings us to the necessity of the functions in
 this part of Gnuastro's library. The main inputs to the functions in this
-section are already labeled datasets (for example with the connected
+section are already labeled datasets (for example, with the connected
 components algorithm above).
 
 Each of the labeled regions are independent of each other (the labels
@@ -36356,7 +36357,7 @@ datasets, it is often useful to process each label on 
independent CPU
 threads in parallel rather than in series. Therefore the functions of this
 section actually use an array of pixel/element indices (belonging to each
 label/class) as the main identifier of a region. Using indices will also
-allow processing of overlapping labels (for example in deblending
+allow processing of overlapping labels (for example, in deblending
 problems). Just note that overlapping labels are not yet implemented, but
 planned. You can use @code{gal_label_indexs} to generate lists of indices
 belonging to separate classes from the labeled input.
@@ -36521,7 +36522,7 @@ measurement is made) will not be included in the output 
table.
 This function is initially intended for a multi-threaded environment.
 In such cases, you will be writing arrays of clump measures from different 
regions in parallel into an array of @code{gal_data_t}s.
 You can simply allocate (and initialize) such an array with the 
@code{gal_data_array_calloc} function in @ref{Arrays of datasets}.
-For example if the @code{gal_data_t} array is called @code{array}, you can 
pass @code{&array[i]} as @code{sig}.
+For example, if the @code{gal_data_t} array is called @code{array}, you can 
pass @code{&array[i]} as @code{sig}.
 
 Along with some other functions in @code{label.h}, this function was initially 
written for @ref{Segment}.
 The description of the parameter used to measure a clump's significance is 
fully given in @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}.
@@ -36557,7 +36558,7 @@ Note that the @code{indexs->array} is not re-allocated 
to its new size at the en
 But since @code{indexs->dsize[0]} and @code{indexs->size} have new values 
after this function is returned, the extra elements just will not be used until 
they are ultimately freed by @code{gal_data_free}.
 
 Connectivity is a value between @code{1} (fewest number of neighbors) and the 
number of dimensions in the input (most number of neighbors).
-For example in a 2D dataset, a connectivity of @code{1} and @code{2} 
corresponds to 4-connected and 8-connected neighbors.
+For example, in a 2D dataset, a connectivity of @code{1} and @code{2} 
corresponds to 4-connected and 8-connected neighbors.
 @end deftypefun
 
 
@@ -36621,7 +36622,7 @@ is much faster.
 @cindex Sky line
 @cindex Interpolation
 During data analysis, it happens that parts of the data cannot be given a 
value, but one is necessary for the higher-level analysis.
-For example a very bright star saturated part of your image and you need to 
fill in the saturated pixels with some values.
+For example, a very bright star saturated a part of your image and you need to 
fill in the saturated pixels with some values.
 Another common use case is masked sky-lines in 1D spectra that similarly 
need to be assigned a value for higher-level analysis.
 In other situations, you might want a value in an arbitrary point: between the 
elements/pixels where you have data.
 The functions described in this section are for such operations.
@@ -36662,7 +36663,7 @@ If @code{onlyblank} is non-zero, then only blank 
elements will be interpolated a
 This function is multi-threaded and will run on @code{numthreads} threads (see 
@code{gal_threads_number} in @ref{Multithreaded programming}).
 
 @code{tl} is Gnuastro's tessellation structure used to define tiles over an 
image and is fully described in @ref{Tile grid}.
-When @code{tl!=NULL}, then it is assumed that the @code{input->array} contains 
one value per tile and interpolation will respect certain tessellation 
properties, for example to not interpolate over channel borders.
+When @code{tl!=NULL}, then it is assumed that the @code{input->array} contains 
one value per tile and interpolation will respect certain tessellation 
properties, for example, to not interpolate over channel borders.
 
 If several datasets have the same set of blank values, you do not need to call 
this function multiple times.
 When @code{aslinkedlist} is non-zero, then @code{input} will be seen as a 
@ref{List of gal_data_t}.
@@ -36760,7 +36761,7 @@ Once @code{gsl_spline} has been initialized by this 
function, the
 interpolation can be evaluated for any X value within the non-blank range
 of the input using @code{gsl_spline_eval} or @code{gsl_spline_eval_e}.
 
-For example in the small program below (@file{sample-interp.c}), we read the 
first two columns of the table in @file{table.txt} and feed them to this 
function to later estimate the values in the second column for three selected 
points.
+For example, in the small program below (@file{sample-interp.c}), we read the 
first two columns of the table in @file{table.txt} and feed them to this 
function to later estimate the values in the second column for three selected 
points.
 You can use @ref{BuildProgram} to compile and run this function, see 
@ref{Library demo programs} for more.
 
 Contents of the @file{table.txt} file:
@@ -36939,7 +36940,7 @@ This ensures that the center of the central pixel lies 
at the requested central
 
 @item gal_data_t *center
 WCS coordinate of the center of the central pixel of the output.
-The units depend on the WCS, for example if the @code{CUNITi} keywords are 
@code{deg}, it is in degrees.
+The units depend on the WCS, for example, if the @code{CUNITi} keywords are 
@code{deg}, it is in degrees.
 This dataset should have a type of @code{GAL_TYPE_FLOAT64} and contain exactly 
two values.
 
 @item size_t numthreads
@@ -37043,7 +37044,7 @@ It is upto the caller to have the space for three 
32-bit floating point numbers
 @cindex Git
 @cindex libgit2
 Git is one of the most common tools for version control and it can often be
-useful during development, for example see @code{COMMIT} keyword in
+useful during development, for example, see the @code{COMMIT} keyword in
 @ref{Output FITS files}. At installation time, Gnuastro will also check for
 the existence of libgit2, and store the value in the
 @code{GAL_CONFIG_HAVE_LIBGIT2}, see @ref{Configuration information} and
@@ -37115,7 +37116,7 @@ The functions in this section are defined to facilitate 
the easy conversion betw
 If there are certain conversions that are useful for your work, please get in 
touch.
 
 @deftypefun int gal_units_extract_decimal (char @code{*convert}, const char 
@code{*delimiter}, double @code{*args}, size_t @code{n})
-Parse the input @code{convert} string with a certain delimiter (for example 
@code{01:23:45}, where the delimiter is @code{":"}) as multiple numbers (for 
example 1,23,45) and write them as an array in the space that @code{args} is 
pointing to.
+Parse the input @code{convert} string with a certain delimiter (for example, 
@code{01:23:45}, where the delimiter is @code{":"}) as multiple numbers (for 
example, 1,23,45) and write them as an array in the space that @code{args} is 
pointing to.
 The expected number of values in the string is specified by the @code{n} 
argument (3 in the example above).
 
 If the function succeeds, it will return 1; otherwise, it will return 0 and the values may not be fully written into @code{args}.
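
 As a minimal sketch of a call (assuming this function is declared in @file{gnuastro/units.h}; you can build and run it with @ref{BuildProgram}):

 @example
 #include <stdio.h>
 #include <gnuastro/units.h>    /* Assumed header for this function. */

 int
 main(void)
 @{
   double args[3];
   char convert[]="01:23:45";   /* Writable buffer, since the first
                                   argument is a 'char *'.           */

   /* On success this should print: 1, 23, 45 */
   if( gal_units_extract_decimal(convert, ":", args, 3) )
     printf("%g, %g, %g\n", args[0], args[1], args[2]);
   else
     printf("Could not parse '%s' as 3 numbers\n", convert);
   return 0;
 @}
 @end example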
@@ -37648,7 +37649,7 @@ The important thing is that each action should have its 
own unique ID (counting
 You can then follow the process below and use each thread to work on all the 
targets that are assigned to it.
 Recall that spinning off threads is itself an expensive process, and we do not want to spin off one thread for each target (see the description of @code{gal_threads_dist_in_threads} in @ref{Gnuastro's thread related functions}).
 
-There are many (more complicated, real-world) examples of using 
@code{gal_threads_spin_off} in Gnuastro's actual source code, you can see them 
by searching for the @code{gal_threads_spin_off} function from the top source 
(after unpacking the tarball) directory (for example with this command):
+There are many (more complicated, real-world) examples of using 
@code{gal_threads_spin_off} in Gnuastro's actual source code, you can see them 
by searching for the @code{gal_threads_spin_off} function from the top source 
(after unpacking the tarball) directory (for example, with this command):
 
 @example
 $ grep -r gal_threads_spin_off ./
@@ -38220,7 +38221,7 @@ All the Gnuastro programs provide very low level and 
modular operations
 (modeled on GNU Coreutils). Almost all the basic command-line programs like
 @command{ls}, @command{cp} or @command{rm} on GNU/Linux operating systems
 are part of GNU Coreutils. This enables you to use shell scripting
-languages (for example GNU Bash) to operate on a large number of files or
+languages (for example, GNU Bash) to operate on a large number of files or
 do very complex things through the creative combinations of these tools
 that the authors had never dreamed of. We have put a few simple examples in
 @ref{Tutorials}.
@@ -38230,7 +38231,7 @@ that the authors had never dreamed of. We have put a 
few simple examples in
 @cindex Python Matplotlib
 @cindex Matplotlib, Python
 @cindex PGFplots in @TeX{} or @LaTeX{}
-For example all the analysis output can be saved as ASCII tables which can
+For example, all the analysis output can be saved as ASCII tables which can
 be fed into your favorite plotting program to inspect visually. Python's
 Matplotlib is very useful for fast plotting of the tables to immediately
 check your results. If you want to include the plots in a document, you can
@@ -38239,7 +38240,7 @@ such operations in Gnuastro. In short, Bash can act as 
a glue to connect
 the inputs and outputs of all these various Gnuastro programs (and other
 programs) in any fashion. Of course, Gnuastro's programs are just
 front-ends to the main workhorse (@ref{Gnuastro library}), allowing a user
-to create their own programs (for example with @ref{BuildProgram}). So once
+to create their own programs (for example, with @ref{BuildProgram}). So once
 the functions within programs become mature enough, they will be moved
 within the libraries for even more general applications.
 
@@ -38273,7 +38274,7 @@ readability that the whole package follows the same 
convention.
 
 @item
 The code must be easy to read by eye. So when the order of several lines
-within a function does not matter (for example when defining variables at
+within a function does not matter (for example, when defining variables at
 the start of a function), you should put the lines in the order of
 increasing length and group the variables with similar types such that this
 half-pyramid of declarations becomes most visible. If the reader is
@@ -38323,7 +38324,7 @@ most terminal emulators. If you use Emacs, Gnuastro 
sets the 75 character
 For long comments you can press @key{Alt-q} in Emacs to break them
 into separate lines automatically. For long literal strings, you can use
 the fact that in C, two strings immediately after each other are
-concatenated, for example @code{"The first part, " "and the second part."}.
+concatenated, for example, @code{"The first part, " "and the second part."}.
 Note the space character at the end of the first part. Since they are now
 separated, you can easily break a long literal string into several lines
 and adhere to the maximum 75 character line length policy.
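
 As a short, self-contained illustration of this point (it compiles
 with any C compiler):

 @example
 #include <stdio.h>

 int
 main(void)
 @{
   /* The two adjacent literals are concatenated by the compiler, so
      one continuous sentence is printed. */
   printf("The first part, "
          "and the second part.\n");
   return 0;
 @}
 @end example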
@@ -38359,7 +38360,7 @@ the GNU Portability Library (Gnulib) will use for a 
unified environment
 use the GNU C library.
 
 @item
-The C library header files, for example @file{stdio.h}, @file{stdlib.h}, or
+The C library header files, for example, @file{stdio.h}, @file{stdlib.h}, or
 @file{math.h}.
 @item
 Installed library header files, including Gnuastro's installed headers (for
@@ -38372,7 +38373,7 @@ Gnuastro's internal headers (that are not installed), 
for example
 For programs, the @file{main.h} file (which is needed by the next group of
 headers).
 @item
-That particular program's header files, for example @file{mkprof.h}, or
+That particular program's header files, for example, @file{mkprof.h}, or
 @file{noisechisel.h}.
 @end enumerate
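
 For instance, the top of a hypothetical program source file following
 this ordering could look like the skeleton below (the specific file
 names are only illustrative, and such a skeleton only compiles inside
 a program's directory in the Gnuastro source tree):

 @example
 #include <config.h>            /* 1: via Gnulib.                 */

 #include <math.h>              /* 2: C library headers.          */
 #include <stdio.h>

 #include <gnuastro/fits.h>     /* 3: installed library headers.  */

 #include <gnuastro-internal/checkset.h> /* 4: internal headers.  */

 #include "main.h"              /* 5: the program's main.h.       */

 #include "mkprof.h"            /* 6: this program's own header.  */
 @end example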
 
@@ -38424,7 +38425,7 @@ the file in which they are defined as prefix, using 
underscores to separate
 words@footnote{The convention of using underscores to separate words is
 called ``snake case'' (or ``snake_case''). This is also recommended by
 the GNU coding standards.}. The same applies to exported macros, but in upper case.
-For example in Gnuastro's top source directory, the prototype of function
+For example, in Gnuastro's top source directory, the prototype of function
 @code{gal_box_border_from_center} is in @file{lib/gnuastro/box.h}, and the
 macro @code{GAL_POLYGON_MAX_CORNERS} is defined in
 @code{lib/gnuastro/polygon.h}.
@@ -38598,7 +38599,7 @@ The functions to do both these jobs must be defined in 
other source files.
 @cindex Root parameter structure
 @cindex Main parameters C structure
 All the major parameters which will be used in the program must be stored in a 
structure which is defined in @file{main.h}.
-The name of this structure is usually @code{prognameparams}, for example 
@code{cropparams} or @code{noisechiselparams}.
+The name of this structure is usually @code{prognameparams}, for example, 
@code{cropparams} or @code{noisechiselparams}.
 So @code{#include "main.h"} will be a staple in all the source files of the program.
 It is also regularly the first (and only) argument of most of the program's functions, which greatly helps readability.
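
 As a hedged sketch of such a structure for a hypothetical program named @file{myprog} (the field names here are invented for illustration, and the header and name of the common-parameters structure are assumptions to be checked against an existing program's @file{main.h}):

 @example
 /* Illustrative sketch of a 'main.h' parameters structure. */
 #include <gnuastro/data.h>
 #include <gnuastro-internal/options.h> /* Assumed location of the
                                           common parameters.        */

 struct myprogparams
 @{
   /* From the command-line. */
   struct gal_options_common_params cp; /* Common parameters.        */
   char        *inputname;              /* Name of the input file.   */
   float        threshold;              /* A program-specific option. */

   /* Internal. */
   gal_data_t  *input;                  /* The loaded input dataset. */
 @};
 @end example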
 
@@ -38611,7 +38612,7 @@ The main root structure of all programs contains at 
least one instance of the @c
 This structure will keep the values of all common options in Gnuastro's programs (see @ref{Common options}).
 This top root structure is conveniently called @code{p} (short for parameters) 
by all the functions in the programs and the common options parameters within 
it are called @code{cp}.
 With this convention any reader can immediately understand where to look for 
the definition of one parameter.
-For example you know that @code{p->cp->output} is in the common parameters 
while @code{p->threshold} is in the program's parameters.
+For example, you know that @code{p->cp->output} is in the common parameters 
while @code{p->threshold} is in the program's parameters.
 
 @cindex Structure de-reference operator
 @cindex Operator, structure de-reference
@@ -38662,7 +38663,7 @@ It also makes all the errors related to input appear 
before the processing begin
 
 @item progname.c, progname.h
 @cindex Top processing source file
-The high-level processing functions in each program are in a file named 
@file{progname.c}, for example @file{crop.c} or @file{noisechisel.c}.
+The high-level processing functions in each program are in a file named 
@file{progname.c}, for example, @file{crop.c} or @file{noisechisel.c}.
 The function within these files which @code{main} calls is also named after 
the program, for example
 
 @example
@@ -38700,7 +38701,7 @@ This shell script is used for implementing 
auto-completion features when running
 For more on the concept of shell auto-completion and how it is managed in 
Gnuastro, see @ref{Bash programmable completion}.
 
 These files assume a set of common shell functions that have the prefix 
@code{_gnuastro_autocomplete_} in their name and are defined in 
@file{bin/complete.bash.in} (of the source directory, and under version 
control) and @file{bin/complete.bash.built} (built during the building of 
Gnuastro in the build directory).
-During Gnuastro's build, all these Bash completion files are merged into one 
file that is installed and the user can @code{source} them into their Bash 
startup file, for example see @ref{Quick start}.
+During Gnuastro's build, all these Bash completion files are merged into one file that is installed, and the user can @code{source} it in their Bash startup file; for example, see @ref{Quick start}.
 
 @end vtable
 
@@ -38731,7 +38732,7 @@ already bootstrapped Gnuastro, if not, please see 
@ref{Bootstrapping}.
 
 @enumerate
 @item
-Select a name for your new program (for example @file{myprog}).
+Select a name for your new program (for example, @file{myprog}).
 
 @item
 Copy the @file{TEMPLATE} directory to a directory with your program's name:
@@ -39028,7 +39029,7 @@ VPATH builds)'' in the Automake manual for a nice 
explanation.
 Because of this, any necessary inputs that are distributed in the
 tarball@footnote{In many cases, the inputs of a test are outputs of
previous tests; this does not apply to this class of inputs, because all
-outputs of previous tests are in the ``build tree''.}, for example the
+outputs of previous tests are in the ``build tree''.}, for example, the
 catalogs necessary for checks in MakeProfiles and Crop, must be identified
 with the @command{$topsrc} prefix instead of @command{../} (for the top
 source directory that is unpacked). This @command{$topsrc} variable points
@@ -39218,7 +39219,7 @@ Since the @option{--help} option will list all the 
options available in Gnuastro
 Note that the actual TAB completion in Gnuastro is a little more complex than this, and is fully described in @ref{Implementing TAB completion in Gnuastro}.
 But this is a good exercise to get started.
 
-We Will use @command{asttable} as the demo, and the goal is to suggest all 
options that this program has to offer.
+We will use @command{asttable} as the demo, and the goal is to suggest all 
options that this program has to offer.
 You can print all of them (with a lot of extra information) with this command:
 
 @example
@@ -39233,7 +39234,7 @@ One way to catch the long options is through 
@command{awk} as shown below.
 We only keep the lines that 1) start with an empty space, 2) have @samp{-} as their first non-white character, and have the format of @samp{--} followed by any number of numbers or characters.
 Within those lines, if the first word ends in a comma (@samp{,}), the first 
word is the short option, so we want the second word (which is the long option).
 Otherwise, the first word is the long option.
-But for options that take a value, this will also include the format of the 
value (for example @option{--column=STR}).
+But for options that take a value, this will also include the format of the 
value (for example, @option{--column=STR}).
 So with a @command{sed} command, we remove everything that is after the equal 
sign, but keep the equal sign itself (to highlight to the user that this option 
should have a value).
 
 @example
@@ -39311,7 +39312,7 @@ That is important because the long format of an option, 
its value is more clear
 
 You have now written a very basic and working TAB completion script that can 
easily be generalized to include more options (and be good for a single/simple 
program).
 However, Gnuastro has many programs that share many features, and their options are not independent.
-Also, complex situations do often come up: for example some people use a 
@file{.fit} suffix for FITS files and others do not even use a suffix at all!
+Also, complex situations often come up: for example, some people use a @file{.fit} suffix for FITS files and others do not use a suffix at all!
 So in practice, things need to get a little more complicated, but the core 
concept is what you learnt in this section.
 We just modularize the process (breaking logically independent steps into 
separate functions to use in different situations).
 In @ref{Implementing TAB completion in Gnuastro}, we will review the 
generalities of Gnuastro's implementation of Bash TAB completion.
@@ -39329,7 +39330,7 @@ As a result, Bash's TAB completion is implemented as 
multiple files in Gnuastro:
 @table @asis
 @item @file{bin/completion.bash.built} (in build directory, automatically 
created)
 This file contains the values of all Gnuastro options or arguments that take 
fixed strings as values (not file names).
-For example the names of Arithmetic's operators (see @ref{Arithmetic 
operators}), or spectral line names (like @option{--obsline} in 
@ref{CosmicCalculator input options}).
+For example, the names of Arithmetic's operators (see @ref{Arithmetic 
operators}), or spectral line names (like @option{--obsline} in 
@ref{CosmicCalculator input options}).
 
 This file is created automatically during the building of Gnuastro.
 The recipe to build it is available in Gnuastro's top-level @file{Makefile.am} 
(under the target @code{bin/completion.bash}).
@@ -39471,7 +39472,7 @@ registered on Savannah. When posting an issue to a 
tracker, it is very
 important to choose the `Category' and `Item Group' options accurately. The
 first contains a list of all Gnuastro's programs along with `Installation',
 `New program' and `Webpage'. The ``Item Group'' contains the nature of the
-issue, for example if it is a `Crash' in the software (a bug), or a problem
+issue, for example, if it is a `Crash' in the software (a bug), or a problem
 in the documentation (also a bug), a feature request or an enhancement.
 
 The set of horizontal links on the top of the page (Starting with
@@ -39538,7 +39539,7 @@ to this item at:''. That link will take you directly to 
the issue
 discussion page, where you can read the discussion history or join it.
 
 While you are posting comments on the Savannah issues, be sure to update
-the meta-data. For example if the task/bug is not assigned to anyone and
+the meta-data. For example, if the task/bug is not assigned to anyone and
 you would like to take it, change the ``Assigned to'' box, or if you want
 to report that it has been applied, change the status and so on. All these
 changes will also be circulated with the email very clearly.
@@ -39809,7 +39810,7 @@ This is a tutorial on the second suggested method 
(commonly known as forking) th
 
 To start, please create an empty repository on your hosting service web page 
(we recommend Codeberg since it is fully free software@footnote{See 
@url{https://www.gnu.org/software/repo-criteria-evaluation.html} for an 
evaluation of the major existing repositories.
 Gnuastro uses GNU Savannah (which also has the highest ranking in the 
evaluation), but for starters, Codeberg may be easier (it is fully free 
software).}).
-If this is your first hosted repository on the web page, you also have to 
upload your public SSH key@footnote{For example see this explanation provided 
by Codeberg: @url{https://docs.codeberg.org/security/ssh-key}.} for the 
@command{git push} command below to work.
+If this is your first hosted repository on the web page, you also have to 
upload your public SSH key@footnote{For example, see this explanation provided 
by Codeberg: @url{https://docs.codeberg.org/security/ssh-key}.} for the 
@command{git push} command below to work.
 Here we will assume you use the name @file{janedoe} to refer to yourself 
everywhere and that you choose @file{gnuastro} as the name of your Gnuastro 
fork.
 Any online hosting service will give you an address (similar to the `@file{git@@codeberg.org:...}' below) for the empty repository you have created using their web page; use that address in the third line below.
 
@@ -39902,22 +39903,22 @@ Throughout this book, they are ordered based on their 
context, please see the to
 
 @item Arithmetic
 (@file{astarithmetic}, see @ref{Arithmetic}) For arithmetic operations on 
multiple (theoretically unlimited) number of datasets (images).
-It has a large and growing set of arithmetic, mathematical, and even 
statistical operators (for example @command{+}, @command{-}, @command{*}, 
@command{/}, @command{sqrt}, @command{log}, @command{min}, @command{average}, 
@command{median}, see @ref{Arithmetic operators}).
+It has a large and growing set of arithmetic, mathematical, and even 
statistical operators (for example, @command{+}, @command{-}, @command{*}, 
@command{/}, @command{sqrt}, @command{log}, @command{min}, @command{average}, 
@command{median}, see @ref{Arithmetic operators}).
 
 @item BuildProgram
 (@file{astbuildprog}, see @ref{BuildProgram}) Compile, link and run custom C 
programs that depend on the Gnuastro library (see @ref{Gnuastro library}).
 This program will automatically link with the libraries that Gnuastro depends 
on, so there is no need to explicitly mention them every time you are compiling 
a Gnuastro library dependent program.
 
 @item ConvertType
-(@file{astconvertt}, see @ref{ConvertType}) Convert astronomical data files 
(FITS or IMH) to and from several other standard image and data formats, for 
example TXT, JPEG, EPS or PDF.
-Optionally, it is also possible to add vector graphics markers over the output 
image (for example circles from catalogs containing RA or Dec).
+(@file{astconvertt}, see @ref{ConvertType}) Convert astronomical data files 
(FITS or IMH) to and from several other standard image and data formats, for 
example, TXT, JPEG, EPS or PDF.
+Optionally, it is also possible to add vector graphics markers over the output 
image (for example, circles from catalogs containing RA or Dec).
 
 @item Convolve
 (@file{astconvolve}, see @ref{Convolve}) Convolve (blur or smooth) data with a 
given kernel in spatial and frequency domain on multiple threads.
 Convolve can also do de-convolution to find the appropriate kernel to 
PSF-match two images.
 
 @item CosmicCalculator
-(@file{astcosmiccal}, see @ref{CosmicCalculator}) Do cosmological 
calculations, for example the luminosity distance, distance modulus, comoving 
volume and many more.
+(@file{astcosmiccal}, see @ref{CosmicCalculator}) Do cosmological 
calculations, for example, the luminosity distance, distance modulus, comoving 
volume and many more.
 
 @item Crop
 (@file{astcrop}, see @ref{Crop}) Crop region(s) from one or many image(s) and 
stitch several images if necessary.
@@ -40082,7 +40083,7 @@ export XPA_METHOD=local
 @end example
 
 @item
-Your system may not have the SSL library in its standard library path, in this 
case, put this command in your startup file (for example @file{~/.bashrc}):
+Your system may not have the SSL library in its standard library path; in this case, put this command in your startup file (for example, @file{~/.bashrc}):
 
 @example
 export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/ssl/lib"
@@ -40159,7 +40160,7 @@ If you want your plotting codes in between your C 
program, PGPLOT is
 currently one of your best options. The recommended alternative to
 this method is to get the raw data for the plots in text files and
 input them into any of the various more modern and capable plotting
-tools separately, for example the Matplotlib library in Python or
+tools separately, for example, the Matplotlib library in Python or
 PGFplots in @LaTeX{}. This will also significantly help code
 readability. Let's get back to PGPLOT for the sake of
 WCSLIB. Installing it is a little tricky (mainly because it is so


