From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 85c0cf1f: Book: fixed various typos and minor grammatical mistakes
Date: Sat, 27 Aug 2022 12:49:26 -0400 (EDT)

branch: master
commit 85c0cf1fe0349a8c5542364de144da6ef6b6bfbf
Author: Marjan Akbari <mrjakbari@gmail.com>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    Book: fixed various typos and minor grammatical mistakes
    
    Until now, there were several common grammatical issues throughout the text
    (for example, using "n't" instead of "not"), as well as several typos and
    unclear sentences in the tutorials.
    
    With this commit, they have been corrected.
---
 doc/gnuastro.texi | 2291 ++++++++++++++++++++++++++---------------------------
 1 file changed, 1142 insertions(+), 1149 deletions(-)

diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index a84ba89b..7464cf81 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -784,7 +784,7 @@ Gnuastro library
 
 Multithreaded programming (@file{threads.h})
 
-* Implementation of pthread_barrier::  Some systems don't have pthread_barrier
+* Implementation of pthread_barrier::  Some systems do not have pthread_barrier
 * Gnuastro's thread related functions::  Functions for managing threads.
 
 Data container (@file{data.h})
@@ -924,21 +924,21 @@ As discussed in @ref{Science and its tools} this is a 
founding principle of the
 @cindex Source, uncompress
 The latest official release tarball is always available as 
@url{http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz, 
@file{gnuastro-latest.tar.lz}}.
 The @url{http://www.nongnu.org/lzip/lzip.html, Lzip} format is used for better 
compression (smaller output size, thus faster download), and robust archival 
features and standards.
-For historical reasons (those users that don't yet have Lzip), the Gzip'd 
tarball@footnote{The Gzip library and program are commonly available on most 
systems.
+For historical reasons (those users that do not yet have Lzip), the Gzip'd 
tarball@footnote{The Gzip library and program are commonly available on most 
systems.
 However, Gnuastro recommends Lzip as described above and the beta-releases are 
also only distributed in @file{tar.lz}.} is available at the same URL (just 
change the @file{.lz} suffix above to @file{.gz}; however, the Lzip'd file is 
recommended).
 See @ref{Release tarball} for more details on the tarball release.
 
 Let's assume the downloaded tarball is in the @file{TOPGNUASTRO} directory.
 You can follow the commands below to download and un-compress the Gnuastro 
source.
You need to have the @command{lzip} program for the decompression (see 
@ref{Dependencies from package managers}).
-If your Tar implementation doesn't recognize Lzip (the third command fails), 
run the fourth command.
-Note that lines starting with @code{##} don't need to be typed (they are only 
a description of the following command):
+If your Tar implementation does not recognize Lzip (the third command fails), 
run the fourth command.
+Note that lines starting with @code{##} do not need to be typed (they are only 
a description of the following command):
 
 @example
 ## Go into the download directory.
 $ cd TOPGNUASTRO
 
-## If you don't already have the tarball, you can download it:
+## If you do not already have the tarball, you can download it:
 $ wget http://ftp.gnu.org/gnu/gnuastro/gnuastro-latest.tar.lz
 
 ## If this fails, run the next command.
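$ tar -xf gnuastro-latest.tar.lz

## Only needed if the previous command failed (a hedged sketch,
## assuming a Tar implementation without built-in Lzip support):
$ lzip -cd gnuastro-latest.tar.lz | tar -xf -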
@@ -1164,7 +1164,7 @@ But on the right, the image gets pixelated, and we only 
see the parts that are w
The pixels that are nearer to the center of the galaxy (which is brighter) 
are also larger.
 But as we follow the spiral arms (and get more distant from the center), the 
pixels get smaller (signifying less signal).
 
-This sharp distinction between the contiguous and pixelated view of the galaxy 
signifies the main struggle in science: in the ``real'' world, objects aren't 
pixelated or discrete and have no noise.
+This sharp distinction between the contiguous and pixelated view of the galaxy 
signifies the main struggle in science: in the ``real'' world, objects are not 
pixelated or discrete and have no noise.
 However, when we observe nature, we are confined and constrained by the 
resolution of our data collection (CCD imager in this case).
 
 On the other hand, we read English text from the left and progress towards the 
right.
@@ -1247,8 +1247,8 @@ This will simultaneously improve performance and 
transparency.
 Shell scripting (or Makefiles) are also basic constructs that are easy to 
learn and readily available as part of the Unix-like operating systems.
 If there is no program to do a desired step, Gnuastro's libraries can be used 
to build specific programs.
 
-The main factor is that all observatories or projects can freely contribute to 
Gnuastro and all simultaneously benefit from it (since it doesn't belong to any 
particular one of them), much like how for-profit organizations (for example 
RedHat, or Intel and many others) are major contributors to free and open 
source software for their shared benefit.
-Gnuastro's copyright has been fully awarded to GNU, so it doesn't belong to 
any particular astronomer or astronomical facility or project.
+The main factor is that all observatories or projects can freely contribute to 
Gnuastro and all simultaneously benefit from it (since it does not belong to 
any particular one of them), much like how for-profit organizations (for 
example RedHat, or Intel and many others) are major contributors to free and 
open source software for their shared benefit.
+Gnuastro's copyright has been fully awarded to GNU, so it does not belong to 
any particular astronomer or astronomical facility or project.
 
 
 
@@ -1260,9 +1260,9 @@ Gnuastro's copyright has been fully awarded to GNU, so it 
doesn't belong to any
 Some astronomers initially install and use a GNU/Linux operating system 
because their necessary tools can only be installed in this environment.
 However, the transition is not necessarily easy.
To encourage you to invest the patience and time to make this transition, 
and to actually enjoy it, we will first start with a basic introduction to 
GNU/Linux operating systems.
-Afterwards, in @ref{Command-line interface} we'll discuss the wonderful 
benefits of the command-line interface, how it beautifully complements the 
graphic user interface, and why it is worth the (apparently steep) learning 
curve.
+Afterwards, in @ref{Command-line interface} we will discuss the wonderful 
benefits of the command-line interface, how it beautifully complements the 
graphic user interface, and why it is worth the (apparently steep) learning 
curve.
 Finally a complete chapter (@ref{Tutorials}) is devoted to real world 
scenarios of using Gnuastro (on the command-line).
-Therefore if you don't yet feel comfortable with the command-line we strongly 
recommend going through that chapter after finishing this section.
+Therefore, if you do not yet feel comfortable with the command-line, we 
strongly recommend going through that chapter after finishing this section.
 
 You might have already noticed that we are not using the name ``Linux'', but 
``GNU/Linux''.
 Please take the time to have a look at the following essays and FAQs for a 
complete understanding of this very important distinction.
@@ -1303,7 +1303,7 @@ It is GNU Bash that then talks to kernel.
 
To better clarify, let's use this analogy inspired by one of the links 
above@footnote{https://www.gnu.org/gnu/gnu-users-never-heard-of-gnu.html}: 
saying that you are ``running Linux'' is like saying you are ``driving your 
engine''.
 The car's engine is the main source of power in the car, no one doubts that.
-But you don't ``drive'' the engine, you drive the ``car''.
+But you do not ``drive'' the engine, you drive the ``car''.
Without the radiator, battery, transmission, wheels, chassis, seats, 
windshield, etc., the engine alone is useless for transportation!
 
 @cindex Window Subsystem for Linux
@@ -1330,7 +1330,7 @@ Therefore to acknowledge GNU's instrumental role in the 
creation and usage of th
 @cindex CLI: command-line user interface
 One aspect of Gnuastro that might be a little troubling to new GNU/Linux users 
is that (at least for the time being) it only has a command-line user interface 
(CLI).
 This might be contrary to the mostly graphical user interface (GUI) experience 
with proprietary operating systems.
-Since the various actions available aren't always on the screen, the 
command-line interface can be complicated, intimidating, and frustrating for a 
first-time user.
+Since the various actions available are not always on the screen, the 
command-line interface can be complicated, intimidating, and frustrating for a 
first-time user.
 This is understandable and also experienced by anyone who started using the 
computer (from childhood) in a graphical user interface (this includes most of 
Gnuastro's authors).
 Here we hope to convince you of the unique benefits of this interface which 
can greatly enhance your productivity while complementing your GUI experience.
 
@@ -1338,8 +1338,8 @@ Here we hope to convince you of the unique benefits of 
this interface which can
 Through GNOME 3@footnote{@url{http://www.gnome.org/}}, most GNU/Linux based 
operating systems now have an advanced and useful GUI.
 Since the GUI was created long after the command-line, some wrongly consider 
the command line to be obsolete.
 Both interfaces are useful for different tasks.
-For example you can't view an image, video, pdf document or web page on the 
command-line.
-On the other hand you can't reproduce your results easily in the GUI.
+For example you cannot view an image, video, pdf document or web page on the 
command-line.
+On the other hand you cannot reproduce your results easily in the GUI.
Therefore they should not be regarded as rivals but as complementary user 
interfaces; here we will outline how the CLI can be useful in scientific 
programs.
 
 You can think of the GUI as a veneer over the CLI to facilitate a small subset 
of all the possible CLI operations.
@@ -1348,7 +1348,7 @@ So asymptotically (if a good designer can design a GUI 
which is able to show you
 In practice, such graphical designers are very hard to find for every program, 
so the GUI operations are always a subset of the internal CLI commands.
For programs that are only made for the GUI, this results in lots of 
potentially useful operations being left out.
It also makes `interface design' a crucially important part of any 
GUI program.
-Scientists don't usually have enough resources to hire a graphical designer, 
also the complexity of the GUI code is far more than CLI code, which is harmful 
for a scientific software, see @ref{Science and its tools}.
+Scientists do not usually have enough resources to hire a graphical designer; 
also, the complexity of GUI code is far greater than that of CLI code, which is 
harmful for scientific software, see @ref{Science and its tools}.
 
 @cindex GUI: repeating operations
 For programs that have a GUI, one action on the GUI (moving and clicking a 
mouse, or tapping a touchscreen) might be more efficient and easier than its 
CLI counterpart (typing the program name and your desired configuration).
@@ -1358,16 +1358,16 @@ Unless the designers of a particular program decided to 
design such a system for
 @cindex GNU Bash
 @cindex Reproducible results
 @cindex CLI: repeating operations
-On the command-line, you can run any series of actions which can come from 
various CLI capable programs you have decided your self in any possible 
permutation with one command@footnote{By writing a shell script and running it, 
for example see the tutorials in @ref{Tutorials}.}.
+On the command-line, you can run any series of actions, drawn from the 
various CLI-capable programs and chosen by yourself, in any possible 
permutation, with one command@footnote{By writing a shell script and running it, 
for example see the tutorials in @ref{Tutorials}.}.
This allows for much more creativity and exact reproducibility that is not 
possible for a GUI user.
 For technical and scientific operations, where the same operation (using 
various programs) has to be done on a large set of data files, this is 
crucially important.
 It also allows exact reproducibility which is a foundation principle for 
scientific results.
The most common CLI (which is also known as a shell) in GNU/Linux is GNU Bash; 
we strongly encourage you to put aside several hours and go through this 
beautifully explained web page: @url{https://flossmanuals.net/command-line/}.
-You don't need to read or even fully understand the whole thing, only a 
general knowledge of the first few chapters are enough to get you going.
+You do not need to read or even fully understand the whole thing; a 
general knowledge of the first few chapters is enough to get you going.
 
-Since the operations in the GUI are limited and they are visible, reading a 
manual is not that important in the GUI (most programs don't even have any!).
+Since the operations in the GUI are limited and they are visible, reading a 
manual is not that important in the GUI (most programs do not even have any!).
 However, to give you the creative power explained above, with a CLI program, 
it is best if you first read the manual of any program you are using.
-You don't need to memorize any details, only an understanding of the 
generalities is needed.
+You do not need to memorize any details, only an understanding of the 
generalities is needed.
Once you start working, there are easier ways to remember a particular 
option or operation detail, see @ref{Getting help}.
 
 @cindex GNU Emacs
@@ -1423,9 +1423,9 @@ Recently corrected bugs are probably not yet publicly 
released because they are
 If the bug is solved but not yet released and it is an urgent issue for you, 
you can get the version controlled source and compile that, see @ref{Version 
controlled source}.
 
To solve the issue as readily as possible, please follow these guidelines in 
your bug report.
-The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report 
Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How 
To Ask Questions The Smart Way} essays also provide some good generic advice 
for all software (don't contact their authors for Gnuastro's problems).
+The @url{http://www.chiark.greenend.org.uk/~sgtatham/bugs.html, How to Report 
Bugs Effectively} and @url{http://catb.org/~esr/faqs/smart-questions.html, How 
To Ask Questions The Smart Way} essays also provide some good generic advice 
for all software (do not contact their authors for Gnuastro's problems).
 Mastering the art of giving good bug reports (like asking good questions) can 
greatly enhance your experience with any free and open source software.
-So investing the time to read through these essays will greatly reduce your 
frustration after you see something doesn't work the way you feel it is 
supposed to for a large range of software, not just Gnuastro.
+So investing the time to read through these essays will greatly reduce your 
frustration after you see something does not work the way you feel it is 
supposed to for a large range of software, not just Gnuastro.
 
 @table @strong
 
@@ -1509,7 +1509,7 @@ Also, be sure to set the ``Open/Closed'' value to ``Any''.
 If the feature you want to suggest is not already listed in the task manager, 
then follow the steps that are fully described in @ref{Report a bug}.
Please keep in mind that the developers are all busy with their own 
astronomical research, implementing existing ``task''s, and resolving 
bugs.
 Gnuastro is a volunteer effort and none of the developers are paid for their 
hard work.
-So, although we will try our best, please don't expect for your suggested 
feature to be immediately included (for the next release of Gnuastro).
+So, although we will try our best, please do not expect your suggested 
feature to be immediately included (in the next release of Gnuastro).
 
The best person to implement the exciting new feature you have in mind is you, 
since you have the motivation and need.
 In fact, Gnuastro is designed for making it as easy as possible for you to 
hack into it (add new features, change existing ones and so on), see 
@ref{Science and its tools}.
@@ -1554,13 +1554,13 @@ In this book we have the following conventions:
 @item
 All commands that are to be run on the shell (command-line) prompt as the user 
start with a @command{$}.
 In case they must be run as a super-user or system administrator, they will 
start with a single @command{#}.
-If the command is in a separate line and next line @code{is also in the code 
type face}, but doesn't have any of the @command{$} or @command{#} signs, then 
it is the output of the command after it is run.
-As a user, you don't need to type those lines.
+If the command is on a separate line and the next line @code{is also in the code 
type face}, but does not have any of the @command{$} or @command{#} signs, then 
the latter is the output of the command after it is run.
+As a user, you do not need to type those lines.
 A line that starts with @command{##} is just a comment for explaining the 
command to a human reader and must not be typed.
 
 @item
If the command becomes longer than the page width, a @key{\} is inserted in the 
code.
-If you are typing the code by hand on the command-line, you don't need to use 
multiple lines or add the extra space characters, so you can omit them.
+If you are typing the code by hand on the command-line, you do not need to use 
multiple lines or add the extra space characters, so you can omit them.
 If you want to copy and paste these examples (highly discouraged!) then the 
@key{\} should stay.
 
The @key{\} character is a shell escape character, commonly used to make 
characters that have a special meaning for the shell lose that special meaning 
(the shell will not treat them specially if there is a @key{\} behind 
them).
@@ -1588,7 +1588,7 @@ $ astnoisechisel --cite
 $ astmkcatalog --cite
 @end example
 
-Here, we'll acknowledge all the institutions (and their grants) along with the 
people who helped make Gnuastro possible.
+Here, we will acknowledge all the institutions (and their grants) along with 
the people who helped make Gnuastro possible.
 The full list of Gnuastro authors is available at the start of this book and 
the @file{AUTHORS} file in the source code (both are generated automatically 
from the version controlled history).
 The plain text file @file{THANKS}, which is also distributed along with the 
source code, contains the list of people and institutions who played an 
indirect role in Gnuastro (not committed any code in the Gnuastro version 
controlled history).
 
@@ -1762,7 +1762,7 @@ The fourth tutorial (@ref{Sufi simulates a detection}) 
focuses on simulating ast
 The ultimate aim of @ref{General program usage tutorial} is to detect galaxies 
in a deep HST image, measure their positions and brightness and select those 
with the strongest colors.
 In the process, it takes many detours to introduce you to the useful 
capabilities of many of the programs.
 So please be patient in reading it.
-If you don't have much time and can only try one of the tutorials, we 
recommend this one.
+If you do not have much time and can only try one of the tutorials, we 
recommend this one.
 
 @cindex PSF
 @cindex Point spread function
@@ -1812,11 +1812,11 @@ After this tutorial, you can also try the 
@ref{Detecting large extended targets}
 
 During the tutorial, we will take many detours to explain, and practically 
demonstrate, the many capabilities of Gnuastro's programs.
 In the end you will see that the things you learned during this tutorial are 
much more generic than this particular problem and can be used in solving a 
wide variety of problems involving the analysis of data (images or tables).
-So please don't rush, and go through the steps patiently to optimally master 
Gnuastro.
+So please do not rush, and go through the steps patiently to optimally master 
Gnuastro.
 
 @cindex XDF survey
 @cindex eXtreme Deep Field (XDF) survey
-In this tutorial, we'll use the HST@url{https://archive.stsci.edu/prepds/xdf, 
eXtreme Deep Field} dataset.
+In this tutorial, we will use the 
HST @url{https://archive.stsci.edu/prepds/xdf, eXtreme Deep Field} dataset.
 Like almost all astronomical surveys, this dataset is free for download and 
usable by the public.
 You will need the following tools in this tutorial: Gnuastro, SAO DS9 
@footnote{See @ref{SAO DS9}, available at 
@url{http://ds9.si.edu/site/Home.html}}, GNU 
Wget@footnote{@url{https://www.gnu.org/software/wget}}, and AWK (most common 
implementation is GNU AWK@footnote{@url{https://www.gnu.org/software/gawk}}).
 
@@ -1827,7 +1827,7 @@ We are very grateful to the organizers of these workshops 
and the attendees for
 @cartouche
 @noindent
 @strong{Write the example commands manually:} Try to type the example commands 
on your terminal manually and use the history feature of your command-line (by 
pressing the ``up'' button to retrieve previous commands).
-Don't simply copy and paste the commands shown here.
+Do not simply copy and paste the commands shown here.
 This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
@@ -1871,7 +1871,7 @@ $ ast<TAB><TAB>
 @noindent
 Any program that starts with @code{ast} (including all Gnuastro programs) will 
be shown.
 By choosing the subsequent characters of your desired program and pressing 
@key{<TAB><TAB>} again, the list will narrow down and the program name will 
auto-complete once your input characters are unambiguous.
-In short, you often don't need to type the full name of the program you want 
to run.
+In short, you often do not need to type the full name of the program you want 
to run.
 
 @node Accessing documentation, Setup and data download, Calling Gnuastro's 
programs, General program usage tutorial
 @subsection Accessing documentation
@@ -1879,13 +1879,13 @@ In short, you often don't need to type the full name of 
the program you want to
 Gnuastro contains a large number of programs and it is natural to forget the 
details of each program's options or inputs and outputs.
 Therefore, before starting the analysis steps of this tutorial, let's review 
how you can access this book to refresh your memory any time you want, without 
having to take your hands off the keyboard.
 
-When you install Gnuastro, this book is also installed on your system along 
with all the programs and libraries, so you don't need an internet connection 
to access/read it.
+When you install Gnuastro, this book is also installed on your system along 
with all the programs and libraries, so you do not need an internet connection 
to access/read it.
 Also, by accessing this book as described below, you can be sure that it 
corresponds to your installed version of Gnuastro.
 
 @cindex GNU Info
 GNU Info@footnote{GNU Info is already available on almost all Unix-like 
operating systems.} is the program in charge of displaying the manual on the 
command-line (for more, see @ref{Info}).
 To see this whole book on your command-line, please run the following command 
and press subsequent keys.
-Info has its own mini-environment, therefore we'll show the keys that must be 
pressed in the mini-environment after a @code{->} sign.
+Info has its own mini-environment; therefore, we will show the keys that must 
be pressed in the mini-environment after a @code{->} sign.
You can also ignore anything after the @code{#} sign in the middle of a 
line; such text is only there for your information.
 
 @example
@@ -1903,7 +1903,7 @@ Then follow a few more links and go deeper into the book.
 To return to the previous page, press @key{l} (small L).
 If you are searching for a specific phrase in the whole book (for example an 
option name), press @key{s} and type your search phrase and end it with an 
@key{<ENTER>}.
 
-You don't need to start from the top of the manual every time.
+You do not need to start from the top of the manual every time.
 For example, to get to @ref{Invoking astnoisechisel}, run the following 
command.
 In general, all programs have such an ``Invoking ProgramName'' section in this 
book.
 These sections are specifically for the description of inputs, outputs and 
configuration options of each program.
@@ -1913,7 +1913,7 @@ You can access them directly for each program by giving 
its executable name to I
 $ info astnoisechisel
 @end example
 
-The other sections don't have such shortcuts.
+The other sections do not have such shortcuts.
 To directly access them from the command-line, you need to tell Info to look 
into Gnuastro's manual, then look for the specific section (an unambiguous 
title is necessary).
For example, if you only want to review/remember NoiseChisel's @ref{Detection 
options}, just run the following command.
 Note how case is irrelevant for Info when calling a title in this manner.
@@ -1963,7 +1963,7 @@ $ ln -s XDFDIR download
 @end example
 
 @noindent
-Otherwise, when the following images aren't already present on your system, 
you can make a @file{download} directory and download them there.
+Otherwise, if the following images are not already present on your system, 
you can make a @file{download} directory and download them there.
 
 @example
 $ mkdir download
@@ -1976,7 +1976,7 @@ $ cd ..
 @end example
 
 @noindent
-In this tutorial, we'll just use these three filters.
+In this tutorial, we will just use these three filters.
 Later, you may need to download more filters.
 To do that, you can use the shell's @code{for} loop to download them all in 
series (one after the other@footnote{Note that you only have one port to the 
internet, so downloading in parallel will actually be slower than downloading 
in series.}) with one command like the one below for the WFC3 filters.
 Put this command instead of the three @code{wget} commands above.
@@ -1995,7 +1995,7 @@ $ for f in f105w f125w f140w f160w; do \
 First, let's visually inspect the datasets we downloaded in @ref{Setup and 
data download}.
Let's take the F160W image as an example.
 One of the most common programs for viewing FITS images is SAO DS9, which is 
usually called through the @command{ds9} command-line program, like the command 
below.
-If you don't already have DS9 on your computer and the command below fails, 
please see @ref{SAO DS9}.
+If you do not already have DS9 on your computer and the command below fails, 
please see @ref{SAO DS9}.
 
 @example
 $ ds9 download/hlsp_xdf_hst_wfc3ir-60mas_hudf_f160w_v1_sci.fits
@@ -2011,7 +2011,7 @@ $ astscript-fits-view \
 @end example
 
After running this command, you will see that the DS9 window fully covers the 
height of your monitor, shows the whole image, uses a clearer color map, and 
sets many other useful things.
-Infact, you see the DS9 command it used in your terminal@footnote{When 
comparing DS9's command-line options to Gnuastro's, you will notice how SAO DS9 
doesn't follow the GNU style of options where ``long'' and ``short'' options 
are preceded by @option{--} and @option{-} respectively (for example 
@option{--width} and @option{-w}, see @ref{Options}).}.
+In fact, you can see the DS9 command it used in your terminal@footnote{When 
comparing DS9's command-line options to Gnuastro's, you will notice how SAO DS9 
does not follow the GNU style of options where ``long'' and ``short'' options 
are preceded by @option{--} and @option{-} respectively (for example 
@option{--width} and @option{-w}, see @ref{Options}).}.
On GNU/Linux operating systems (like Ubuntu, Fedora, etc.), you can also set 
your graphical user interface to use this script for opening FITS files when 
you click on them.
 For more, see the instructions in the checklist at the start of @ref{Invoking 
astscript-fits-view}.
 
@@ -2021,7 +2021,7 @@ The next thing might be that the dataset actually has two 
``depth''s (see @ref{Q
 Recall that this is a combined/reduced image of many exposures, and the parts 
that have more exposures are deeper.
In particular, the exposure time of the deep inner region is more than 4 times 
the exposure time of the outer (shallower) parts.
 
-To simplify the analysis in this tutorial, we'll only be working on the deep 
field, so let's crop it out of the full dataset.
+To simplify the analysis in this tutorial, we will only be working on the deep 
field, so let's crop it out of the full dataset.
 Fortunately the XDF survey web page (above) contains the vertices of the deep 
flat WFC3-IR 
field@footnote{@url{https://archive.stsci.edu/prepds/xdf/#dataproducts}}.
With Gnuastro's Crop program@footnote{To learn more about the Crop program see 
@ref{Crop}.}, you can use those vertices to cut out this deep region from the 
larger image.
But before that, to keep things organized, let's make a directory called 
@file{flat-ir} and keep the flat (single-depth) regions in that directory (with 
a `@file{xdf-}' prefix for a shorter and easier filename).
@@ -2051,18 +2051,18 @@ Run the command below to have a look at the cropped 
images:
 $ astscript-fits-view flat-ir/xdf-f160w.fits
 @end example
 
-You only see the deep region now, doesn't the noise look much cleaner?
+You only see the deep region now; does the noise not look much cleaner?
 An important result of this crop is that regions with no data now have a NaN 
(Not-a-Number, or a blank value) value.
 Any self-respecting statistical program will ignore NaN values, so they will 
not affect your outputs.
-For example, notice how changing the DS9 color bar will not affect the NaN 
pixels (their colors won't change).
+For example, notice how changing the DS9 color bar will not affect the NaN 
pixels (their colors will not change).
 
 However, do you remember that in the downloaded files, such regions had a 
value of zero?
 That is a big problem!
-Because zero is a number, and is thus meaningful, especially when you later 
want to NoiseChisel to detect@footnote{As you will see below, unlike most other 
detection algorithms, NoiseChisel detects the objects from their faintest 
parts, it doesn't start with their high signal-to-noise ratio peaks.
+Because zero is a number, and is thus meaningful, especially when you later 
want NoiseChisel to detect@footnote{As you will see below, unlike most other 
detection algorithms, NoiseChisel detects the objects from their faintest 
parts; it does not start with their high signal-to-noise ratio peaks.
 Since the Sky is already subtracted in many images and noise fluctuates around 
zero, zero is commonly higher than the initial threshold applied.
Therefore, if zero-valued pixels are not ignored in this image, they will 
become part of the detections!} all the signal from the deep universe in this 
image.
 Generally, when you want to ignore some pixels in a dataset, and avoid 
higher-level ambiguities or complications, it is always best to give them blank 
values (not zero, or some other absurdly large or small number).
-Gnuastro has the Arithmetic program for such cases, and we'll introduce it 
later in this tutorial.
+Gnuastro has the Arithmetic program for such cases, and we will introduce it 
later in this tutorial.
 
In the example above, the polygon vertices are in degrees, but you can also 
replace them with sexagesimal coordinates (for example using 
@code{03h32m44.9794} or @code{03:32:44.9794} instead of @code{53.187414} for 
the first RA, and @code{-27d46m44.9472} or @code{-27:46:44.9472} for the first 
Dec).
 To further simplify things, you can even define your polygon visually as a DS9 
``region'', save it as a ``region file'' and give that file to crop.
@@ -2071,7 +2071,7 @@ But we need to continue, so if you are interested to 
learn more, see @ref{Crop}.
 Before closing this section, let's just take a look at the three cropping 
commands we ran above.
The only thing varying in the three commands is the filter name!
 Note how everything else is the same!
-In such cases, you should generally avoid repeating a command manually, it is 
prone to @emph{many} bugs, and as you see, it is very hard to read (didn't you 
suddenly write a @code{7} as an @code{8}?).
+In such cases, you should generally avoid repeating a command manually; it is 
prone to @emph{many} bugs, and as you see, it is very hard to read (are you 
sure you did not write a @code{7} instead of an @code{8}?).
 
To simplify the command, and allow you to work on more filters, we can use the 
shell's @code{for} loop, as sketched below.
Notice how the places where the filter names (@file{f105w}, @file{f125w} and 
@file{f160w}) were used above have been replaced with @file{$f} (the shell 
variable that @code{for} will update in every loop) below.
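As a hedged sketch of that loop (the @option{--polygon} vertices are the same 
as in the individual commands above; only the first vertex is repeated here, 
the rest stay abbreviated, and the exact option ordering is an assumption):

@example
$ for f in f105w f125w f160w; do \
    astcrop --mode=wcs -h0 --output=flat-ir/xdf-$f.fits \
            --polygon="53.187414,-27.7791520 : ..." \
            download/hlsp_xdf_hst_wfc3ir-60mas_hudf_"$f"_v1_sci.fits; \
  done
@end example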
@@ -2094,7 +2094,7 @@ $ for f in f105w f125w f160w; do \
 @cindex Scales, coordinate
 This is the deepest image we currently have of the sky.
 The first thing that comes to mind may be this: ``How large is this field on 
the sky?''.
-You can get a fast and crude answer with Gnuastro's Fits program using this 
command:
+You can get a fast and crude answer with Gnuastro's Fits program, using this 
command:
 
 @example
 astfits flat-ir/xdf-f160w.fits --skycoverage
@@ -2103,7 +2103,7 @@ astfits flat-ir/xdf-f160w.fits --skycoverage
 It will print the sky coverage in two formats (all numbers are in units of 
degrees for this image): 1) the image's central RA and Dec and full width 
around that center, 2) the range of RA and Dec covered by this image.
 You can use these values in various online query systems.
 You can also use this option to automatically calculate the area covered by 
this image.
-With the @option{--quiet} option, the printed output of @option{--skycoverage} 
will not contain human-readable text, making it easier for further processing:
+With the @option{--quiet} option, the printed output of @option{--skycoverage} 
will not contain human-readable text, making it easier for automatic (computer) 
processing:
 
 @example
 astfits flat-ir/xdf-f160w.fits --skycoverage --quiet
@@ -2111,7 +2111,7 @@ astfits flat-ir/xdf-f160w.fits --skycoverage --quiet
 
 The second row is the coverage range along RA and Dec (compare with the 
outputs before using @option{--quiet}).
We can thus simply subtract the second column from the first, and multiply it 
by the difference of the fourth and third columns, to calculate the image area.
-We'll also multiply each by 60 to have the area in arc-minutes squared.
+We will also multiply each by 60 to have the area in arc-minutes squared.
 
 @example
 astfits flat-ir/xdf-f160w.fits --skycoverage --quiet \
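           | awk '@{print ($2-$1)*60 * ($4-$3)*60@}'
## The 'awk' line above is a sketch of the elided continuation: it
## implements the column arithmetic described in the text (swap the
## subtraction order, or take absolute values, if your column order
## is reversed).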
@@ -2138,28 +2138,28 @@ The @code{CUNIT*} keywords (for ``Coordinate UNIT'') 
have the answer.
In this case, both @code{CUNIT1} and @code{CUNIT2} have a value of @code{deg}, 
so both ``world'' coordinates are in units of degrees.
 We can thus conclude that the value of @code{CDELT*} is in units of 
degrees-per-pixel@footnote{With the FITS @code{CDELT} convention, rotation 
(@code{PC} or @code{CD} keywords) and scales (@code{CDELT}) are separated.
 In the FITS standard the @code{CDELT} keywords are optional.
-When @code{CDELT} keywords aren't present, the @code{PC} matrix is assumed to 
contain @emph{both} the coordinate rotation and scales.
+When @code{CDELT} keywords are not present, the @code{PC} matrix is assumed to 
contain @emph{both} the coordinate rotation and scales.
 Note that not all FITS writers use the @code{CDELT} convention.
 So you might not find the @code{CDELT} keywords in the WCS meta data of some 
FITS files.
-However, all Gnuastro programs (which use the default FITS keyword writing 
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even 
if the input doesn't have it.
-If your dataset doesn't use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example Arithmetic) and the output will have 
the @code{CDELT} keyword.
+However, all Gnuastro programs (which use the default FITS keyword writing 
format of WCSLIB) write their output WCS with the @code{CDELT} convention, even 
if the input does not have it.
+If your dataset does not use the @code{CDELT} convention, you can feed it to 
any (simple) Gnuastro program (for example Arithmetic) and the output will have 
the @code{CDELT} keyword.
 See Section 8 of the 
@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf, FITS 
standard} for more}.
 
-With the commands below, we'll use @code{CDELT} (along with the number of 
non-blank pixels) to find the answer of our initial question: ``how much of the 
sky does this image cover?''.
+With the commands below, we will use @code{CDELT} (along with the number of 
non-blank pixels) to find the answer to our initial question: ``how much of the 
sky does this image cover?''.
 The lines starting with @code{##} are just comments for you to read and 
understand each command.
-Don't type them on the terminal (no problem if you do, they will just not have 
any effect).
+Do not type them on the terminal (no problem if you do, they will just not 
have any effect).
 The commands are intentionally repetitive in some places to better understand 
each step and also to demonstrate the beauty of command-line features like 
history, variables, pipes and loops (which you will commonly use as you become 
more proficient on the command-line).
 
 @cartouche
 @noindent
-@strong{Use shell history:} Don't forget to make effective use of your shell's 
history: you don't have to re-type previous command to add something to them 
(like the examples below).
+@strong{Use shell history:} Do not forget to make effective use of your 
shell's history: you do not have to re-type previous commands to add something 
to them (like the examples below).
 This is especially convenient when you just want to make a small change to 
your previous command.
 Press the ``up'' key on your keyboard (possibly multiple times) to see your 
previous command(s) and modify them accordingly.
 @end cartouche
 
 @cartouche
 @noindent
-@strong{Your locale doesn't use `.' as decimal separator:} on systems that 
don't use an English language environment, dates, numbers and etc can be 
printed in different formats (for example `0.5' can be written as `0,5': with a 
comma).
+@strong{Your locale does not use `.' as decimal separator:} on systems that do 
not use an English language environment, dates, numbers, etc. can be printed 
in different formats (for example `0.5' can be written as `0,5': with a comma).
 With the @code{LC_NUMERIC} line at the start of the script below, we are 
ensuring a unified format in the output of @command{seq}.
 For more, please see @ref{Numeric locale}.
 @end cartouche
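The @code{LC_NUMERIC} line referred to above is presumably a simple setting 
like the one below (a sketch; the @code{C} locale always uses `.' as the 
decimal separator):

@example
$ export LC_NUMERIC=C
@end example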
@@ -2189,10 +2189,10 @@ $ astfits flat-ir/xdf-f160w.fits -h1
 $ astfits flat-ir/xdf-f160w.fits -h1 | grep CDELT
 
 ## Since the resolution of both dimensions is (approximately) equal,
-## we'll only read the value of one (CDELT1) with '--keyvalue'.
+## we will only read the value of one (CDELT1) with '--keyvalue'.
 $ astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1
 
-## We don't need the file name in the output (add '--quiet').
+## We do not need the file name in the output (add '--quiet').
 $ astfits flat-ir/xdf-f160w.fits -h1 --keyvalue=CDELT1 --quiet
 
 ## Save it as the shell variable `r'.
@@ -2210,7 +2210,7 @@ $ echo $n $r | awk '@{print $1 * ($2*60)^2@}'
 The output of the last command (area of this field) is 4.03817 (or 
approximately 4.04) arc-minutes squared.
Just for comparison, this is roughly 175 times smaller than the Moon's average 
angular area (with a diameter of 30 arc-minutes or half a degree).
 
-Some FITS writers don't use the @code{CDELT} convention, making it hard to use 
the steps above.
+Some FITS writers do not use the @code{CDELT} convention, making it hard to 
use the steps above.
 In such cases, you can extract the pixel scale with the @option{--pixelscale} 
option of Gnuastro's Fits program like the command below.
 Similar to the @option{--skycoverage} option above, you can also use the 
@option{--quiet} option to allow easy usage of the values in scripts.
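For example (a sketch of the command referred to above, using only the file 
and option named in the text):

@example
$ astfits flat-ir/xdf-f160w.fits --pixelscale
@end example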
 
@@ -2254,7 +2254,7 @@ $ astcosmiccal -z2 --help
 ## in units of kpc/arc-seconds.
 $ astcosmiccal -z2 --arcsectandist
 
-## Its easier to use the short (single character) version of
+## It is easier to use the short (single character) version of
 ## this option when typing (but this is hard to read, so use
 ## the long version in scripts or notes you plan to archive).
 $ astcosmiccal -z2 -s
@@ -2310,11 +2310,11 @@ The first is the redshift, and the second is the area 
of this image at that reds
 Now, have a look at the first few values.
 At @mymath{z=0.1} and @mymath{z=0.5}, this image covers @mymath{0.05 Mpc^2} 
and @mymath{0.57 Mpc^2} respectively.
 This increase of coverage with redshift is expected because a fixed angle will 
cover a larger tangential area at larger distances.
-However, as you come down the list (to higher redshifts) you will notice that 
this relation doesn't hold!
+However, as you come down the list (to higher redshifts) you will notice that 
this relation does not hold!
 The largest coverage is at @mymath{z=1.6}: at higher redshifts, the area 
decreases, and continues decreasing!!!
 In @mymath{\Lambda{}CDM} cosmology, this happens because of the finite speed 
of light and the expansion of the universe, see 
@url{https://en.wikipedia.org/wiki/Angular_diameter_distance#Angular_diameter_turnover_point,
 the Wikipedia page}.
 
-In case you have TOPCAT, you can visualize this as a plot (if you don't have 
TOPCAT, see @ref{TOPCAT}).
+In case you have TOPCAT, you can visualize this as a plot (if you do not have 
TOPCAT, see @ref{TOPCAT}).
 To do so, first you need to save the output of the loop above into a FITS 
table by piping the output to Gnuastro's Table program and giving an output 
name:
 
 @example
@@ -2432,8 +2432,8 @@ Gnuastro's library and BuildProgram are created to make 
it easy for you to use t
 This gives you a high level of creativity, while also providing efficiency and 
robustness.
Several other complete working examples (involving images and tables) of 
Gnuastro's libraries can be seen in @ref{Library demo programs}.
 
-But for this tutorial, let's stop discussing the libraries at this point in 
and get back to Gnuastro's already built programs which don't need any 
programming.
-But before continuing, let's clean up the files we don't need any more:
+But for this tutorial, let's stop discussing the libraries at this point 
and get back to Gnuastro's already built programs, which do not need any 
programming.
+But before continuing, let's clean up the files we do not need any more:
 
 @example
 $ rm myprogram* z-vs-tandist*
@@ -2443,7 +2443,7 @@ $ rm myprogram* z-vs-tandist*
 @node Option management and configuration files, Warping to a new pixel grid, 
Building custom programs with the library, General program usage tutorial
 @subsection Option management and configuration files
 In the previous section (@ref{Cosmological coverage and visualizing tables}), 
when you ran CosmicCalculator, you only specified the redshift with the 
@option{-z2} option.
-You didn't specifying the cosmological parameters that are necessary for the 
calculations!
+You did not specify the cosmological parameters that are necessary for the 
calculations!
 Parameters like the Hubble constant (@mymath{H_0}), the matter density, etc.
In spite of this, CosmicCalculator did its processing and printed results.
 
@@ -2451,9 +2451,9 @@ None of Gnuastro's programs keep a default value 
internally within their code (t
So where did the cosmological parameters that are necessary for its 
calculations come from?
What were the values of those parameters?
 In short, they come from a configuration file (see @ref{Configuration file 
precedence}), and the final used values can be checked/edited on the 
command-line.
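For example, all Gnuastro programs accept the @option{--printparams} option 
(short format @option{-P}) to print the final values of all their options; a 
quick sketch:

@example
$ astcosmiccal -P
@end example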
-In this section we'll review this important aspect of all the programs in 
Gnuastro.
+In this section we will review this important aspect of all the programs in 
Gnuastro.
 
-Configuration files are an important part of all Gnuastro's programs, 
especially the ones with a large number of options, so its important to 
understand this part well.
+Configuration files are an important part of all Gnuastro's programs, 
especially the ones with a large number of options, so it is important to 
understand this part well.
 Once you get comfortable with configuration files, you can make good use of 
them in all Gnuastro programs (for example, NoiseChisel).
 For example, to do optimal detection on various datasets, you can have 
configuration files for different noise properties.
 The configuration of each program (besides its version) is vital for the 
reproducibility of your results, so it is important to manage them properly.
@@ -2512,7 +2512,7 @@ $ astcosmiccal --H0 70 -z2 --arcsectandist
 @end example
 
 @noindent
-When an option doesn't need a value, and has a short format (like 
@option{--arcsectandist}), you can easily append it @emph{before} other short 
options.
+When an option does not need a value and has a short format (like 
@option{-s}, the short format of @option{--arcsectandist}), you can easily 
append it @emph{before} other short options.
 So the last command above can also be written as:
 
 @example
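## A sketch of the packed form described above ('-s' appended
## before '-z2'):
$ astcosmiccal --H0 70 -sz2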
@@ -2530,7 +2530,7 @@ But having to type these extra options every time you run 
CosmicCalculator will
 Therefore in Gnuastro, you can put all the options and their values in a 
``Configuration file'' and tell the programs to read the option values from 
there.
 
 Let's create a configuration file...
-With your favorite text editor, make a file named @file{my-cosmology.conf} (or 
@file{my-cosmology.txt}, the suffix doesn't matter for Gnuastro, but a more 
descriptive suffix like @file{.conf} is recommended for humans reading your 
code and seeing your files: this includes you, looking into your own project, 
in a couple of months that you have forgot the details!).
+With your favorite text editor, make a file named @file{my-cosmology.conf} (or 
@file{my-cosmology.txt}; the suffix does not matter for Gnuastro, but a more 
descriptive suffix like @file{.conf} is recommended for humans reading your 
code and seeing your files: this includes you, looking into your own project a 
couple of months later, when you have forgotten the details!).
Then put the following lines inside the plain-text file.
One space between the option name and value is enough; the values are simply 
placed under each other to help readability.
 Also note that you should only use @emph{long option names} in configuration 
files.
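As an illustration (a sketch only: the option names are CosmicCalculator's 
long names, but the values here are placeholders, not the elided originals), 
the file could contain something like:

@example
H0       70
olambda  0.7
omatter  0.3
@end example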
@@ -2549,7 +2549,7 @@ Do you see how the output of the following command 
corresponds to the option val
 $ astcosmiccal --config=my-cosmology.conf -z2
 @end example
 
-But still, having to type @option{--config=my-cosmology.conf} every time is 
annoying, isn't it?
+But still, having to type @option{--config=my-cosmology.conf} every time is 
annoying, is it not?
 If you need this cosmology every time you are working in a specific directory, 
you can use Gnuastro's default configuration file names and avoid having to 
type it manually.
 
 The default configuration files (that are checked if they exist) must be 
placed in the hidden @file{.gnuastro} sub-directory (in the same directory you 
are running the program).
@@ -2557,8 +2557,8 @@ Their file name (within @file{.gnuastro}) must also be 
the same as the program's
 So in the case of CosmicCalculator, the default configuration file in a given 
directory is @file{.gnuastro/astcosmiccal.conf}.
 
 Let's do this.
-We'll first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
-Finally, we'll copy the custom configuration file there:
+We will first make a directory for our custom cosmology, then build a 
@file{.gnuastro} within it.
+Finally, we will copy the custom configuration file there:
 
 @example
 $ mkdir my-cosmology
@@ -2594,8 +2594,8 @@ Only the directory and filename differ from the above, 
the rest of the process i
 Finally, there are also system-wide configuration files that can be used to 
define the option values for all users on a system.
 See @ref{Configuration file precedence} for a more detailed discussion.
 
-We'll stop the discussion on configuration files here, but you can always read 
about them in @ref{Configuration files}.
-Before continuing the tutorial, let's delete the two extra directories that we 
don't need any more:
+We will stop the discussion on configuration files here, but you can always 
read about them in @ref{Configuration files}.
+Before continuing the tutorial, let's delete the two extra directories that we 
do not need any more:
 
 @example
 $ rm -rf my-cosmology*
@@ -2607,7 +2607,7 @@ $ rm -rf my-cosmology*
 We are now ready to start processing the downloaded images.
 The XDF datasets we are using here are already aligned to the same pixel grid.
 However, warping to a different/matched pixel grid is commonly needed before 
higher-level analysis when you are using datasets from different instruments.
-So let's have a look at Gnuastro's features warping features here.
+So let's have a look at Gnuastro's warping features here.
 
 Gnuastro's Warp program should be used for warping the pixel-grid (see 
@ref{Warp}).
 For example, try rotating one of the images by 20 degrees:
@@ -2626,7 +2626,7 @@ $ astwarp xdf-f160w_rotated.fits --align
 @end example
 
 Warp can generally be used for many kinds of pixel grid manipulation 
(warping), not just rotations.
-For example the outputs of the commands below will respectively have larger 
pixels (new resolution being one quarter the original resolution), get shifted 
by 2.8 (by sub-pixel), get a shear of 2, and be tilted (projected).
+For example, the outputs of the commands below will respectively have larger 
pixels (the new resolution being one quarter of the original), be shifted by 
2.8 pixels (a sub-pixel shift), have a shear of 2, and be tilted (projected).
Run each of them and open the output file to see the effect; they will come in 
handy for you in the future.
 
 @example
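## A sketch of the four commands described in the text (assuming
## Warp's standard option names; the '--project' values are only
## illustrative):
$ astwarp flat-ir/xdf-f160w.fits --scale=0.25
$ astwarp flat-ir/xdf-f160w.fits --translate=2.8
$ astwarp flat-ir/xdf-f160w.fits --shear=2
$ astwarp flat-ir/xdf-f160w.fits --project=0.001,0.0005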
@@ -2645,14 +2645,14 @@ $ astwarp flat-ir/xdf-f160w.fits --rotate=20 
--scale=0.25
 @end example
 
 If you have multiple warps, do them all in one command.
-Don't warp them in separate commands because the correlated noise will become 
too strong.
+Do not warp them in separate commands, because the correlated noise will 
become too strong.
 As you see in the matrix that is printed when you run Warp, it merges all the 
warps into a single warping matrix (see @ref{Merging multiple warpings}) and 
simply applies that (mixes the pixel values) just once.
 However, if you run Warp multiple times, the pixels will be mixed multiple 
times, creating a strong artificial blur/smoothing, or stronger correlated 
noise.
 
Recall that the merging of multiple warps is done through matrix 
multiplication; therefore, the order of the separate operations matters.
-At a lower level, through Warp's @option{--matrix} option, you can directly 
request your desired final warp and don't have to break it up into different 
warps like above (see @ref{Invoking astwarp}).
+At a lower level, through Warp's @option{--matrix} option, you can directly 
request your desired final warp and do not have to break it up into different 
warps like above (see @ref{Invoking astwarp}).
 
-Fortunately these datasets are already aligned to the same pixel grid, so you 
don't actually need the files that were just generated.You can safely delete 
them all with the following command.
+Fortunately these datasets are already aligned to the same pixel grid, so you 
do not actually need the files that were just generated. You can safely delete 
them all with the following command.
 Here, you see why we put the processed outputs that we need later into a 
separate directory.
 In this way, the top directory can be used for temporary files for testing 
that you can simply delete with a generic command like below.
 
@@ -2691,7 +2691,7 @@ Therefore, Let's first take a closer look at the 
@code{NOISECHISEL-CONFIG} exten
 If you specify a special header in the FITS file, Gnuastro's Fits program will 
print the header keywords (metadata) of that extension.
 You can either specify the HDU/extension counter (starting from 0), or name.
 Therefore, the two commands below are identical for this file.
-We are usually tempted to use the first (shorter format), but when putting 
your commands into a script, please use the second format which is more 
human-friendly and understandable for readers of your code who may not know 
what is in the 0-th extension (this includes your self in a few months!):
+We are usually tempted to use the first (shorter format), but when putting 
your commands into a script, please use the second format which is more 
human-friendly and understandable for readers of your code who may not know 
what is in the 0-th extension (this includes yourself in a few months!):
 
 @example
 $ astfits xdf-f160w_detected.fits -h0
@@ -2727,7 +2727,7 @@ In the output of the above command, you see 
@code{HIERARCH} at the start of the
 According to the FITS standard, @code{HIERARCH} is placed at the start of all 
keywords that have a name that is more than 8 characters long.
 Both the all-caps and the @code{HIERARCH} keyword can be annoying when you 
want to read/check the value.
 Therefore, the best solution is to use the @option{--keyvalue} option of 
Gnuastro's @command{astfits} program as shown below.
-With it, you don't have to worry about @code{HIERARCH} or the case of the name 
(FITS keyword names are not case-sensitive).
+With it, you do not have to worry about @code{HIERARCH} or the case of the 
name (FITS keyword names are not case-sensitive).
 
 @example
 $ astfits xdf-f160w_detected.fits -h0 --keyvalue=detgrowquant -q
@@ -2748,7 +2748,7 @@ A ``cube'' window opens along with DS9's main window.
 The buttons and horizontal scroll bar in this small new window can be used to 
navigate between the extensions.
 In this mode, all DS9's settings (for example zoom or color-bar) will be 
identical between the extensions.
 Try zooming into one part and flipping through the extensions to see how the 
galaxies were detected along with the Sky and Sky standard deviation values for 
that region.
-Just have in mind that NoiseChisel's job is @emph{only} detection (separating 
signal from noise), We'll do segmentation on this result later to find the 
individual galaxies/peaks over the detected pixels.
+Just keep in mind that NoiseChisel's job is @emph{only} detection (separating 
signal from noise); we will do segmentation on this result later to find the 
individual galaxies/peaks over the detected pixels.
 
 The second extension of NoiseChisel's output (numbered 1, named 
@code{INPUT-NO-SKY}) is the Sky-subtracted input that you provided.
 The third (@code{DETECTIONS}) is NoiseChisel's main output which is a binary 
image with only two possible values for all pixels: 0 for noise and 1 for 
signal.
@@ -2779,7 +2779,7 @@ In it, all the pixels in the @code{INPUT-NO-SKY} 
extension that are flagged 1 in
 
 Since the various extensions are in the same file, for each dataset we need 
the file and extension name.
 To make the command easier to read/write/understand, let's use shell 
variables: `@code{in}' will be used for the Sky-subtracted input image and 
`@code{det}' will be used for the detection map.
-Recall that a shell variable's value can be retrieved by adding a @code{$} 
before its name, also note that the double quotations are necessary when we 
have white-space characters in a variable name (like this case).
+Recall that a shell variable's value can be retrieved by adding a @code{$} 
before its name; also note that the double quotations are necessary when we 
have white-space characters in a variable value (as in this case).
 
 @example
 $ in="xdf-f160w_detected.fits -hINPUT-NO-SKY"
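## A sketch of the elided companion definition (the extension name
## comes from the text above):
$ det="xdf-f160w_detected.fits -hDETECTIONS"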
@@ -2816,9 +2816,9 @@ In common surveys the number of exposures is usually 10 
or less.
 See Figure 2 of @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]} and 
the discussion on @option{--detgrowquant} there for more on how NoiseChisel 
``grow''s the detected objects and the patterns caused by correlated noise.
 
 Let's tweak NoiseChisel's configuration a little to get a better result on 
this dataset.
-Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+Do not forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
 A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
-So don't hesitate in playing with the default configuration and reviewing the 
manual when you have a new dataset (from a new instrument) in front of you.
+So do not hesitate to play with the default configuration and to review the 
manual when you have a new dataset (from a new instrument) in front of you.
 Robust data analysis is an art; therefore a good scientist must first be a 
good artist.
 Once you have found a good configuration for that particular noise pattern 
(instrument), you can safely use it for all new data that have a similar noise 
pattern.
 
@@ -2837,7 +2837,7 @@ $ astnoisechisel flat-ir/xdf-f160w.fits --checkdetection
 @end example
 
 The check images/tables are also multi-extension FITS files.
-As you saw from the command above, when check datasets are requested, 
NoiseChisel won't go to the end.
+As you saw from the command above, when check datasets are requested, 
NoiseChisel will not go to the end.
 It will abort as soon as all the extensions of the check image are ready.
 Please list the extensions of the output with @command{astfits} and then 
open it with @command{ds9}, as we did above.
 If you have read the paper, you will see why there are so many extensions in 
the check image.
@@ -2868,7 +2868,7 @@ $ astmkprof --kernel=gaussian,1.5,5 --oversample=1
 
 @noindent
 Please open the output @file{kernel.fits} and have a look (it is very small 
and sharp).
-We can now tell NoiseChisel to use this instead of the default kernel with the 
following command (we'll keep the @option{--checkdetection} to continue 
checking the detection steps)
+We can now tell NoiseChisel to use this instead of the default kernel with the 
following command (we will keep the @option{--checkdetection} to continue 
checking the detection steps):
 
 @example
 $ astnoisechisel flat-ir/xdf-f160w.fits --kernel=kernel.fits  \
@@ -2882,7 +2882,7 @@ This shows we are making progress :-).
 Looking at the new @code{OPENED_AND_LABELED} extension, we see that the thin 
connections between smaller peaks have now significantly decreased.
 Going two extensions/steps ahead (in the first @code{HOLES-FILLED}), you can 
see that during the process of finding false pseudo-detections, too many holes 
have been filled: do you see how many of the brighter galaxies are 
connected? At this stage all holes are filled, irrespective of their size.
 
-Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}), you can 
see that there aren't too many pseudo-detections because of all those extended 
filled holes.
+Try looking two extensions ahead (in the first @code{PSEUDOS-FOR-SN}); you can 
see that there are not too many pseudo-detections because of all those extended 
filled holes.
 If you look closely, you can see the number of pseudo-detections in the 
printed outputs of NoiseChisel (around 6400).
 This is another side-effect of correlated noise.
 To address it, we should slightly increase the pseudo-detection threshold 
(before changing @option{--dthresh}, run with @option{-P} to see the default 
value):
@@ -2897,7 +2897,7 @@ Open the check image now and have a look, you can see how 
the pseudo-detections
 
 @cartouche
 @noindent
-@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on 
datasets with a new noise-pattern (for example going to a Radio astronomy 
image, or a shallow ground-based image), play with @code{--dthresh} until you 
get a maximal number of pseudo-detections: the total number of 
pseudo-detections is printed on the command-line when you run NoiseChisel, you 
don't even need to open a FITS viewer.
+@strong{Maximize the number of pseudo-detections:} When using NoiseChisel on 
datasets with a new noise-pattern (for example going to a radio astronomy 
image, or a shallow ground-based image), play with @code{--dthresh} until you 
get a maximal number of pseudo-detections: the total number of 
pseudo-detections is printed on the command-line when you run NoiseChisel; you 
do not even need to open a FITS viewer.
 
 In this particular case, try @option{--dthresh=0.2} and you will see that the 
total printed number decreases to around 6700 (recall that with 
@option{--dthresh=0.1}, it was roughly 7100).
 So for this type of very deep HST images, we should set @option{--dthresh=0.1}.
@@ -2913,7 +2913,7 @@ $ astnoisechisel flat-ir/xdf-f160w.fits 
--kernel=kernel.fits  \
 @end example
 
 The output (@file{xdf-f160w_detsn.fits}) contains two extensions for the 
pseudo-detections, with two-column tables over the undetected regions 
(@code{SKY_PSEUDODET_SN}) and over the detections (@code{DET_PSEUDODET_SN}).
-With the first command below you can see the HDUs of this file, and with the 
second you can see the information of the table in the first HDU (which is the 
default when you don't use @option{--hdu}):
+With the first command below you can see the HDUs of this file, and with the 
second you can see the information of the table in the first HDU (which is the 
default when you do not use @option{--hdu}):
 
 @example
 $ astfits xdf-f160w_detsn.fits
@@ -2970,8 +2970,8 @@ $ astarithmetic $in $det nan where --output=mask-det.fits
 Overall it seems good, but if you play a little with the color-bar and look 
closer in the noise, you'll see a few very sharp, but faint, objects that have 
not been detected.
 For example the object around pixel (456, 1662).
 Despite its high valued pixels, this object was lost because erosion ignores 
the precise pixel values.
-Loosing small/sharp objects like this only happens for under-sampled datasets 
like HST (where the pixel size is larger than the point spread function FWHM).
-So this won't happen on ground-based images.
+Losing small/sharp objects like this only happens for under-sampled datasets 
like HST (where the pixel size is larger than the point spread function FWHM).
+So this will not happen on ground-based images.
 
 To address this problem of sharp objects, we can use NoiseChisel's 
@option{--noerodequant} option.
 All pixels above this quantile will not be eroded, thus allowing us to 
preserve small/sharp objects (that cover a small area, but have a lot of signal 
in them).
@@ -2983,15 +2983,15 @@ $ astnoisechisel flat-ir/xdf-f160w.fits 
--kernel=kernel.fits     \
 @end example
 
 This seems to be fine and the object above is now detected.
-We'll stop the configuration here, but please feel free to keep looking into 
the data to see if you can improve it even more.
+We will stop the configuration of NoiseChisel here, but please feel free to 
keep looking into the data to see if you can improve it even more.
 
-Once you have found the proper customization for the type of images you will 
be using you don't need to change them any more.
+Once you have found the proper customization for the type of images you will 
be using, you do not need to change them any more.
 The same configuration can be used for any dataset that has been similarly 
produced (and has a similar noise pattern).
 But entering all these options on every call to NoiseChisel is annoying and 
prone to bugs (mistakenly typing the wrong value for example).
-To simply things, we'll make a configuration file in a visible @file{config} 
directory.
-Then we'll define the hidden @file{.gnuastro} directory (that all Gnuastro's 
programs will look into for configuration files) as a symbolic link to the 
@file{config} directory.
-Finally, we'll write the finalized values of the options into NoiseChisel's 
standard configuration file within that directory.
-We'll also put the kernel in a separate directory to keep the top directory 
clean of any files we later need.
+To simplify things, we will make a configuration file in a visible 
@file{config} directory.
+Then we will define the hidden @file{.gnuastro} directory (that all Gnuastro's 
programs will look into for configuration files) as a symbolic link to the 
@file{config} directory.
+Finally, we will write the finalized values of the options into NoiseChisel's 
standard configuration file within that directory.
+We will also put the kernel in a separate directory to keep the top directory 
clean of any files we later need.
 
 @example
 $ mkdir kernel config
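## A sketch of the rest of this setup, as described above (the
## option values are the ones found in the previous steps):
$ mv kernel.fits kernel/
$ ln -s config .gnuastro
$ echo "kernel kernel/kernel.fits" > config/astnoisechisel.conf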
@@ -3004,7 +3004,7 @@ $ echo "snquant      0.95"             >> 
config/astnoisechisel.conf
 @end example
 
 @noindent
-We are now ready to finally run NoiseChisel on the three filters and keep the 
output in a dedicated directory (which we'll call @file{nc} for simplicity).
+We are now ready to finally run NoiseChisel on the three filters and keep the 
output in a dedicated directory (which we will call @file{nc} for simplicity).
 @example
 $ rm *.fits
 $ mkdir nc
@@ -3021,16 +3021,16 @@ As we showed before (in @ref{NoiseChisel and 
Multiextension FITS files}), NoiseC
 As the input datasets get larger this output can become hard to manage and 
waste a lot of storage space.
 Fortunately there is a solution to this problem (which is also useful for 
Segment's outputs).
 
-In this small section we'll take a short detour to show this feature.
+In this small section we will take a short detour to show this feature.
 Please note that the outputs generated here are not needed for the rest of the 
tutorial.
-But first, let's have a look at the contents/HDUs and volume of NoiseChisel's 
output from @ref{NoiseChisel optimization for detection} (fast answer, its 
larger than 100 mega-bytes):
+But first, let's have a look at the contents/HDUs and volume of NoiseChisel's 
output from @ref{NoiseChisel optimization for detection} (fast answer: it is 
larger than 100 mega-bytes):
 
 @example
 $ astfits nc/xdf-f160w.fits
 $ ls -lh nc/xdf-f160w.fits
 @end example
 
-Two options can drastically decrease NoiseChisel's output file size: 1) With 
the @option{--rawoutput} option, NoiseChisel won't create a Sky-subtracted 
input.
+Two options can drastically decrease NoiseChisel's output file size: 1) With 
the @option{--rawoutput} option, NoiseChisel will not create a Sky-subtracted 
output.
 After all, it is redundant: you can always generate it by subtracting the 
@code{SKY} extension from the input image (which you have in your database) 
using the Arithmetic program.
 2) With the @option{--oneelempertile} option, you can tell NoiseChisel to 
store its Sky and Sky standard deviation results with one pixel per tile 
(instead of many pixels per tile).
 So let's run NoiseChisel with these options, then have another look at the 
HDUs and the over-all file size:
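A sketch of that command (the output file name follows the listing below; the 
two options are the ones just described):

@example
$ astnoisechisel flat-ir/xdf-f160w.fits --oneelempertile --rawoutput \
                 --output=nc-for-storage.fits
$ astfits nc-for-storage.fits
$ ls -lh nc-for-storage.fits
@end example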
@@ -3045,10 +3045,10 @@ $ ls -lh nc-for-storage.fits
 @noindent
 See how @file{nc-for-storage.fits} has four HDUs, while 
@file{nc/xdf-f160w.fits} had five HDUs?
 As explained above, the missing extension is @code{INPUT-NO-SKY}.
-Also, look at the sizes of the @code{SKY} and @code{SKY_STD} HDUs, unlike 
before, they aren't the same size as @code{DETECTIONS}, they only have one 
pixel for each tile (group of pixels in raw input).
+Also, look at the sizes of the @code{SKY} and @code{SKY_STD} HDUs: unlike 
before, they are not the same size as @code{DETECTIONS}; they only have one 
pixel for each tile (group of pixels in raw input).
 Finally, you see that @file{nc-for-storage.fits} is just under 8 mega-bytes 
(while @file{nc/xdf-f160w.fits} was 100 mega-bytes)!
 
-But were are not finished!
+But we are not yet finished!
 You can even be more efficient in storing, archiving or transferring 
NoiseChisel's output by compressing this file.
 Try the command below to see how NoiseChisel's output has now shrunk to about 
250 kilo-bytes while keeping all the necessary information as in the original 
100 mega-byte output.
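A sketch of that compression step (any general-purpose compression program 
works; Gzip is the most common):

@example
$ gzip --best nc-for-storage.fits
$ ls -lh nc-for-storage.fits.gz
@end example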
 
@@ -3076,9 +3076,9 @@ rm nc-for-storage.fits.gz
 @node Segmentation and making a catalog, Measuring the dataset limits, 
NoiseChisel optimization for storage, General program usage tutorial
 @subsection Segmentation and making a catalog
 The main output of NoiseChisel is the binary detection map (@code{DETECTIONS} 
extension, see @ref{NoiseChisel optimization for detection}).
-which only has two values of 1 or 0.
+It only has two values: 1 or 0.
 This is useful when studying the noise or background properties, but hardly of 
any use when you actually want to study the targets/galaxies in the image, 
especially in such a deep field where almost everything is connected.
-To find the galaxies over the detections, we'll use Gnuastro's @ref{Segment} 
program:
+To find the galaxies over the detections, we will use Gnuastro's @ref{Segment} 
program:
 
 @example
 $ mkdir seg
@@ -3090,7 +3090,7 @@ $ astsegment nc/xdf-f105w.fits -oseg/xdf-f105w.fits
 Segment's operation is very much like NoiseChisel (in fact, prior to version 
0.6, it was part of NoiseChisel).
 For example the output is a multi-extension FITS file (previously discussed in 
@ref{NoiseChisel and Multiextension FITS files}), it has check images and uses 
the undetected regions as a reference (previously discussed in @ref{NoiseChisel 
optimization for detection}).
 Please have a look at Segment's multi-extension output to get a good feeling 
of what it has done.
-Don't forget to flip through the extensions in the ``Cube'' window.
+Do not forget to flip through the extensions in the ``Cube'' window.
 
 @example
 $ astscript-fits-view seg/xdf-f160w.fits
@@ -3103,7 +3103,7 @@ In the @code{OBJECTS} extension, we see that the large 
detections of NoiseChisel
 Play with the color-bar and hover your mouse over the various detections to see 
their different labels.
 
 The clumps are not affected by the hard-to-deblend and low signal-to-noise 
diffuse regions, so they are more robust for calculating the colors (compared 
to objects).
-From this step onward, we'll continue with clumps.
+From this step onward, we will continue with clumps.
 
 Having localized the regions of interest in the dataset, we are ready to do 
measurements on them with @ref{MakeCatalog}.
 MakeCatalog is specialized and optimized for doing measurements over labeled 
regions of an image.
@@ -3116,7 +3116,7 @@ $ astmkcatalog --help
 @end example
 
 So let's select the properties we want to measure in this tutorial.
-First of all, we need to know which measurement belongs to which object or 
clump, so we'll start with the @option{--ids} (read as: IDs@footnote{This 
option is plural because we need two ID columns for identifying ``clumps'' in 
the clumps catalog/table: the first column will be the ID of the host 
``object'', and the second one will be the ID of the clump within that object. 
In the ``objects'' catalog/table, only a single column will be returned for 
this option.}).
+First of all, we need to know which measurement belongs to which object or 
clump, so we will start with the @option{--ids} (read as: IDs@footnote{This 
option is plural because we need two ID columns for identifying ``clumps'' in 
the clumps catalog/table: the first column will be the ID of the host 
``object'', and the second one will be the ID of the clump within that object. 
In the ``objects'' catalog/table, only a single column will be returned for 
this option.}).
 We also want to measure (in this order) the Right Ascension (with 
@option{--ra}), Declination (@option{--dec}), magnitude (@option{--magnitude}), 
and signal-to-noise ratio (@option{--sn}) of the objects and clumps.
 Furthermore, as mentioned above, we also want measurements on clumps, so we 
also need to call @option{--clumpscat}.
 The following command will make these measurements on Segment's F160W output 
and write them in a catalog for each object and clump in a FITS table.
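A sketch of that command (the output directory name follows the later steps; 
the real command may also set the filter's zero point with 
@option{--zeropoint}):

@example
$ mkdir cat
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --clumpscat --output=cat/xdf-f160w.fits
@end example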
@@ -3143,7 +3143,7 @@ $ astmkcatalog seg/xdf-f105w.fits --ids --ra --dec 
--magnitude --sn \
 However, the galaxy properties might differ between the filters (which is the 
whole purpose behind observing in different filters!).
 Also, the noise properties and depth of the datasets differ.
 You can see the effect of these factors in the resulting clump catalogs, with 
Gnuastro's Table program.
-We'll go deep into working with tables in the next section, but in summary: 
the @option{-i} option will print information about the columns and number of 
rows.
+We will go deep into working with tables in the next section, but in summary: 
the @option{-i} option will print information about the columns and number of 
rows.
 To see the column values, just remove the @option{-i} option.
 In the output of each command below, look at the @code{Number of rows:}, and 
note that they are different.
 
@@ -3155,11 +3155,11 @@ $ asttable cat/xdf-f160w.fits -hCLUMPS -i
 
 Matching the catalogs is possible (for example with @ref{Match}).
 However, the measurements of each column are also done on different pixels: 
the clump labels can/will differ from one filter to another for one object.
-Please open them and focus on one object to see for your self.
+Please open them and focus on one object to see for yourself.
 This can bias the result if you match catalogs.
 
 An accurate color calculation can only be done when magnitudes are measured 
from the same pixels on all images and this can be done easily with MakeCatalog.
-In fact this is one of the reasons that NoiseChisel or Segment don't generate 
a catalog like most other detection/segmentation software.
+In fact this is one of the reasons that NoiseChisel or Segment do not generate 
a catalog like most other detection/segmentation software.
 This gives you the freedom of selecting the pixels for measurement in any way 
you like (from other filters, other software, manually, etc.).
 Fortunately in these images, the Point spread function (PSF) is very similar, 
allowing us to use a single labeled image output for all filters@footnote{When 
the PSFs between two images differ largely, you would have to PSF-match the 
images before using the same pixels for measurements.}.
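A sketch of such a measurement with MakeCatalog's @option{--valuesfile} option 
(the labels come from the F160W Segment output, while the pixel values are 
taken from the F105W image; the F105W zero point value here is an assumption, 
use the one for your own data):

@example
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --valuesfile=nc/xdf-f105w.fits --zeropoint=26.27 \
               --clumpscat --output=cat/xdf-f105w-on-f160w-lab.fits
@end example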
 
@@ -3262,14 +3262,14 @@ Generally, they look very small with different levels 
of diffuse-ness!
 Those that are sharper make more visual sense (to be @mymath{5\sigma} 
detections), but the more diffuse ones extend over a larger area.
 Furthermore, the noise is measured on individual pixels.
 However, during the reduction many exposures are co-added and stacked, mixing 
the pixels like a small convolution (creating ``correlated noise'').
-Therefore you clearly see two main issues with the detection limit as defined 
above: it depends on the morphology, and it doesn't take into account the 
correlated noise.
+Therefore you clearly see two main issues with the detection limit as defined 
above: it depends on the morphology, and it does not take into account the 
correlated noise.
 
 @cindex Upper-limit
 A more realistic way to estimate the significance of the detection is to take 
its footprint, randomly place it in thousands of undetected regions of the 
image and use that distribution as a reference.
 This is technically known as upper-limit measurements.
 For a full discussion, see @ref{Upper limit magnitude of each detection}.
 
-Since its for each separate object, the upper-limit measurements should be 
requested as extra columns in MakeCatalog's output.
+Since it is for each separate object, the upper-limit measurements should be 
requested as extra columns in MakeCatalog's output.
 For example with the command below, let's generate a new catalog of the F160W 
filter, but with two extra columns compared to the one in @file{cat/}: the 
upper-limit magnitude and the upper-limit multiple of sigma.
 
 @example
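## A sketch: the upper-limit columns are requested with the options
## below (names as in MakeCatalog's documentation; other details of
## the real command may differ).
$ astmkcatalog seg/xdf-f160w.fits --ids --ra --dec --magnitude --sn \
               --clumpscat --upperlimitmag --upperlimitsigma \
               --output=xdf-f160w.fits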
@@ -3315,7 +3315,7 @@ $ asttable xdf-f160w.fits -hclumps -cra,dec 
--noblankend=3 \
 @end example
 
 You see that they are all extremely faint and diffuse/small peaks.
-Therefore, if an object's magnitude is fainter than its upper-limit magnitude, 
you shouldn't use the magnitude: it is not accurate!
+Therefore, if an object's magnitude is fainter than its upper-limit magnitude, 
you should not use the magnitude: it is not accurate!
 You should use the upper-limit magnitude instead (with an arrow in your plots 
to mark which ones are upper-limits).
 
 But the main point (in relation to the magnitude limit) with the upper-limit 
is the @code{UPPERLIMIT_SIGMA} column.
@@ -3344,9 +3344,9 @@ For example in the Legacy 
Survey@footnote{@url{https://www.legacysurvey.org/dr9/
 
 As mentioned above, an important caveat in this simple calculation is that we 
should only be looking at point-like objects, not simply everything.
 This is because the shape or radial slope of the profile has an important 
effect on this measurement: at the same total magnitude, a sharper object will 
have a higher S/N.
-To be more precise, we should first perform star-galaxy separation, then do 
this only the objects classified as stars.
+To be more precise, we should first perform star-galaxy separation, then do 
this only for the objects that are classified as stars.
 A crude, first-order, method is to use the @option{--axisratio} option so 
MakeCatalog also measures the axis ratio, then call Table with 
@option{--range=upperlimit_sigma,,4.8:5.2} and 
@option{--range=axis_ratio,0.95:1} (in one command).
-Please do this for your self as an exercise to see the difference with the 
result above.
+Please do this for yourself as an exercise to see the difference with the 
result above.
 
 @noindent
 Before continuing, let's remove this temporarily produced catalog:
@@ -3442,7 +3442,7 @@ $ astfits  cat/xdf-f160w.fits              # Extension 
information
 
 Let's inspect the table in each extension with Gnuastro's Table program (see 
@ref{Table}).
 We should have used @option{-hOBJECTS} and @option{-hCLUMPS} instead of 
@option{-h1} and @option{-h2} respectively.
-The numbers are just used here to convey that both names or numbers are 
possible, in the next commands, we'll just use names.
+The numbers are just used here to convey that both names or numbers are 
possible, in the next commands, we will just use names.
 
 @example
 $ asttable cat/xdf-f160w.fits -h1 --info   # Objects catalog info.
@@ -3470,11 +3470,11 @@ Similar to HDUs, when the columns have names, always 
use the name: it is so comm
 Using column names instead of numbers has many advantages:
 @enumerate
 @item
-You don't have to worry about the order of columns in the table.
+You do not have to worry about the order of columns in the table.
 @item
 It acts as documentation in the script.
 @item
-Column meta-data (including a name) aren't just limited to FITS tables and can 
also be used in plain text tables, see @ref{Gnuastro text table format}.
+Column meta-data (including a name) are not just limited to FITS tables and 
can also be used in plain text tables, see @ref{Gnuastro text table format}.
 @end enumerate
 
 @noindent
@@ -3494,9 +3494,9 @@ Since @file{cat/xdf-f160w.fits} and 
@file{cat/xdf-f105w-on-f160w-lab.fits} have
 
 We can merge columns with the @option{--catcolumnfile} option like below.
 You give this option a file name (which is assumed to be a table that has the 
same number of rows as the main input), and all the table's columns will be 
concatenated/appended to the main table.
-So please try it out with the commands below.
-We'll first look at the metadata of the first table (only the @code{CLUMPS} 
extension).
-With the second command, we'll concatenate the two tables and write them in, 
@file{two-in-one.fits} and finally, we'll check the new catalog's metadata.
+Now, try it out with the commands below.
+We will first look at the metadata of the first table (only the @code{CLUMPS} 
extension).
+With the second command, we will concatenate the two tables and write them 
into @file{two-in-one.fits}; finally, we will check the new catalog's metadata.
 
 @example
 $ asttable cat/xdf-f160w.fits -i -hCLUMPS
@@ -3507,7 +3507,7 @@ $ asttable two-in-one.fits -i
 @end example
 
 By comparing the two metadata, we see that both tables have the same number of 
rows.
-But what might have attracted your attention more, is that 
@file{two-in-one.fits} has double the number of columns (as expected, after 
all, you merged both tables into one file, and didn't ask for any specific 
column).
+But what might have attracted your attention more is that 
@file{two-in-one.fits} has double the number of columns (as expected; after 
all, you merged both tables into one file, and did not ask for any specific 
column).
 In fact you can concatenate any number of other tables in one command, for 
example:
 
 @example
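## A sketch, assuming a matched F125W catalog was also made in the
## same way (the file names here are therefore assumptions):
$ asttable cat/xdf-f160w.fits -hCLUMPS --output=three-in-one.fits \
           --catcolumnfile=cat/xdf-f105w-on-f160w-lab.fits \
           --catcolumnfile=cat/xdf-f125w-on-f160w-lab.fits \
           --catcolumnhdu=CLUMPS --catcolumnhdu=CLUMPS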
@@ -3547,10 +3547,10 @@ $ asttable cat/xdf-f160w.fits -hCLUMPS 
--output=three-in-one-2.fits \
 $ asttable three-in-one-2.fits -i
 @end example
 
-But we aren't finished yet!
-There is a very big problem: its not immediately clear which one of 
@code{MAGNITUDE}, @code{MAGNITUDE-1} or @code{MAGNITUDE-2} columns belong to 
which filter!
+But we are not finished yet!
+There is a very big problem: it is not immediately clear which one of the 
@code{MAGNITUDE}, @code{MAGNITUDE-1} or @code{MAGNITUDE-2} columns belongs to 
which filter!
 Right now, you know this because you just ran this command.
-But in one hour, you'll start doubting your self and will be forced to go 
through your command history, trying to figure out if you added F105W first, or 
F125W.
+But in one hour, you will start doubting yourself and will be forced to go 
through your command history, trying to figure out if you added F105W first, or 
F125W.
 You should never torture your future-self (or your colleagues) like this!
 So, let's rename these confusing columns in the matched catalog.
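A sketch of such a renaming with Table's @option{--colmetadata} option (the 
new names are the ones used in the next commands; the filter order is an 
assumption based on the concatenation order above):

@example
$ asttable three-in-one-2.fits --output=three-in-one-3.fits \
           --colmetadata=MAGNITUDE,MAG-F160W,log,"Magnitude in F160W" \
           --colmetadata=MAGNITUDE-1,MAG-F105W,log,"Magnitude in F105W" \
           --colmetadata=MAGNITUDE-2,MAG-F125W,log,"Magnitude in F125W"
@end example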
 
@@ -3577,7 +3577,7 @@ $ asttable three-in-one-3.fits -i
 
 We now have all three magnitudes in one table and can start doing arithmetic 
on them (to estimate colors, which are just a subtraction of magnitudes).
 To use column arithmetic, simply call the column selection option 
(@option{--column} or @option{-c}), put the value in single quotations and 
start the value with @code{arith} (followed by a space) like the example below.
-Column arithmetic uses the same ``reverse polish notation'' as the Arithmetic 
program (see @ref{Reverse polish notation}), with almost all the same operators 
(see @ref{Arithmetic operators}), and some column-specific operators (that 
aren't available for images).
+Column arithmetic uses the same ``reverse polish notation'' as the Arithmetic 
program (see @ref{Reverse polish notation}), with almost all the same operators 
(see @ref{Arithmetic operators}), and some column-specific operators (that are 
not available for images).
 In column-arithmetic, you can identify columns by number (prefixed with a 
@code{$}) or name; for more, see @ref{Column arithmetic}.
 
 So let's estimate one color from @file{three-in-one-3.fits} using column 
arithmetic.
@@ -3595,11 +3595,11 @@ $ asttable three-in-one-3.fits -ocolor-cat.fits -c1,2 \
            -cRA,DEC --column='arith MAG-F105W MAG-F160W -'
 @end example
 
-This example again highlights the important point on using column names: if 
you don't know the commands before, you have no way of making sense of the 
first command: what is in column 5 and 7? why not subtract columns 3 and 4 from 
each other?
+This example again highlights the important point on using column names: if 
you did not know the commands before, you would have no way of making sense of 
the first command: what is in columns 5 and 7? Why not subtract columns 3 and 4 
from each other?
 Do you see how cryptic the first one is?
 Then look at the last one: even if you have no idea how this table was 
created, you immediately understand the desired operation.
 @strong{When you have column names, please use them.}
-If your table doesn't have column names, give them names with the 
@option{--colmetadata} (described above) as you are creating them.
+If your table does not have column names, give them names with the 
@option{--colmetadata} (described above) as you are creating them.
 But how about the metadata for the column you just created with column 
arithmetic?
 Have a look at the column metadata of the table produced above:
 
@@ -3610,7 +3610,7 @@ $ asttable color-cat.fits -i
 The name of the column produced by column arithmetic is @command{ARITH_1}!
 This is natural: Arithmetic has no idea what the modified column is!
 You could have multiplied two columns, or done much more complex 
transformations with many columns.
-@emph{Metadata can't be set automatically, your (the human) input is 
necessary.}
+@emph{Metadata cannot be set automatically; your (the human) input is 
necessary.}
 To add metadata, you can use @option{--colmetadata} like before:
 
 @example
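## A sketch: give the new column a name, unit and comment while
## creating it (the unit and comment strings are assumptions):
$ asttable three-in-one-3.fits -ocolor-cat.fits -cRA,DEC \
           --column='arith MAG-F105W MAG-F160W -' \
           --colmetadata=3,F105W-F160W,log,"Color: F105W-F160W."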
@@ -3625,7 +3625,7 @@ We want it to have the magnitudes in all three filters, 
as well as the three pos
 Recall that by convention in astronomy colors are defined by subtracting the 
redder magnitude from the bluer magnitude.
 In this way a larger color value corresponds to a redder object.
 So from the three magnitudes, we can produce three colors (as shown below).
-Also, because this is the final table we are creating here and want to use it 
later, we'll store it in @file{cat/} and we'll also give it a clear name and 
use the @option{--range} option to only print columns with a signal-to-noise 
ratio (@code{SN} column, from the F160W filter) above 5.
+Also, because this is the final table we are creating here and want to use it 
later, we will store it in @file{cat/}; we will also give it a clear name 
and use the @option{--range} option to only print rows with a 
signal-to-noise ratio (@code{SN} column, from the F160W filter) above 5.
 
 @example
 $ asttable three-in-one-3.fits --range=SN,5,inf -c1,2,RA,DEC,SN \
@@ -3648,7 +3648,7 @@ Above, updating/changing column metadata was done with 
the @option{--colmetadata
 But in many situations, the table is already made and you just want to update 
the metadata of one column.
 In such cases, using @option{--colmetadata} is overkill (wasting CPU/RAM 
energy or time if the table is large) because it will load the full table data 
and metadata into memory, just to change the metadata and write it back into a 
file.
 
-In scenarios when the table's data doesn't need to be changed and you just 
want to set or update the metadata, it is much more efficient to use basic FITS 
keyword editing.
+In scenarios when the table's data does not need to be changed and you just 
want to set or update the metadata, it is much more efficient to use basic FITS 
keyword editing.
 For example, in the FITS standard, column names are stored in the @code{TTYPE} 
header keywords, so let's have a look:
 
 @example
@@ -3657,8 +3657,8 @@ $ astfits two-in-one.fits -h1 | grep TTYPE
 @end example
 
 Changing/updating the column names is as easy as updating the values of these 
keywords.
-You don't need to touch the actual data!
-With the command below, we'll just update the @code{MAGNITUDE} and 
@code{MAGNITUDE-1} columns (which are respectively stored in the @code{TTYPE5} 
and @code{TTYPE11} keywords) by modifying the keyword values and checking the 
effect by listing the column metadata again:
+You do not need to touch the actual data!
+With the command below, we will just update the @code{MAGNITUDE} and 
@code{MAGNITUDE-1} columns (which are respectively stored in the @code{TTYPE5} 
and @code{TTYPE11} keywords) by modifying the keyword values and checking the 
effect by listing the column metadata again:
 
 @example
 $ astfits two-in-one.fits -h1 \
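          --update=TTYPE5,MAG-F160W --update=TTYPE11,MAG-F105W
$ asttable two-in-one.fits -i
## The continuation above is a sketch: the new names are assumptions,
## but the two '--update' calls match the keywords named in the text.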
@@ -3671,7 +3671,7 @@ You can see that the column names have indeed been 
changed without touching any
 You can do the same for the column units or comments by modifying the keywords 
starting with @code{TUNIT} or @code{TCOMM}.
 
 Generally, Gnuastro's Table is a very useful program in data analysis and what 
you have seen so far is just the tip of the iceberg.
-But to avoid making the tutorial even longer, we'll stop reviewing the 
features here, for more, please see @ref{Table}.
+But to avoid making the tutorial even longer, we will stop reviewing the 
features here, for more, please see @ref{Table}.
 Before continuing, let's just delete all the temporary FITS tables we placed 
in the top project directory:
 
 @example
@@ -3713,14 +3713,14 @@ For example, let's create the color-magnitude diagram 
for our measured targets.
 @cindex Histogram, 2D
 In many papers, the color-magnitude diagram is usually plotted as a scatter 
plot.
 However, scatter plots have a major limitation when there are a lot of points 
and they cluster together in one region of the plot: the possible correlation 
in that dense region is lost (because the points fall over each other).
-In such cases, its much better to use a 2D histogram.
+In such cases, it is much better to use a 2D histogram.
 In a 2D histogram, the full range in both columns is divided into discrete 2D 
bins (or pixels!) and we count how many objects fall in each 2D bin.
 
 Since a 2D histogram is a pixelated space, we can simply save it as a FITS 
image and view it in a FITS viewer.
 Let's do this in the command below.
-As is common with color-magnitude plots, we'll put the redder magnitude on the 
horizontal axis and the color on the vertical axis.
-We'll set both dimensions to have 100 bins (with @option{--numbins} for the 
horizontal and @option{--numbins2} for the vertical).
-Also, to avoid strong outliers in any of the dimensions, we'll manually set 
the range of each dimension with the @option{--greaterequal}, 
@option{--greaterequal2}, @option{--lessthan} and @option{--lessthan2} options.
+As is common with color-magnitude plots, we will put the redder magnitude on 
the horizontal axis and the color on the vertical axis.
+We will set both dimensions to have 100 bins (with @option{--numbins} for the 
horizontal and @option{--numbins2} for the vertical).
+Also, to avoid strong outliers in any of the dimensions, we will manually set 
the range of each dimension with the @option{--greaterequal}, 
@option{--greaterequal2}, @option{--lessthan} and @option{--lessthan2} options.
 
 @example
 $ aststatistics cat/mags-with-color.fits -cMAG-F160W,F105W-F160W \
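                --histogram2d=image --manualbinrange \
                --numbins=100  --greaterequal=22  --lessthan=30 \
                --numbins2=100 --greaterequal2=-1 --lessthan2=3 \
                --output=cmd.fits
## The continuation above is a sketch: '--histogram2d=image' writes
## the histogram as a FITS image; the range values are assumptions.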
@@ -3763,9 +3763,9 @@ $ astscript-fits-view cmd.fits --ds9scale=minmax \
 @cindex PGFPlots (@LaTeX{} package)
 This is good for a fast progress update.
 But for your paper or more official report, you want to show something with 
higher quality.
-For that, you can use the PGFPlots package in @LaTeX{} to add axises in the 
same font as your text, sharp grids and many other elegant/powerful features 
(like over-plotting interesting points, lines and etc).
+For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same 
font as your text, sharp grids and many other elegant/powerful features (like 
over-plotting interesting points, lines, etc.).
 But to load the 2D histogram into PGFPlots, you first need to convert the FITS 
image into a more standard format, for example PDF.
-We'll use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
+We will use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
 
 @example
 $ astconvertt cmd.fits --colormap=sls-inverse --borderwidth=0 -ocmd.pdf
@@ -3785,7 +3785,7 @@ $ cat report-cmd/report.tex
 \dimendef\prevdepth=0
 \begin@{document@}
 
-You can write all you want here...\par
+You can write all you want here...
 
 \begin@{tikzpicture@}
   \begin@{axis@}[
@@ -3815,7 +3815,7 @@ Open the newly created @file{report.pdf} and enjoy the 
exquisite quality.
 The improved quality, blending in with the text, vector-graphics resolution 
and other features make this plot pleasing to the eye, and let your readers 
focus on the main point of your scientific argument.
 PGFPlots can also build the PDF of the plot separately from the rest of the 
paper/report; see @ref{2D histogram as a table for plotting} for the necessary 
changes in the preamble.
 
-We won't go much deeper into the Statistics program here, but there is so much 
more you can do with it.
+We will not go much deeper into the Statistics program here, but there is so 
much more you can do with it.
 After finishing the tutorial, see @ref{Statistics}.
 
 
@@ -3831,9 +3831,9 @@ To do this, we can ignore the labeled images of 
NoiseChisel of Segment, we can j
 That labeled image can then be given to MakeCatalog.
 
 @cindex GNU AWK
-To generate the apertures catalog we'll use Gnuastro's MakeProfiles (see 
@ref{MakeProfiles}).
+To generate the apertures catalog we will use Gnuastro's MakeProfiles (see 
@ref{MakeProfiles}).
 But first we need a list of positions (aperture photometry needs a-priori 
knowledge of your target positions).
-So we'll first read the clump positions from the F160W catalog, then use AWK 
to set the other parameters of each profile to be a fixed circle of radius 5 
pixels (recall that we want all apertures to have an identical size/area in 
this scenario).
+So we will first read the clump positions from the F160W catalog, then use AWK 
to set the other parameters of each profile to be a fixed circle of radius 5 
pixels (recall that we want all apertures to have an identical size/area in 
this scenario).
 
 @example
 $ rm *.fits *.txt
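## A sketch of the catalog-construction step: the two columns are the
## clump positions; the AWK numbers follow MakeProfiles' catalog
## format (ID, position, profile code, radius, and so on) and are
## assumptions here.
$ asttable cat/xdf-f160w.fits -hCLUMPS -cRA,DEC \
           | awk '!/^#/@{print NR, $1, $2, 5, 5, 0, 0, 1, NR, 1@}' \
           > apertures.txt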
@@ -3857,7 +3857,7 @@ $ astmkprof apertures.txt 
--background=flat-ir/xdf-f160w.fits \
 Open @file{apertures.fits} with a FITS image viewer (like SAO DS9) and look 
around at the circles placed over the targets.
 Also open the input image and Segment's clumps image and compare them with the 
positions of these circles.
 Where the apertures overlap, you will notice that one label has replaced the 
other (because of the @option{--replace} option).
-In the future, MakeCatalog will be able to work with overlapping labels, but 
currently it doesn't.
+In the future, MakeCatalog will be able to work with overlapping labels, but 
currently it does not.
 If you are interested, please join us in completing Gnuastro with added 
improvements like this (see task 14750 
@footnote{@url{https://savannah.gnu.org/task/index.php?14750}}).
 
 We can now feed the @file{apertures.fits} labeled image into MakeCatalog 
instead of Segment's output as shown below.
@@ -3881,12 +3881,12 @@ You can also change the filter name and zero point 
magnitudes and run this comma
 @subsection Matching catalogs
 
 In the example above, we had the luxury of generating the catalogs ourselves, 
and were thus able to generate them in a way that the rows match.
-But this isn't generally the case.
-In many situations, you need to use catalogs from many different telescopes, 
or catalogs with high-level calculations that you can't simply regenerate with 
the same pixels without spending a lot of time or using heavy computation.
+But this is not generally the case.
+In many situations, you need to use catalogs from many different telescopes, 
or catalogs with high-level calculations that you cannot simply regenerate with 
the same pixels without spending a lot of time or using heavy computation.
 In such cases, when each catalog has the coordinates of its own objects, you 
can use the coordinates to match the rows with Gnuastro's Match program (see 
@ref{Match}).
 
 As the name suggests, Gnuastro's Match program will match rows based on 
distance (or aperture in 2D) in one, two, or three columns.
-For this tutorial, let's try matching the two catalogs that weren't created 
from the same labeled images, recall how each has a different number of rows:
+For this tutorial, let's try matching the two catalogs that were not created 
from the same labeled images; recall how each has a different number of rows:
 
 @example
 $ asttable cat/xdf-f105w.fits -hCLUMPS -i
@@ -3909,7 +3909,7 @@ $ astfits matched.fits
 From the second command, you see that the output has two extensions and that 
both have the same number of rows.
 The rows in each extension are the matched rows of the respective input table: 
those in the first HDU come from the first input and those in the second HDU 
come from the second.
 However, their order may be different from the input tables because the rows 
match: the first row in the first HDU matches the first row in the second 
HDU, etc.
-You can also see which objects didn't match with the @option{--notmatched}, 
like below.
+You can also see which objects did not match with the @option{--notmatched} 
option, like below.
 Note how each extension of the output now has a different number of rows.
 
 @example
@@ -3944,8 +3944,8 @@ $ astfits matched.fits
 @subsection Reddest clumps, cutouts and parallelization
 @cindex GNU AWK
 As a final step, let's go back to the original clumps-based color measurement 
we generated in @ref{Working with catalogs estimating colors}.
-We'll find the objects with the strongest color and make a cutout to inspect 
them visually and finally, we'll see how they are located on the image.
-With the command below, we'll select the reddest objects (those with a color 
larger than 1.5):
+We will find the objects with the strongest color and make a cutout to inspect 
them visually and finally, we will see how they are located on the image.
+With the command below, we will select the reddest objects (those with a color 
larger than 1.5):
 
 @example
 $ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf
@@ -3959,8 +3959,8 @@ $ asttable cat/mags-with-color.fits 
--range=F105W-F160W,1.5,inf | wc -l
 @end example
 
 Let's crop the F160W image around each of these objects, but we first need a 
unique identifier for them.
-We'll define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
-Note that since we are making a plain text table, we'll define the necessary 
(for the string-type first column) metadata manually (see @ref{Gnuastro text 
table format}).
+We will define this identifier using the object and clump labels (with an 
underscore between them) and feed the output of the command above to AWK to 
generate a catalog.
+Note that since we are making a plain text table, we will define the necessary 
(for the string-type first column) metadata manually (see @ref{Gnuastro text 
table format}).
 
 @example
 $ echo "# Column 1: ID [name, str10] Object ID" > cat/reddest.txt
@@ -3981,8 +3981,8 @@ $ astscript-ds9-region cat/reddest.txt -c2,3 --mode=wcs \
 @end example
 
 We can now feed @file{cat/reddest.txt} into Gnuastro's Crop program to get 
separate postage stamps for each object.
-To keep things clean, we'll make a directory called @file{crop-red} and ask 
Crop to save the crops in this directory.
-We'll also add a @file{-f160w.fits} suffix to the crops (to remind us which 
filter they came from).
+To keep things clean, we will make a directory called @file{crop-red} and ask 
Crop to save the crops in this directory.
+We will also add a @file{-f160w.fits} suffix to the crops (to remind us which 
filter they came from).
 The width of the crops will be 15 arc-seconds (or 15/3600 degrees, since 
degrees are the units of the WCS).
 
 @example
@@ -3992,7 +3992,7 @@ $ astcrop flat-ir/xdf-f160w.fits --mode=wcs --namecol=ID \
           --suffix=-f160w.fits --output=crop-red
 @end example
 
-Like the MakeProfiles command in @ref{Aperture photometry}, if you look at the 
order of the crops, you will notice that the crops aren't made in order!
+Like the MakeProfiles command in @ref{Aperture photometry}, if you look at the 
order of the crops, you will notice that the crops are not made in order!
 This is because each crop is independent of the rest; therefore crops are done 
in parallel, and parallel operations are asynchronous.
 In the command above, you can change @file{f160w} to @file{f105w} to make the 
crops in both filters.
@@ -4022,7 +4022,7 @@ But like the crops, each JPEG image is independent, so 
let's parallelize it.
 In other words, we want to run more than one instance of the command at 
any moment.
 To do that, we will use @url{https://en.wikipedia.org/wiki/Make_(software), 
Make}.
 Make is a very wonderful pipeline management system, and the most common and 
powerful implementation is @url{https://www.gnu.org/software/make, GNU Make}, 
which has a complete manual just like this one.
-We can't go into the details of Make here, for a hands-on video tutorial, see 
this @url{https://peertube.stream/w/iJitjS3r232Z8UPMxKo6jq, video introduction}.
+We cannot go into the details of Make here; for a hands-on video tutorial, see 
this @url{https://peertube.stream/w/iJitjS3r232Z8UPMxKo6jq, video introduction}.
 To do the process above in Make, please copy the contents below into a 
plain-text file called @file{Makefile}.
 Just replace the @code{__[TAB]__} part at the start of the line with a single 
`@key{TAB}' button on your keyboard.
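As a sketch, such a @file{Makefile} can look like the following (the JPEG 
conversion options from before are omitted for brevity; a static pattern rule 
converts each crop):

@example
jpgs=$(patsubst %.fits,%.jpg,$(wildcard crop-red/*.fits))
all: $(jpgs)
$(jpgs): %.jpg: %.fits
__[TAB]__astconvertt $< --output=$@@
@end example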
 
@@ -4043,8 +4043,8 @@ $ make -j12
 
 @noindent
 Did you notice how much faster this one was?
-When possible, its always very helpful to do your analysis in parallel.
-You can build very complex workflows with Make, for example see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. (2021)} so its worth 
spending some time to master.
+When possible, it is always very helpful to do your analysis in parallel.
+You can build very complex workflows with Make (for example, see 
@url{https://arxiv.org/abs/2006.03018, Akhlaghi et al. (2021)}), so it is worth 
spending some time to master it.
 
 @node FITS images in a publication, Marking objects for publication, Reddest 
clumps cutouts and parallelization, General program usage tutorial
 @subsection FITS images in a publication
@@ -4068,7 +4068,7 @@ However, in ConvertType, there is no graphic interface, 
so it has very few depen
 So in this concluding step of the analysis, let's build a nice 
publication-ready plot, showing the positions of the reddest objects in the 
image for our paper.
 
 In @ref{Reddest clumps cutouts and parallelization}, we already used 
ConvertType to make JPEG postage stamps.
-Here, we'll use it to make a PDF image of the whole deep region.
+Here, we will use it to make a PDF image of the whole deep region.
 To start, let's simply run ConvertType on the F160W image:
 
 @example
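## A first attempt (a sketch; the input is NoiseChisel's
## Sky-subtracted extension):
$ astconvertt nc/xdf-f160w.fits -hINPUT-NO-SKY --output=xdf-f160w.pdf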
@@ -4143,7 +4143,7 @@ These are due to the fact that the magnitude is defined 
on a logarithmic scale a
 @end itemize
 
 In other words, we should replace all NaN pixels, and pixels with a surface 
brightness fainter than the image surface brightness limit, with this limit.
-With the first command below, we'll first extract the surface brightness limit 
from the catalog headers that we calculated before, and then call Arithmetic to 
use this limit.
+With the first command below, we will first extract the surface brightness 
limit from the catalog headers that we calculated before, and then call 
Arithmetic to use this limit.
 
 @example
 $ sblimit=$(astfits cat/xdf-f160w.fits --keyvalue=SBLMAG -q)
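## A sketch of the Arithmetic call: convert the pixels to surface
## brightness, then replace blank pixels and pixels fainter than the
## limit with the limit itself.  The operator and option names are
## from Gnuastro's Arithmetic and Fits programs; the zero point value
## is an assumption (use the one for your own data).
$ zeropoint=25.94
$ pixarcsec2=$(astfits nc/xdf-f160w.fits --pixelareaarcsec2 -q)
$ astarithmetic nc/xdf-f160w.fits -hINPUT-NO-SKY \
                $zeropoint $pixarcsec2 counts-to-sb set-s \
                s s isblank s $sblimit gt or $sblimit where \
                --output=xdf-f160w-sb.fits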
@@ -4194,16 +4194,16 @@ $ astarithmetic xdf-f160w-warped.fits $zeropoint 
$pixarcsec2 \
 $ astconvertt xdf-f160w-sb.fits --output=xdf-f160w-sb.pdf --fluxlow=20
 @end example
 
-Above, we needed to re-calculate the pixel area of the warpped image, but we 
didn't need to re-calculate the surface brightness limit!
+Above, we needed to re-calculate the pixel area of the warped image, but we 
did not need to re-calculate the surface brightness limit!
 The reason is that the surface brightness limit is independent of the pixel 
area (in its derivation, the pixel area has been accounted for).
 As a side-effect of the warping, the number of pixels in the image also 
dramatically decreased; therefore the volume of the output PDF (in bytes) is 
also smaller, making your paper/report easier to upload/download or send by 
email.
 This visual resolution is still more than enough for including on top of a 
column in your paper!
 
 @cartouche
 @noindent
-@strong{I don't have the zeropoint of my image:} The absolute value of the 
zeropoint is irrelevant for the finally produced PDF.
+@strong{I do not have the zeropoint of my image:} The absolute value of the 
zeropoint is irrelevant for the finally produced PDF.
 We used it here because it was available and makes the numbers physically 
understandable.
-If you don't have the zeropoint, just set it to zero (which is also the 
default zeropoint used by MakeCatalog when it estimates the surface brightness 
limit).
+If you do not have the zeropoint, just set it to zero (which is also the 
default zeropoint used by MakeCatalog when it estimates the surface brightness 
limit).
 For the value of @option{--fluxlow} above, you can simply subtract 
@mymath{\sim10} from the surface brightness limit.
 @end cartouche
 
@@ -4238,12 +4238,12 @@ $ rm *.fits *.pdf
 In @ref{FITS images in a publication} we created a ready-to-print 
visualization of the FITS image used in this tutorial.
 However, you rarely want to show a naked image like that!
 You usually want to highlight some objects (that are the target of your 
science) over the image and show different marks for the various types of 
objects you are studying.
-In this tutorial, we'll do just that: select a sub-set of the full catalog of 
clumps, and show them with different marks shapes and colors, while also adding 
some text under each mark.
+In this tutorial, we will do just that: select a sub-set of the full catalog 
of clumps, and show them with different mark shapes and colors, while also 
adding some text under each mark.
 To add coordinates on the edges of the figure in your paper, see 
@ref{Annotations for figure in paper}.
 
 To start with, let's put a red plus sign over the sub-sample of reddest clumps 
similar to @ref{Reddest clumps cutouts and parallelization}.
-First, we'll need to make the table of marks.
-We'll choose those with a color stronger than 1.5 magnitudes and a 
signal-to-noise ratio (in F160W) larger than 5.
+First, we will need to make the table of marks.
+We will choose those with a color stronger than 1.5 magnitudes and a 
signal-to-noise ratio (in F160W) larger than 5.
 We also only need the RA, Dec, color and magnitude (in F160W) columns.
 
 @example
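## A sketch of the selection (column names follow the final catalog
## made earlier):
$ asttable cat/mags-with-color.fits --range=F105W-F160W,1.5,inf \
           --range=SN,5,inf -cRA,DEC,F105W-F160W,MAG-F160W \
           --output=reddest-cat.fits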
@@ -4260,7 +4260,7 @@ $ cd report-image
 @end example
 
 Gnuastro's ConvertType program also has features to add marks over the finally 
produced PDF.
-Below, we'll start with the same @command{astconvertt} command of the previous 
section.
+Below, we will start with the same @command{astconvertt} command of the 
previous section.
 The positions of the marks should be given as a table to the @option{--marks} 
option.
 Two other options are also mandatory: @option{--markcoords} identifies the 
columns that contain the coordinates of each mark and @option{--mode} specifies 
if the coordinates are in image or WCS coordinates.
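A sketch of such a call (the image and table names follow the previous steps):

@example
$ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 \
              --marks=reddest-cat.fits --mode=wcs --markcoords=RA,DEC
@end example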
 
@@ -4272,20 +4272,20 @@ $ astconvertt sb.fits --output=reddest.pdf --fluxlow=20 
\
 
 Open the output @file{reddest.pdf} and see the result.
 You will see relatively thick red circles placed over the given coordinates.
-In your PDF browser, zoom-in to one of the regions, you will see that while 
the pixels of the background image become larger, the lines of these regions 
don't degrade!
+In your PDF browser, zoom-in to one of the regions, you will see that while 
the pixels of the background image become larger, the lines of these regions do 
not degrade!
 This is the concept/power of Vector Graphics: ideal for publication!
 For more on raster (pixelated) and vector (infinite-resolution) graphics, see 
@ref{Raster and Vector graphics}.
 
 We had planned to put a plus-sign on each object.
-However, because we didn't explicitly ask for a certain shape, ConvertType put 
a circle.
+However, because we did not explicitly ask for a certain shape, ConvertType 
put a circle.
 Each mark can have its own separate shape.
 Shapes can be given by a name or a code.
 The full list of available shape names and codes is given in the description 
of the @option{--markshape} option of @ref{Drawing with vector graphics}.
 
 To use a different shape, we need to add a new column to the base table, 
containing the identifier of the desired shape for each mark.
 For example, the code for the plus sign is @code{2}.
-With the commands below, we'll add a new column with this fixed value.
-With the first AWK command we'll make a single-column file, where all the rows 
have the same value.
+With the commands below, we will add a new column with this fixed value.
+With the first AWK command we will make a single-column file, where all the 
rows have the same value.
 We pipe our base table into AWK, so it has the same number of rows.
 With the second command, we concatenate (or append) the new column with Table, 
and give this new column the name @code{SHAPE} (to easily refer to it later and 
not have to count).
 With the third command, we clean up behind ourselves (deleting the extra 
@file{params.txt} file).
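A sketch of those three commands (the new file name is an assumption):

@example
$ asttable reddest-cat.fits | awk '!/^#/@{print 2@}' > params.txt
$ asttable reddest-cat.fits --catcolumnfile=params.txt \
           --colmetadata=5,SHAPE,id,"Shape code of the mark" \
           --output=reddest-marks.fits
$ rm params.txt
@end example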
@@ -4338,7 +4338,7 @@ $ asttable reddest-cat.fits -cF105W-F160W \
 
 Have a look at the resulting @file{reddest.pdf}.
 You see that the circles are much larger than the plus signs.
-This is because the ``size'' of a cross is defined to be its ful width, but 
for a circle, the value in the size column is the radius.
+This is because the ``size'' of a cross is defined to be its full width, but 
for a circle, the value in the size column is the radius.
 The way each shape interprets the value of the size column is fully described 
under @option{--markshape} of @ref{Drawing with vector graphics}.
 To make them more comparable, let's set the circle sizes to be half of the 
cross sizes.
 
@@ -4359,7 +4359,7 @@ If your terminal supports 24-bit true-color, you can see 
all the colors by runni
 $ astconvertt --listcolors
 @end example
 
-we'll give a ``Sienna'' color for the objects that are fainter than 29th 
magnitude and a ``deeppink'' color to the brighter ones (while keeping the same 
shapes definition as before)
+We will give a ``Sienna'' color to the objects that are fainter than 29th 
magnitude and a ``deeppink'' color to the brighter ones (while keeping the same 
shape definitions as before).
 Since there are many colors, using their codes can make the table hard to read 
by a human!
 So let's use the color names instead of the color codes in the example below 
(this is also useful in other columns that require strings, like the font name).
 
@@ -4416,13 +4416,13 @@ $ astconvertt --showfonts
 
 As you see, there are many ways you can customize each mark!
 The above examples were just the tip of the iceberg!
-But this section has already become long so we'll stop it here (see the box at 
the end of this section for yet another useful example).
+But this section has already become long so we will stop it here (see the box 
at the end of this section for yet another useful example).
 Like above, each feature of a mark can be controlled with a column in the 
table of mark information.
 Please see @ref{Drawing with vector graphics} for the full list of 
columns/features that you can use.
 
 @cartouche
 @noindent
-@strong{Drawing ellipses:} With the commands below, you can measure the 
elliptical properties of the objects and visualized them in a ready-to-publish 
PDF (we'll only show the ellipses of the largest clumps):
+@strong{Drawing ellipses:} With the commands below, you can measure the 
elliptical properties of the objects and visualize them in a ready-to-publish 
PDF (we will only show the ellipses of the largest clumps):
 @example
 $ astmkcatalog ../seg/xdf-f160w.fits --ra --dec --semimajor \
                --axisratio --positionangle --clumpscat \
@@ -4444,7 +4444,7 @@ To conclude this section, let us highlight an important 
factor to consider in ve
 In ConvertType, things like line width or font size are defined in units of 
``point''s.
 In vector graphics standards, 72 ``point''s correspond to one inch.
 Therefore, one way you can change these factors for all the objects is to 
assign a larger or smaller print size to the image.
-The print size is just a meta-data entry, and won't affect the file's volume 
in bytes!
+The print size is just a meta-data entry, and will not affect the file's 
volume in bytes!
 You can do this with the @option{--widthincm} option.
 Try adding this option and giving it very different values like @code{5} or 
@code{30}.
 
@@ -4452,7 +4452,7 @@ Try adding this option and giving it very different 
values like @code{5} or @cod
 @subsection Writing scripts to automate the steps
 
 In the previous sub-sections, we went through a series of steps like 
downloading the necessary datasets (in @ref{Setup and data download}), 
detecting the objects in the image, and finally selecting a particular subset 
of them to inspect visually (in @ref{Reddest clumps cutouts and 
parallelization}).
-To benefit most effectively from this subsection, please go through the 
previous sub-sections, and if you haven't actually done them, we recommended to 
do/run them before continuing here.
+To benefit most effectively from this subsection, please go through the 
previous sub-sections, and if you have not actually done them, we recommend 
doing/running them before continuing here.
 
 @cindex @command{history}
 @cindex Shell history
@@ -4466,7 +4466,7 @@ $ history
 @end example
 
 @cindex GNU Bash
-Try it in your teminal to see for your self.
+Try it in your terminal to see for yourself.
 By default in GNU Bash, it shows the last 500 commands.
 You can also save this ``history'' of previous commands to a file using shell 
redirection (to have it after your next 500 commands) with this command:
 
@@ -4476,7 +4476,7 @@ $ history > my-previous-commands.txt
 
 This is a good way to temporarily keep track of every single command you ran.
 But in the middle of all the useful commands, you will have many extra 
commands, like tests that you did before/after the good output of a step (that 
you decided to continue working on), or an unrelated job you had to do in the 
middle of this project.
-Because of these impurities, after a few days (that you have forgot the 
context: tests you didn't end-up using, or unrelated jobs) reading this full 
history will be very frustrating.
+Because of these impurities, after a few days (when you have forgotten the 
context: tests you did not end up using, or unrelated jobs) reading this full 
history will be very frustrating.
 
 Keeping the final commands that were used in each step of an analysis is a 
common problem for anyone who is doing something serious with the computer.
 But simply keeping the most important commands in a text file is not enough: 
the small steps in the middle (like making a directory to keep the outputs of 
one step) are also important.
@@ -4484,7 +4484,7 @@ In other words, the only way you can be sure that you are 
under control of your
 
 @cindex Shell script
 @cindex Script, shell
-Fortunately, typing commands interactively with your fingers isn't the only 
way to operate the shell.
+Fortunately, typing commands interactively with your fingers is not the only 
way to operate the shell.
 The shell can also take its orders/commands from a plain-text file, which is 
called a @emph{script}.
 When given a script, the shell will read it line-by-line as if you have 
actually typed it manually.
 
@@ -4516,13 +4516,13 @@ When confronted with these characters, the script will 
be interpreted with the p
 In this case, we want to write a shell script, and the most common shell 
program is GNU Bash, which is installed in @file{/bin/bash}.
 So the first line of your script should be `@code{#!/bin/bash}'@footnote{
 When the script is to be run by the same shell that is calling it (like this 
script), the shebang is optional.
-But it is still recommended, because it ensures that even if the user isn't 
using GNU Bash, the script will be run in GNU Bash: given the differences 
between various shells, writing truely portable shell scripts, that can be run 
by many shell programs/implementations, isn't easy (sometimes not possible!).}.
+But it is still recommended, because it ensures that even if the user is not 
using GNU Bash, the script will be run in GNU Bash: given the differences 
between various shells, writing truly portable shell scripts that can be run 
by many shell programs/implementations is not easy (sometimes not possible!).}.
 
 It may happen (rarely) that GNU Bash is in another location on your system.
 In other cases, you may prefer to use a non-standard version of Bash installed 
in another location (that has higher priority in your @code{PATH}, see 
@ref{Installation directory}).
 In such cases, you can use the `@code{#!/usr/bin/env bash}' shebang instead.
 Through the @code{env} program, this shebang will look in your @code{PATH} and 
use the first @command{bash} it finds to run your script.
-But for simplicity in the rest of the tutorial, we'll continue with the 
`@code{#!/bin/bash}' shebang.
+But for simplicity in the rest of the tutorial, we will continue with the 
`@code{#!/bin/bash}' shebang.
 
 Using your favorite text editor, make a new empty file; let's call it 
@file{my-first-script.sh}.
Write the GNU Bash shebang (above) as its first line.
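
For illustration, a complete minimal script could look like this (the two 
commands in the body are arbitrary placeholders, not part of this tutorial):

@example
#!/bin/bash

# Arbitrary placeholder commands, just to show the structure:
echo "This script was run on $(date)" > hello.txt
cat hello.txt
@end example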
@@ -4558,7 +4558,7 @@ $ chmod +x my-first-script.sh
 
 Your script is now ready to run/execute the series of commands.
 To run it, you should call it while specifying its location in the file system.
-Since you are currently in the same directory as the script, its easiest to 
use relative addressing like below (where `@code{./}' means the current 
directory).
+Since you are currently in the same directory as the script, it is easiest to 
use relative addressing like below (where `@code{./}' means the current 
directory).
 But before running your script, first delete the two @file{a.txt} and 
@file{a.fits} files that were created when you interactively ran the commands.
 
 @example
@@ -4597,11 +4597,11 @@ aststatistics a.fits
 @end example
 
 @noindent
-Isn't this much more easier to read now?
+Is this not much easier to read now?
 Comments help to provide human-friendly context to the raw commands.
 At the time you make a script, comments may seem like an extra effort and slow 
you down.
 But in one year, you will forget almost everything about your script and you 
will appreciate the effort so much!
-Think of the comments as an email to your future-self and always put a 
well-written description of the context/purpose (most importantly, things that 
aren't directly clear by reading the commands) in your scripts.
+Think of the comments as an email to your future-self and always put a 
well-written description of the context/purpose (most importantly, things that 
are not directly clear by reading the commands) in your scripts.
 
 The example above was a very basic and mostly redundant series of commands, 
just to show the basic concepts behind scripts.
 You can put any (arbitrarily long and complex) series of commands in a script 
by following the two rules: 1) add a shebang, and 2) enable the executable flag.
@@ -4668,7 +4668,7 @@ astcrop --mode=wcs -h0 --output=$f160w_flat \
 
 The first thing you may notice is that even if you already have the downloaded 
input images, this script will always try to re-download them.
 Also, if you re-run the script, you will notice that @command{mkdir} prints an 
error message that the download directory already exists.
-Therefore, the script above isn't too useful and some modifications are 
necessary to make it more generally useful.
+Therefore, the script above is not too useful and some modifications are 
necessary to make it more generally useful.
 Here are some general tips that are often very useful when writing scripts:
 
 @table @strong
@@ -4693,10 +4693,10 @@ else
 fi
 @end example
 To check the existence of a directory instead of a file, use @code{-d} instead 
of @code{-f}.
-To negate a conditional, use `@code{!}' and note that conditionals can be 
written in one line also (useful for when its short).
+To negate a conditional, use `@code{!}' and note that conditionals can also be 
written in one line (useful when the conditional is short).
 
 One common scenario where you'll need to check the existence of directories is 
when you are making them: the default @command{mkdir} command will crash if the 
desired directory already exists.
-On some systems (including GNU/Linux distributions), @code{mkdir} has options 
to deal with such cases. But if you want your script to be portable, its best 
to check yourself like below:
+On some systems (including GNU/Linux distributions), @code{mkdir} has options 
to deal with such cases. But if you want your script to be portable, it is best 
to check yourself like below:
 
 @example
 if ! [ -d DIRNAME ]; then mkdir DIRNAME; fi
@@ -4772,9 +4772,9 @@ fi
 @subsection Citing and acknowledging Gnuastro
 In conclusion, we hope this extended tutorial has been a good starting point 
to help in your exciting research.
 If this book or any of the programs in Gnuastro have been useful for your 
research, please cite the respective papers, and acknowledge the funding 
agencies that made all of this possible.
-Without citations, we won't be able to secure future funding to continue 
working on Gnuastro or improving it, so please take software citation seriously 
(for all the scientific software you use, not just Gnuastro).
+Without citations, we will not be able to secure future funding to continue 
working on Gnuastro or improving it, so please take software citation seriously 
(for all the scientific software you use, not just Gnuastro).
 
-To help you in this aspect is well, all Gnuastro programs have a 
@option{--cite} option to facilitate the citation and acknowledgment.
+To help you in this, all Gnuastro programs have a @option{--cite} option to 
facilitate the citation and acknowledgment.
 Just note that it may be necessary to cite additional papers for different 
programs, so please try it out on all the programs that you used, for example:
 
 @example
@@ -4795,12 +4795,12 @@ $ astnoisechisel --cite
 The outer wings of large and extended objects can sink into the noise very 
gradually and can have a large variety of shapes (for example due to tidal 
interactions).
 Therefore separating the outer boundaries of the galaxies from the noise can 
be particularly tricky.
 Besides causing an under-estimation of the total brightness of the target, 
failure to detect such faint wings will also cause a bias in the noise 
measurements, thereby hampering the accuracy of any measurement on the dataset.
-Therefore even if they don't constitute a significant fraction of the target's 
light, or aren't your primary target, these regions must not be ignored.
-In this tutorial, we'll walk you through the strategy of detecting such 
targets using @ref{NoiseChisel}.
+Therefore even if they do not constitute a significant fraction of the 
target's light, or are not your primary target, these regions must not be 
ignored.
+In this tutorial, we will walk you through the strategy of detecting such 
targets using @ref{NoiseChisel}.
 
 @cartouche
 @noindent
-@strong{Don't start with this tutorial:} If you haven't already completed 
@ref{General program usage tutorial}, we strongly recommend going through that 
tutorial before starting this one.
+@strong{Do not start with this tutorial:} If you have not already completed 
@ref{General program usage tutorial}, we strongly recommend going through that 
tutorial before starting this one.
 Basic features like access to this book on the command-line, the configuration 
files of Gnuastro's programs, benefiting from the modular nature of the 
programs, viewing multi-extension FITS files, or using NoiseChisel's outputs 
are discussed in more detail there.
 @end cartouche
 
@@ -4808,9 +4808,9 @@ Basic features like access to this book on the 
command-line, the configuration f
 @cindex NGC5195
 @cindex SDSS, Sloan Digital Sky Survey
 @cindex Sloan Digital Sky Survey, SDSS
-We'll try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
-We'll use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
-Due to its more peculiar low surface brightness structure/features, we'll 
focus on the dwarf companion galaxy of the group (or NGC 5195).
+We will try to detect the faint tidal wings of the beautiful M51 
group@footnote{@url{https://en.wikipedia.org/wiki/M51_Group}} in this tutorial.
+We will use a dataset/image from the public @url{http://www.sdss.org/, Sloan 
Digital Sky Survey}, or SDSS.
+Due to its more peculiar low surface brightness structure/features, we will 
focus on the dwarf companion galaxy of the group (or NGC 5195).
 
 
 @menu
@@ -4825,20 +4825,20 @@ Due to its more peculiar low surface brightness 
structure/features, we'll focus
 @node Downloading and validating input data, NoiseChisel optimization, 
Detecting large extended targets, Detecting large extended targets
 @subsection Downloading and validating input data
 
-To get the image, you can use SDSS's @url{https://dr12.sdss.org/fields, Simple 
field search} tool.
+To get the image, you can use the @url{https://dr12.sdss.org/fields, simple 
field search} tool of SDSS.
 As long as it is covered by the SDSS, you can find an image containing your 
desired target either by providing a standard name (if it has one), or its 
coordinates.
 To access the dataset we will use here, write @code{NGC5195} in the ``Object 
Name'' field and press the ``Submit'' button.
 
 @cartouche
 @noindent
 @strong{Type the example commands:} Try to type the example commands on your 
terminal and use the history feature of your command-line (by pressing the 
``up'' button to retrieve previous commands).
-Don't simply copy and paste the commands shown here.
+Do not simply copy and paste the commands shown here.
 This will help simulate future situations when you are processing your own 
datasets.
 @end cartouche
 
 @cindex GNU Wget
 You can see the list of available filters under the color image.
-For this demonstration, we'll use the r-band filter image.
+For this demonstration, we will use the r-band filter image.
 By clicking on the ``r-band FITS'' link, you can download the image.
 Alternatively, you can just run the following command to download it with GNU 
Wget@footnote{To make the command easier to view on screen or in a page, we 
have defined the top URL of the image as the @code{topurl} shell variable.
 You can just replace the value of this variable with @code{$topurl} in the 
@command{wget} command.}.
@@ -4852,7 +4852,7 @@ $ 
topurl=https://dr12.sdss.org/sas/dr12/boss/photoObj/frames
 $ wget $topurl/301/3716/6/frame-r-003716-6-0117.fits.bz2 -Or.fits.bz2
 @end example
 
-When you want to reproduce a previous result (a known analysis, on a known 
dataset, to get a known result: like the case here!) it is important to verify 
that the file is correct: that the input file hasn't changed (on the remote 
server, or in your own archive), or there was no downloading problem.
+When you want to reproduce a previous result (a known analysis, on a known 
dataset, to get a known result: like the case here!) it is important to verify 
that the file is correct: that the input file has not changed (on the remote 
server, or in your own archive), or there was no downloading problem.
 Otherwise, if the data have changed in your server/archive, and you use the 
same script, you will get a different result, causing a lot of confusion!
 
 @cindex Checksum
@@ -4873,7 +4873,7 @@ If it opens and you see the image of M51 and NGC5195, 
then there was no download
 In this case, please contact us at @code{bug-gnuastro@@gnu.org}.}).
 To get a better feeling of checksums, open your favorite text editor and make 
a test file by writing something in it.
 Save it and calculate the text file's SHA-1 checksum with @command{sha1sum}.
-Try renaming that file, and you'll see the checksum hasn't changed (checksums 
only look into the contents, not the name/location of the file).
+Try renaming that file, and you'll see the checksum has not changed (checksums 
only look into the contents, not the name/location of the file).
 Then open the file with your text editor again, make a change and re-calculate 
its checksum; you will see that the checksum string has changed.
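
For example (the file name and contents here are arbitrary):

@example
$ echo "some test content" > test.txt
$ sha1sum test.txt
$ mv test.txt renamed.txt
$ sha1sum renamed.txt    ## Same checksum: only the name changed.
@end example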
 
 It is always good to keep this short checksum string with your project's 
scripts and validate your input data before using them.
@@ -4895,10 +4895,10 @@ fi
 @noindent
 Now that we know you have the same data that we wrote this tutorial with, 
let's continue.
 The SDSS server keeps the files in a Bzip2 compressed file format (with a 
@code{.bz2} suffix).
-So we'll first decompress it with the following command to use it as a normal 
FITS file.
+So we will first decompress it with the following command to use it as a 
normal FITS file.
 By convention, compression programs delete the original file (the compressed 
file when uncompressing, or the uncompressed file when compressing).
 To keep the original file, you can use the @option{--keep} or @option{-k} 
option which is available in most compression programs for this job.
-Here, we don't need the compressed file any more, so we'll just let 
@command{bunzip} delete it for us and keep the directory clean.
+Here, we do not need the compressed file any more, so we will just let 
@command{bunzip} delete it for us and keep the directory clean.
 
 @example
 $ bunzip2 r.fits.bz2
@@ -4934,7 +4934,7 @@ Generally, any time your target is much larger than the 
tile size and the signal
 Therefore, when there are large objects in the dataset, @strong{the best 
place} to check the accuracy of your detection is the estimated Sky image.
 
 When dominated by the background, noise has a symmetric distribution.
-However, signal is not symmetric (we don't have negative signal).
+However, signal is not symmetric (we do not have negative signal).
 Therefore when non-constant@footnote{by constant, we mean that it has a single 
value in the region we are measuring.} signal is present in a noisy dataset, 
the distribution will be positively skewed.
 For a demonstration, see Figure 1 of @url{https://arxiv.org/abs/1505.01664, 
Akhlaghi and Ichikawa [2015]}.
 This skewness is a good measure of how much faint signal we have in the 
distribution.
@@ -4942,7 +4942,7 @@ The skewness can be accurately measured by the difference 
in the mean and median
 This important concept will be discussed more extensively in the next section 
(@ref{Skewness caused by signal and its measurement}).
 
 However, skewness is only a proxy for signal when the signal has structure 
(varies per pixel).
-Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the constant signal's effect is just to shift the symmetric center 
of the noise distribution to the positive and there won't be any skewness 
(major difference between the mean and median).
+Therefore, when it is approximately constant over a whole tile, or sub-set of 
the image, the constant signal's effect is just to shift the symmetric center 
of the noise distribution to the positive and there will not be any skewness 
(major difference between the mean and median).
 This positive@footnote{In processed images, where the Sky value can be 
over-estimated, this constant shift can be negative.} shift that preserves the 
symmetric distribution is the Sky value.
 When there is a gradient over the dataset, different tiles will have different 
constant shifts/Sky-values, for example see Figure 11 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
@@ -4964,9 +4964,9 @@ When you call any of NoiseChisel's @option{--check*} 
options, by default, it wil
 This allows you to focus on the problem you wanted to check as soon as 
possible (you can disable this feature with the @option{--continueaftercheck} 
option).
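
For example, a sketch of such a call on this image would be:

@example
$ astnoisechisel r.fits -h0 --checkqthresh --continueaftercheck
@end example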
 
 To optimize the threshold-related settings for this image, let's play with 
this quantile threshold check image a little.
-Don't forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
+Do not forget that ``@emph{Good statistical analysis is not a purely routine 
matter, and generally calls for more than one pass through the computer}'' 
(Anscombe 1973, see @ref{Science and its tools}).
 A good scientist must have a good understanding of her tools to make a 
meaningful analysis.
-So don't hesitate in playing with the default configuration and reviewing the 
manual when you have a new dataset (from a new instrument) in front of you.
+So do not hesitate to play with the default configuration and to review the 
manual when you have a new dataset (from a new instrument) in front of you.
 Robust data analysis is an art; therefore, a good scientist must first be a 
good artist.
 So let's open the check image as a multi-extension cube:
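
@example
## A sketch of the viewing command (the same ds9 call is used on
## this file again below, after increasing the tile size):
$ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
@end example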
 
@@ -4998,9 +4998,9 @@ Let's get back to the problem of optimizing the result.
 You have two strategies for detecting the outskirts of the merging galaxies:
 1) Increase the tile size to get more accurate measurements of skewness.
 2) Strengthen the outlier rejection parameters to discard more of the tiles 
with signal.
-Fortunately in this image we have a sufficiently large region on the right of 
the image that the galaxy doesn't extend to.
+Fortunately in this image we have a sufficiently large region on the right 
side of the image that the galaxy does not extend to.
 So we can use the more robust first solution.
-In situations where this doesn't happen (for example if the field of view in 
this image was shifted to the left to have more of M51 and less sky) you are 
limited to a combination of the two solutions or just to the second solution.
+In situations where this does not happen (for example if the field of view in 
this image was shifted to the left to have more of M51 and less sky) you are 
limited to a combination of the two solutions or just to the second solution.
 
 @cartouche
 @noindent
@@ -5017,7 +5017,7 @@ $ astnoisechisel r.fits -h0 --tilesize=100,100 
--checkqthresh
 $ ds9 -mecube r_qthresh.fits -zscale -cmap sls -zoom to fit
 @end example
 
-You can clearly see the effect of this increased tile size: the tiles are much 
larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that all the 
tiles are nicely grouped on the right side of the image (the farthest from M51, 
where we don't see a gradient in @code{QTHRESH_ERODE}).
+You can clearly see the effect of this increased tile size: the tiles are much 
larger and when you look into @code{VALUE1_NO_OUTLIER}, you see that all the 
tiles are nicely grouped on the right side of the image (the farthest from M51, 
where we do not see a gradient in @code{QTHRESH_ERODE}).
 Things look good now, so let's remove @option{--checkqthresh} and let 
NoiseChisel proceed with its detection.
 
 @example
@@ -5026,7 +5026,7 @@ $ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to 
fit
 @end example
 
 The detected pixels of the @code{DETECTIONS} extension have expanded a little, 
but not as much.
-Also, the gradient in the @code{SKY} image is almost fully removed (and 
doesn't fall over M51 anymore).
+Also, the gradient in the @code{SKY} image is almost fully removed (and does 
not fall over M51 anymore).
 However, on the bottom-right of the M51 detection, we see many holes gradually 
increasing in size.
 This hints that there is still signal out there.
 Let's check the next series of detection steps by adding the 
@code{--checkdetection} option this time:
@@ -5040,7 +5040,7 @@ $ ds9 -mecube r_detcheck.fits -zscale -cmap sls -zoom to 
fit
 The output now has 16 extensions, showing every step that is taken by 
NoiseChisel.
 The first and second (@code{INPUT} and @code{CONVOLVED}) are clear from their 
names.
 The third (@code{THRESHOLDED}) is the thresholded image after finding the 
quantile threshold (last extension of the output of @code{--checkqthresh}).
-The fourth HDU (@code{ERODED}) is new: its the name-stake of NoiseChisel, or 
eroding pixels that are above the threshold.
+The fourth HDU (@code{ERODED}) is new: it is the namesake of NoiseChisel, or 
eroding pixels that are above the threshold.
 By erosion, we mean that all pixels with a value of @code{1} (above the 
threshold) that are touching a pixel with a value of @code{0} (below the 
threshold) will be flipped to zero (or ``carved'' out)@footnote{Pixels with a 
value of @code{2} are very high signal-to-noise pixels, they are not eroded, to 
preserve sharp and bright sources.}.
 You can see its effect directly by going back and forth between the 
@code{THRESHOLDED} and @code{ERODED} extensions.
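
As an aside, you can experiment with erosion directly on any binary image 
with Arithmetic's @code{erode} operator (a sketch; @file{binary.fits} is a 
hypothetical 0/1-valued image and the @code{2} is the connectivity):

@example
$ astarithmetic binary.fits 2 erode --output=eroded.fits
@end example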
 
@@ -5060,7 +5060,7 @@ $ astnoisechisel r.fits -h0 --tilesize=100,100 -P | grep 
erode
 We see that the value of @code{erode} is @code{2}.
 The default NoiseChisel parameters are primarily targeted to processed images 
(where there is correlated noise due to all the processing that has gone into 
the warping and stacking of raw images, see @ref{NoiseChisel optimization for 
detection}).
 In those scenarios 2 erosions are commonly necessary.
-But here, we have a single-exposure image where there is no correlated noise 
(the pixels aren't mixed).
+But here, we have a single-exposure image where there is no correlated noise 
(the pixels are not mixed).
 So let's see how things change with only one erosion:
 
 @example
@@ -5095,7 +5095,7 @@ Its value is the ultimate limit of the growth in units of 
quantile (between 0 an
 Therefore @option{--detgrowquant=1} means no growth and 
@option{--detgrowquant=0.5} means an ultimate limit of the Sky level (which is 
usually too much and will cover the whole image!).
 See Figure 2 of arXiv:@url{https://arxiv.org/pdf/1909.11230.pdf,1909.11230} 
for more on this option.
 Try running the previous command with various values (from 0.6 to higher 
values) to see this option's effect on this dataset.
-For this particularly huge galaxy (with signal that extends very gradually 
into the noise), we'll set it to @option{0.75}:
+For this particularly huge galaxy (with signal that extends very gradually 
into the noise), we will set it to @option{0.75}:
 
 @example
 $ astnoisechisel r.fits -h0 --tilesize=100,100 --erode=1 \
@@ -5104,7 +5104,7 @@ $ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to 
fit
 @end example
 
 Beyond this level (smaller @option{--detgrowquant} values), you see many of 
the smaller background galaxies (towards the right side of the image) starting 
to create thin spider-leg-like features, showing that we are following 
correlated noise too far.
-Please try it for your self by changing it to @code{0.6} for example.
+Please try it for yourself by changing it to @code{0.6} for example.
 
 When you look at the @code{DETECTIONS} extension of the command shown above, 
you see the wings of the galaxy being detected much farther out, but you also 
see many holes which are clearly just caused by noise.
 After growing the objects, NoiseChisel also allows you to fill such holes when 
they are smaller than a certain size through the @option{--detgrowmaxholesize} 
option.
@@ -5116,24 +5116,24 @@ $ astnoisechisel r.fits -h0 --tilesize=100,100 
--erode=1 \
 $ ds9 -mecube r_detected.fits -zscale -cmap sls -zoom to fit
 @end example
 
-When looking at the raw input image (which is very ``shallow'': less than a 
minute exposure!), you don't see anything so far out of the galaxy.
+When looking at the raw input image (which is very ``shallow'': less than a 
minute exposure!), you do not see anything so far out of the galaxy.
 You might just think to yourself that ``this is all noise, I have just dug too 
deep and I'm following systematics''!
 If you feel like this, have a look at the deep images of this system in 
@url{https://arxiv.org/abs/1501.04599, Watkins et al. [2015]}, or a 12 hour 
deep image of this system (with a 12-inch telescope):
 @url{https://i.redd.it/jfqgpqg0hfk11.jpg}@footnote{The image is taken from 
this Reddit discussion: 
@url{https://www.reddit.com/r/Astronomy/comments/9d6x0q/12_hours_of_exposure_on_the_whirlpool_galaxy/}}.
-In these deeper images you clearly see how the outer edges of the M51 group 
follow this exact structure, below in @ref{Achieved surface brightness level}, 
we'll measure the exact level.
+In these deeper images you clearly see how the outer edges of the M51 group 
follow this exact structure; below, in @ref{Achieved surface brightness level}, 
we will measure the exact level.
 
 As the gradient in the @code{SKY} extension shows, and the deep images cited 
above confirm, the galaxy's signal extends even beyond this.
 But this is already far deeper than what most (if not all) other tools can 
detect.
-Therefore, we'll stop configuring NoiseChisel at this point in the tutorial 
and let you play with the other options a little more, while reading more about 
it in the papers (@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}) and 
@ref{NoiseChisel}.
+Therefore, we will stop configuring NoiseChisel at this point in the tutorial 
and let you play with the other options a little more, while reading more about 
it in the papers (@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]} and @url{https://arxiv.org/abs/1909.11230, Akhlaghi [2019]}) and 
@ref{NoiseChisel}.
 When you do find a better configuration feel free to contact us for feedback.
-Don't forget that good data analysis is an art, so like a sculptor, master 
your chisel for a good result.
+Do not forget that good data analysis is an art, so like a sculptor, master 
your chisel for a good result.
 
 To avoid typing all these options every time you run NoiseChisel on this 
image, you can use Gnuastro's configuration files, see @ref{Configuration 
files}.
 For an applied example of setting/using them, see @ref{Option management and 
configuration files}.
 
 @cartouche
 @noindent
-@strong{This NoiseChisel configuration is NOT GENERIC:} Don't use the 
configuration derived above, on another instrument's image @emph{blindly}.
+@strong{This NoiseChisel configuration is NOT GENERIC:} Do not use the 
configuration derived above on another instrument's image @emph{blindly}.
 If you are unsure, just use the default values.
 As you saw above, the reason we chose this particular configuration for 
NoiseChisel to detect the wings of the M51 group was strongly influenced by the 
noise properties of this particular image.
 Remember @ref{NoiseChisel optimization for detection}, where we looked into 
the very deep XDF image which had strong correlated noise?
@@ -5145,7 +5145,7 @@ But for images from other instruments, please follow a 
similar logic to what was
 @cartouche
 @noindent
 @strong{Smart NoiseChisel:} As you saw during this section, there is a clear 
logic behind the optimal parameter value for each dataset.
-Therefore, we plan to capabilities to (optionally) automate some of the 
choices made here based on the actual dataset, please join us in doing this if 
you are interested.
+Therefore, we plan to add capabilities to (optionally) automate some of the 
choices made here based on the actual dataset, please join us in doing this if 
you are interested.
 However, given the many problems in existing ``smart'' solutions, such 
automatic changing of the configuration may cause more problems than it solves.
 So even when they are implemented, we would strongly recommend quality checks 
for a robust analysis.
 @end cartouche
@@ -5290,7 +5290,7 @@ Histogram:
 The improvement is obvious: the ASCII histogram better shows the pixel values 
near the noise level.
 We can now compare with the distribution of @file{det-masked.fits} that we 
found earlier.
 The ASCII histogram of @file{det-masked.fits} was approximately symmetric, 
while this one is asymmetric in this range, especially in the outer (to the 
right, or positive) direction.
-The heavier right-side tail is a clear visual demonstration of skewness that 
is cased by the signal in the un-masked image.
+The heavier right-side tail is a clear visual demonstration of skewness that 
is caused by the signal in the un-masked image.
 
 Having visually confirmed the skewness, let's quantify it with Pearson's first 
skewness coefficient.
 Like before, we can simply use Gnuastro's Statistics and AWK for the 
measurement and calculation:
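
@example
## A sketch of the idea (the actual call whose numbers are discussed
## below also restricts the pixel value range); AWK divides the
## mean-median difference by the standard deviation:
$ aststatistics r_detected.fits --mean --median --std \
                | awk '@{print ($1-$2)/$3@}'
@end example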
@@ -5304,15 +5304,15 @@ $ aststatistics r_detected.fits --mean --median --std \
 The difference between the mean and median is now approximately 
@mymath{0.12\sigma}.
 This is larger than the skewness of the masked image (which was approximately 
@mymath{0.08\sigma}).
 At a glance (only looking at the numbers), it seems that there is not much 
difference between the two distributions.
-However, visually looking at the non-masked image, or the ASCII histogram, you 
would expect the quantified skewness to be much larger than that of the masked 
image, but that hasn't happened!
+However, visually looking at the non-masked image, or the ASCII histogram, you 
would expect the quantified skewness to be much larger than that of the masked 
image, but that has not happened!
 Why is that?
 
-The reason is that the presence of signal doesn't only shift the mean and 
median, it @emph{also} increases the standard deviation!
-To see this for yourself, compare the standard deviation of 
@file{det-masked.fits} (which was approximately @mymath{0.025}) 
@file{r_detected.fits} (without @option{--lessthan}; which was approximately 
@mymath{0.699}).
+The reason is that the presence of signal does not only shift the mean and 
median, it @emph{also} increases the standard deviation!
+To see this for yourself, compare the standard deviation of 
@file{det-masked.fits} (which was approximately @mymath{0.025}) to 
@file{r_detected.fits} (without @option{--lessthan}; which was approximately 
@mymath{0.699}).
 The latter is almost 28 times larger!
 
 This happens because the standard deviation is defined only in a symmetric 
(and Gaussian) distribution.
-In a non-Gaussian distribution, the standard deviation is poorly defined and 
isn't a good measure of ``width''.
+In a non-Gaussian distribution, the standard deviation is poorly defined and 
is not a good measure of ``width''.
 Since Pearson's first skewness coefficient is defined in units of the standard 
deviation, this very large increase in the standard deviation has hidden the 
much increased distance between the mean and median after adding signal.
 
 @cindex Quantile
@@ -5352,9 +5352,9 @@ In case you would like to know more about the usage of 
the quantile of the mean
 @subsection Image surface brightness limit
 @cindex Surface brightness limit
 @cindex Limit, surface brightness
-When your science is related to extended emission (like the example here) and 
you are presenting your results in a scientific conference, usually the first 
thing that someone will ask (if you don't explicitly say it!), is the dataset's 
@emph{surface brightness limit} (a standard measure of the noise level), and 
your target's surface brightness (a measure of the signal, either in the center 
or outskirts, depending on context).
+When your science is related to extended emission (like the example here) and 
you are presenting your results in a scientific conference, usually the first 
thing that someone will ask (if you do not explicitly say it!), is the 
dataset's @emph{surface brightness limit} (a standard measure of the noise 
level), and your target's surface brightness (a measure of the signal, either 
in the center or outskirts, depending on context).
 For more on the basics of these important concepts please see @ref{Quantifying 
measurement limits}).
-So in this section of the tutorial, we'll measure these values for this image 
and this target.
+So in this section of the tutorial, we will measure these values for this 
image and this target.
 
 @noindent
 Before measuring the surface brightness limit, let's see how reliable our 
detection was.
@@ -5368,7 +5368,7 @@ $ aststatistics det-masked.fits --quantofmean
 @noindent
 This shows that the mean is indeed very close to the median, although just 
about 1 quantile larger.
 As we saw in @ref{NoiseChisel optimization}, a very small residual signal 
still remains in the undetected regions and this very small difference is a 
quantitative measure of that undetected signal.
-It was up to you as an exercise to improve it, so we'll continue with this 
dataset.
+It was up to you as an exercise to improve it, so we will continue with this 
dataset.
 
 The surface brightness limit of the image can be measured from the masked 
image and the equation in @ref{Quantifying measurement limits}.
 Let's do it for a @mymath{3\sigma} surface brightness limit over an area of 
@mymath{25 \rm{arcsec}^2}:
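
@example
## A sketch using the equation of that section, with the SDSS zero
## point (22.5, discussed below) and the nominal SDSS pixel scale
## (0.396 arcsec/pixel); n=3 sigma over A=25 arcsec^2:
$ std=$(aststatistics det-masked.fits --std)
$ echo $std | awk '@{zp=22.5; p=0.396; n=3; A=25;
                    print zp - 2.5*log(n*$1/(p*sqrt(A)))/log(10)@}'
@end example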
@@ -5398,7 +5398,7 @@ $ astfits r_detected.fits --hdu=SKY_STD | grep 'M..STD'
 @end example
 
 The @code{MEDSTD} value is very similar to the standard deviation derived 
above, so we can safely use it instead of having to mask and run Statistics.
-In fact, MakeCatalog also uses this keyword and will report the dataset's 
@mymath{n\sigma} surface brightness limit as keywords in the output (not as 
measurement columns, since its related to the noise, not labeled signal):
+In fact, MakeCatalog also uses this keyword and will report the dataset's 
@mymath{n\sigma} surface brightness limit as keywords in the output (not as 
measurement columns, since it is related to the noise, not labeled signal):
 
 @example
 $ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
@@ -5409,12 +5409,12 @@ $ astmkcatalog r_detected.fits -hDETECTIONS 
--output=sbl.fits \
 Before looking into the measured surface brightness limits, let's review some 
important points about this call to MakeCatalog:
 @itemize
 @item
-We are only concerned with the noise (not the signal), so we don't ask for any 
further measurements, because they can un-necessarily slow it down.
-However, MakeCatalog requires at least one column, so we'll only ask for the 
@option{--ids} column (which doesn't need any measurement!).
+We are only concerned with the noise (not the signal), so we do not ask for 
any further measurements, because they can unnecessarily slow it down.
+However, MakeCatalog requires at least one column, so we will only ask for the 
@option{--ids} column (which does not need any measurement!).
 The output catalog will therefore have a single row and a single column, with 
1 as its value@footnote{Recall that NoiseChisel's output is a binary image: 
0-valued pixels are noise and 1-valued pixels are signal.
-NoiseChisel doesn't identify sub-structure over the signal, this is the job of 
Segment, see @ref{Extract clumps and objects}.}.
+NoiseChisel does not identify sub-structure over the signal, this is the job 
of Segment, see @ref{Extract clumps and objects}.}.
 @item
-If we don't ask for any noise-related column (for example the signal-to-noise 
ratio column with @option{--sn}, among other noise-related columns), 
MakeCatalog is not going to read the noise standard deviation image (again, to 
speed up its operation when it is redundant).
+If we do not ask for any noise-related column (for example the signal-to-noise 
ratio column with @option{--sn}, among other noise-related columns), 
MakeCatalog is not going to read the noise standard deviation image (again, to 
speed up its operation when it is redundant).
 We are thus using the @option{--forcereadstd} option (short for ``force read 
standard deviation image'') here so it is ready for the surface brightness 
limit measurements that are written as keywords.
 @end itemize
 
@@ -5441,7 +5441,7 @@ The surface brightness limiting magnitude within a pixel 
(@code{SBLNSIG}) and wi
 @cindex Nanomaggy
 @cindex Zero point magnitude
 You will notice that the two surface brightness limiting magnitudes above have 
values around 3 and 4 (which is not correct!).
-This is because we haven't given a zero point magnitude to MakeCatalog, so it 
uses the default value of @code{0}.
+This is because we have not given a zero point magnitude to MakeCatalog, so it 
uses the default value of @code{0}.
 SDSS image pixel values are calibrated in units of ``nanomaggy'' which are 
defined to have a zero point magnitude of 22.5@footnote{From 
@url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
 So with the first command below we give the zero point value, and with the 
second we can see the surface brightness limiting magnitudes with the correct 
values (around 25 and 26):
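
@example
## A sketch mirroring the earlier MakeCatalog call, now with the
## SDSS zero point; the second command prints the limit keywords
## (assuming they share the 'SBL' prefix mentioned above):
$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
               --ids --forcereadstd --zeropoint=22.5
$ astfits sbl.fits -h1 | grep SBL
@end example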
 
@@ -5514,7 +5514,7 @@ astmkcatalog lab.fits -h1 --zeropoint=$zeropoint 
-osbl.fits \
 @end example
 
 The @file{sbl.fits} catalog now contains the upper-limit surface brightness 
for a circle with an area of 25 arcsec@mymath{^2}.
-You can check the value with the command below, but the great thing is that 
now you have both the surface brightness limiting magnitude in the headers 
discussed above, and the upper-limit surface brightness within the table.
+You can check the value with the command below, but the great thing is that 
now you have both the surface brightness limiting magnitude in the headers 
discussed above and the upper-limit surface brightness within the table.
 You can also add more profiles with different shapes and sizes if necessary.
 Of course, you can also use @option{--upperlimitsb} on your actual science 
objects and clumps to get an object-specific or clump-specific value.
 
@@ -5532,7 +5532,7 @@ Please try exactly the same MakeCatalog command above a 
few times to see how it
 
 This is because of the @emph{random} factor in the upper-limit measurements: 
every time you run it, different random points will be checked, resulting in a 
slightly different distribution.
 You can decrease the random scatter by increasing the number of random checks 
(for example setting @option{--upnum=100000}, compared to 1000 in the command 
above).
-But this will be slower and the results won't be exactly reproducible.
+But this will be slower and the results will not be exactly reproducible.
 The only way to ensure you get an identical result later is to fix the random 
number generator function and seed like the command below@footnote{You can use 
any integer for the seed. One recommendation is to run MakeCatalog without 
@option{--envseed} once and use the randomly generated seed that is printed on 
the terminal.}.
 This is a very important point regarding any statistical process involving 
random numbers, please see @ref{Generating random numbers}.
 
@@ -5577,7 +5577,7 @@ In @ref{NoiseChisel optimization} we customized 
NoiseChisel for a single-exposur
 In this section, let's do some measurements on the outer-most edges of the M51 
group to see how they relate to the noise measurements found in the previous 
section.
 
 @cindex Opening
-For this measurement, we'll need to estimate the average flux on the outer 
edges of the detection.
+For this measurement, we will need to estimate the average flux on the outer 
edges of the detection.
 Fortunately all this can be done with a few simple commands using 
@ref{Arithmetic} and @ref{MakeCatalog}.
 First, let's separate each detected region, or give a unique label/counter to 
all the connected pixels of NoiseChisel's detection map with the command below.
 Recall that with the @code{set-} operator, the popped operand will be given a 
name (@code{det} in this case) for easy usage later.
@@ -5590,7 +5590,7 @@ $ astarithmetic r_detected.fits -hDETECTIONS set-det \
 You can find the label of the main galaxy visually (by opening the image and 
hovering your mouse over the M51 group's label).
 But to have a little more fun, let's do this automatically (which is necessary 
in a general scenario).
 The M51 group detection is by far the largest detection in this image, this 
allows us to find its ID/label easily.
-We'll first run MakeCatalog to find the area of all the labels, then we'll use 
Table to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
+We will first run MakeCatalog to find the area of all the labels, then we will 
use Table to find the ID of the largest object and keep it as a shell variable 
(@code{id}):
 
 @example
 # Run MakeCatalog to find the area of each label.
@@ -5599,7 +5599,7 @@ $ astmkcatalog labeled.fits --ids --geoarea -h1 -ocat.fits
 ## Sort the table by the area column.
 $ asttable cat.fits --sort=AREA_FULL
 
-## The largest object, is the last one, so we'll use '--tail'.
+## The largest object is the last one, so we will use '--tail'.
 $ asttable cat.fits --sort=AREA_FULL --tail=1
 
 ## We only want the ID, so let's only ask for that column:
@@ -5620,9 +5620,9 @@ $ astarithmetic labeled.fits $id eq -oonly-m51.fits
 @end example
 
 Open the image and have a look.
-To separate the outer edges of the detections, we'll need to ``erode'' the M51 
group detection.
-So in the same Arithmetic command as above, we'll erode three times (to have 
more pixels and thus less scatter), using a maximum connectivity of 2 
(8-connected neighbors).
-We'll then save the output in @file{eroded.fits}.
+To separate the outer edges of the detections, we will need to ``erode'' the 
M51 group detection.
+So in the same Arithmetic command as above, we will erode three times (to have 
more pixels and thus less scatter), using a maximum connectivity of 2 
(8-connected neighbors).
+We will then save the output in @file{eroded.fits}.
 
 @example
 $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
@@ -5631,8 +5631,8 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 
erode \
 
 @noindent
 In @file{labeled.fits}, we can now set all the 1-valued pixels of 
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the 
previous command.
-We'll need the pixels of the M51 group in @code{labeled.fits} two times: once 
to do the erosion, another time to find the outer pixel layer.
-To do this (and be efficient and more readable) we'll use the @code{set-i} 
operator (to give this image the name `@code{i}').
+We will need the pixels of the M51 group in @code{labeled.fits} two times: 
once to do the erosion, another time to find the outer pixel layer.
+To do this (and be efficient and more readable) we will use the @code{set-i} 
operator (to give this image the name `@code{i}').
 In this way we can use it any number of times afterwards, while only reading 
it from disk and finding M51's pixels once.
 
 @example
@@ -5648,14 +5648,14 @@ You can use @file{edge.fits} to mark (set to blank) 
this boundary on the input i
 $ astarithmetic r.fits -h0 edge.fits nan where -oedge-masked.fits
 @end example
 
-To quantify how deep we have detected the low-surface brightness regions (in 
units of signal to-noise ratio), we'll use the command below.
+To quantify how deep we have detected the low-surface brightness regions (in 
units of signal-to-noise ratio), we will use the command below.
 In short, it just divides all the non-zero pixels of @file{edge.fits} in the 
Sky-subtracted input (first extension of NoiseChisel's output) by the standard 
deviation of the same pixel.
 This will give us a signal-to-noise ratio image.
 The mean value of this image shows the level of surface brightness that we 
have achieved.
 You can also break the command below into multiple calls to Arithmetic and 
create temporary files to understand it better.
 However, if you have a look at @ref{Reverse polish notation} and 
@ref{Arithmetic operators}, you should be able to easily understand what your 
computer does when you run this command@footnote{@file{edge.fits} (extension 
@code{1}) is a binary (0 or 1 valued) image.
 Applying the @code{not} operator on it, just flips all its pixels (from 
@code{0} to @code{1} and vice-versa).
-Using the @code{where} operator, we are then setting all the newly 1-valued 
pixels (pixels that aren't on the edge) to NaN/blank in the sky-subtracted 
input image (@file{r_detected.fits}, extension @code{INPUT-NO-SKY}, which we 
call @code{skysub}).
+Using the @code{where} operator, we are then setting all the newly 1-valued 
pixels (pixels that are not on the edge) to NaN/blank in the sky-subtracted 
input image (@file{r_detected.fits}, extension @code{INPUT-NO-SKY}, which we 
call @code{skysub}).
 We are then dividing all the non-blank pixels (only those on the edge) by the 
sky standard deviation (@file{r_detected.fits}, extension @code{SKY_STD}, which 
we called @code{skystd}).
 This gives the signal-to-noise ratio (S/N) for each of the pixels on the 
boundary.
 Finally, with the @code{meanvalue} operator, we are taking the mean value of 
all the non-blank pixels and reporting that as a single number.}.
@@ -5671,7 +5671,7 @@ $ astarithmetic edge.fits -h1                  set-edge \
 We have thus detected the wings of the M51 group down to roughly 1/3rd of the 
noise level in this image, which is a very good achievement!
 But the per-pixel S/N is a relative measurement.
 Let's also measure the depth of our detection in absolute surface brightness 
units; or magnitudes per square arc-seconds (see @ref{Brightness flux 
magnitude}).
-We'll also ask for the S/N and magnitude of the full edge we have defined.
+We will also ask for the S/N and magnitude of the full edge we have defined.
 Fortunately doing this is very easy with Gnuastro's MakeCatalog:
 
 @example
@@ -5685,7 +5685,7 @@ $ asttable edge_cat.fits
 We have thus reached an outer surface brightness of @mymath{25.70} 
magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this 
single exposure SDSS image!
 This is very similar to the surface brightness limit measured in @ref{Image 
surface brightness limit} (which is a big achievement!).
 But another point in the result above is very interesting: the total S/N of 
the edge is @mymath{55.24} with a total edge magnitude@footnote{You can run 
MakeCatalog on @file{only-m51.fits} instead of @file{edge.fits} to see the full 
magnitude of the M51 group in this image.} of 15.90!!!
-This very large for such a faint signal (recall that the mean S/N per pixel 
was 0.32) and shows a very important point in the study of galaxies:
+This is very large for such a faint signal (recall that the mean S/N per pixel 
was 0.32) and shows a very important point in the study of galaxies:
 While the per-pixel signal in their outer edges may be very faint (and 
invisible to the eye in noise), a lot of signal hides deeply buried in the 
noise.
 
 In interpreting this value, you should just keep in mind that NoiseChisel 
works based on the contiguity of signal in the pixels.
@@ -5696,7 +5696,7 @@ If the M51 group in this image was larger/smaller than 
this (the field of view w
 
 @node Extract clumps and objects,  , Achieved surface brightness level, 
Detecting large extended targets
 @subsection Extract clumps and objects (Segmentation)
-In @ref{NoiseChisel optimization} we found a good detection map over the 
image, so pixels harboring signal have been differentiated from those that 
don't.
+In @ref{NoiseChisel optimization} we found a good detection map over the 
image, so pixels harboring signal have been differentiated from those that do 
not.
 For noise-related measurements like the surface brightness limit, this is fine.
 However, after finding the pixels with signal, you are most likely interested 
in knowing the sub-structure within them.
 For example, how many star-forming regions (those bright dots along the spiral 
arms) of M51 are within this image?
@@ -5757,9 +5757,9 @@ $ chmod +x check-clumps.sh
 
 @cindex AWK
 @cindex GNU AWK
-This script doesn't expect the @file{.fits} suffix of the input's filename as 
the first argument.
+This script does not expect the @file{.fits} suffix of the input's filename as 
the first argument.
 This is because the script produces intermediate files (a catalog and DS9 
region file, which are later deleted).
-However, we don't want multiple instances of the script (on different files in 
the same directory) to collide (read/write to the same intermediate files).
+However, we do not want multiple instances of the script (on different files 
in the same directory) to collide (read/write to the same intermediate files).
 Therefore, we have used suffixes added to the input's name to identify the 
intermediate files.
 Note how all the @code{$1} instances in the commands (not within the AWK 
command@footnote{In AWK, @code{$1} refers to the first column, while in the 
shell script, it refers to the first argument.}) are followed by a suffix.
 If you want to keep the intermediate files, put a @code{#} at the start of the 
last line.
@@ -5782,8 +5782,8 @@ If you need your access to the command-line before 
closing DS9, add a @code{&} a
 @cindex Purity
 @cindex Completeness
 While DS9 is open, slide the dynamic range (values for black and white, or 
minimum/maximum values in different color schemes) and zoom into various 
regions of the M51 group to see if you are satisfied with the detected clumps.
-Don't forget that through the ``Cube'' window that is opened along with DS9, 
you can flip through the extensions and see the actual clumps also.
-The questions you should be asking your self are these: 1) Which real clumps 
(as you visually @emph{feel}) have been missed? In other words, is the 
@emph{completeness} good? 2) Are there any clumps which you @emph{feel} are 
false? In other words, is the @emph{purity} good?
+Do not forget that through the ``Cube'' window that is opened along with DS9, 
you can flip through the extensions and also see the actual clumps.
+The questions you should be asking yourself are these: 1) Which real clumps 
(as you visually @emph{feel}) have been missed? In other words, is the 
@emph{completeness} good? 2) Are there any clumps which you @emph{feel} are 
false? In other words, is the @emph{purity} good?
 
 Note that completeness and purity are not independent of each other; they are 
anti-correlated: the higher your purity, the lower your completeness, and 
vice-versa.
 You can see this by playing with the purity level using the @option{--snquant} 
option.
@@ -5797,7 +5797,7 @@ Do you have a good balance between completeness and 
purity? Also look out far in
 
 An easier way to inspect completeness (and only completeness) is to mask all 
the pixels detected as clumps and visually inspect the rest of the pixels.
 You can do this using Arithmetic in a command like below.
-For easy reading of the command, we'll define the shell variable @code{i} for 
the image name and save the output in @file{masked.fits}.
+For easy reading of the command, we will define the shell variable @code{i} 
for the image name and save the output in @file{masked.fits}.
 
 @example
 $ in="r_segmented.fits -hINPUT"
@@ -5813,14 +5813,14 @@ Using a larger kernel can also help in detecting the 
existing clumps to fainter
 You can make any kernel easily using the @option{--kernel} option in 
@ref{MakeProfiles}.
 But note that a larger kernel is also going to wash out many of the 
sharp/small clumps close to the center of M51 and also some smaller peaks on 
the wings.
 Please continue playing with Segment's configuration to obtain a more complete 
result (while keeping reasonable purity).
-We'll finish the discussion on finding true clumps at this point.
+We will finish the discussion on finding true clumps at this point.
 
 The properties of the clumps within M51, or of the background objects, can 
then easily be measured using @ref{MakeCatalog}.
-To measure the properties of the background objects (detected as clumps over 
the diffuse region), you shouldn't mask the diffuse region.
+To measure the properties of the background objects (detected as clumps over 
the diffuse region), you should not mask the diffuse region.
 When measuring clump properties with @ref{MakeCatalog} and using the 
@option{--clumpscat} option, the ambient flux (from the diffuse region) is 
calculated and subtracted.
 If the diffuse region is masked, its effect on the clump brightness cannot be 
calculated and subtracted.
 
-To keep this tutorial short, we'll stop here.
+To keep this tutorial short, we will stop here.
 See @ref{Segmentation and making a catalog} and @ref{Segment} for more on 
using Segment, producing catalogs with MakeCatalog and using those catalogs.
 
 
@@ -5853,7 +5853,7 @@ We will use an image of the M51 galaxy group in the r 
(SDSS) band of the Javalam
 For more information on J-PLUS and its unique features, visit 
@url{http://www.j-plus.es}.
 
 First, let's download the image from the J-PLUS web page using @code{wget}.
-But to have a generalize-able, and easy to read command, we'll define some 
base variables (in all-caps) first.
+But to have a generalizable and easy-to-read command, we will define some 
base variables (in all-caps) first.
 After the download is complete, open the image with SAO DS9 (or any other FITS 
viewer you prefer!) to have a feeling of the data (and of course, enjoy the 
beauty of M51 in such a wide field of view):
 
 @example
@@ -5898,7 +5898,7 @@ Therefore, before going onto selecting stars, let's 
detect all significant signa
 
 There is a problem, however: the saturated pixels of the bright stars are 
going to cause problems in the segmentation phase.
 To see this problem, let's make a @mymath{1000\times1000} crop around a bright 
star to speed up the test (and its solution).
-Afterwards we'll apply the solution to the whole image.
+Afterwards we will apply the solution to the whole image.
 
 @example
 $ astcrop flat/67510.fits --mode=wcs --widthinpix --width=1000 \
@@ -5924,7 +5924,7 @@ The saturation level is usually fixed for any survey or 
input data that you rece
 Let's make a smaller crop of @mymath{50\times50} pixels around the star with 
the first command below.
 With the next command, please look at the crop with DS9 to visually understand 
the problem.
 You will see the saturated pixels as the noisy red pixels in the center of the 
image.
-A non-saturated star will have a single pixel as the maximum and won't have a 
such a large area covered by a noisy constant value (find a few stars in the 
image and see for yourself).
+A non-saturated star will have a single pixel as the maximum and will not have such a large area covered by a noisy constant value (find a few stars in the image and see for yourself).
 Visual and qualitative inspection of the process is very important for 
understanding the solution.
 
 @example
@@ -5958,7 +5958,7 @@ If there was no saturation, the number of pixels should 
have decreased at increa
 But that is not the case here.
 Please try this experiment on a non-saturated (fainter) star to see what we 
mean.
 
-If you still haven't experimented on a non-saturated star, please stop reading 
this tutorial!
+If you still have not experimented on a non-saturated star, please stop 
reading this tutorial!
 Please open @file{flat/67510.fits} in DS9, select a fainter/smaller star and 
repeat the last three commands (with a different center).
 After you have confirmed the point above (visually, and with the histogram), 
please continue with the rest of this tutorial.
 
@@ -6007,7 +6007,7 @@ Histogram:
 
 The peak at the large end of the histogram has gone!
 But let's have a closer look at the values (the resolution of an ASCII 
histogram is limited!).
-To do this, we'll ask Statistics to save the histogram into a table with the 
@option{--histogram} option, then look at the last 20 rows:
+To do this, we will ask Statistics to save the histogram into a table with the 
@option{--histogram} option, then look at the last 20 rows:
 
 @example
 $ aststatistics sat-center.fits --greaterequal=100 --lessthan=2500 \
@@ -6053,7 +6053,7 @@ Zoom into each of the peaks you see.
 Besides the central very bright one that we were looking at closely until now, 
only one other star is saturated (its center is NaN, or Not-a-Number).
 Try to find it.
 
-But we aren't done yet!
+But we are not done yet!
Please zoom-in to that central bright star and have another look at the edges of the vertical ``bleeding'' saturated pixels; there are strong positive/negative values touching it (almost like ``waves'').
 These will also cause problems and have to be masked!
 So with a small addition to the previous command, let's dilate the saturated 
regions (with 2-connectivity, or 8-connected neighbors) four times and have 
another look:
@@ -6106,11 +6106,11 @@ $ ds9 -mecube sat-seg.fits -zoom to fit -scale limits 
-1 1
 @noindent
 See the @code{CLUMPS} extension.
 Do you see how the whole center of the star has indeed been identified as a 
single clump?
-We thus achieved our aim and didn't let the saturated pixels harm the 
identification of the center!
+We thus achieved our aim and did not let the saturated pixels harm the 
identification of the center!
 
If the issue were only clumps (as in normal deep image processing), this would be the end of Segment's special considerations.
 However, in the scenario here, with the very extended wings of the bright 
stars, it usually happens that background objects become ``clumps'' in the 
outskirts and will rip the bright star outskirts into separate ``objects''.
-In the next section (@ref{One object for the whole detection}), we'll describe 
how you can modify Segment to avoid this issue.
+In the next section (@ref{One object for the whole detection}), we will 
describe how you can modify Segment to avoid this issue.
 
 
 
@@ -6124,7 +6124,7 @@ In the next section (@ref{One object for the whole 
detection}), we'll describe h
 @node One object for the whole detection, Building outer part of PSF, 
Saturated pixels and Segment's clumps, Building the extended PSF
 @subsection One object for the whole detection
 
-In @ref{Saturated pixels and Segment's clumps}, we described how you can run 
Segment such that saturated pixels don't interfere with its clumps.
+In @ref{Saturated pixels and Segment's clumps}, we described how you can run 
Segment such that saturated pixels do not interfere with its clumps.
 However, due to the very extended wings of the PSF, the default definition of 
``objects'' should also be modified for the scenario here.
To better see the problem, let's now inspect the @code{OBJECTS} extension, focusing on those objects with a label between 50 and 150 (which include the main star):
 
@@ -6134,9 +6134,9 @@ $ astscript-fits-view sat-seg.fits -hOBJECTS 
--ds9scale="limits 50 150"
 
 We can see that the detection corresponding to the star has been broken into 
different objects.
 This is not a good object segmentation image for our scenario here.
-Since those objects in the outer wings of the bright star's detection harbor a 
lo of the extended PSF.
+This is because those objects in the outer wings of the bright star's detection harbor a lot of the extended PSF.
 We want to keep them with the same ``object'' label as the star (we only need 
to mask the ``clumps'' of the background sources).
-To do this, we'll make the following changes to Segment's options (see 
@ref{Segmentation options} for more on this options):
+To do this, we will make the following changes to Segment's options (see @ref{Segmentation options} for more on these options):
 @itemize
 @item
 Since we want the extended diffuse flux of the PSF to be taken as a single 
object, we want all the grown clumps to touch.
@@ -6144,7 +6144,7 @@ Therefore, it is necessary to decrease @option{--gthresh} 
to very low values, li
 Recall that its value is in units of the input standard deviation, so 
@option{--gthresh=-10} corresponds to @mymath{-10\sigma}.
 The default value is not for such extended sources that dominate all 
background sources.
 @item
-Since we want all connected grown clumps to be counted as a single object in 
any case, we'll set @option{--objbordersn=0} (its smallest possible value).
+Since we want all connected grown clumps to be counted as a single object in any case, we will set @option{--objbordersn=0} (its smallest possible value); see the sketch after this list.
 @end itemize
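
Combining the two options above, a rough sketch of the modified Segment call could look like this (the input file name is assumed, and the tutorial's actual command has more options):

@example
$ astsegment sat-masked.fits --gthresh=-10 --objbordersn=0
@end example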
 
 @noindent
@@ -6166,10 +6166,10 @@ See @ref{Detecting large extended targets} for more on 
optimizing NoiseChisel.
 Since the image is so large, we have increased @option{--interpnumngb} to get 
better outlier statistics on the tiles.
 The default value is primarily for small images, so this is usually the first 
thing you should do when running NoiseChisel on a real/large image.
 @item
-Since the image is not too deep (made from few exposures), it doesn't have 
strong correlated noise, so we'll decrease @option{--detgrowquant} and increase 
@option{--detgrowmaxholesize} to better extract signal.
+Since the image is not too deep (made from few exposures), it does not have 
strong correlated noise, so we will decrease @option{--detgrowquant} and 
increase @option{--detgrowmaxholesize} to better extract signal.
 @end itemize
 @noindent
-Furthermore, since both NoiseChisel and Segment need a convolved image, we'll 
do the convolution before and feed it to both (to save running time).
+Furthermore, since both NoiseChisel and Segment need a convolved image, we 
will do the convolution before and feed it to both (to save running time).
 But in the first command below, let's delete all the temporary files we made 
above.
 
 Since the image is large (+300 MB), to avoid wasting storage, any temporary 
file that is no longer necessary for later processing is deleted after it is 
used.
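
As a minimal sketch of the convolve-once idea (file and kernel names are assumed; the actual commands of this step follow):

@example
## Convolve once, then feed the same convolved image to both
## NoiseChisel and Segment through the '--convolved' option.
$ astconvolve 67510.fits --kernel=kernel.fits --output=conv.fits
$ astnoisechisel 67510.fits --convolved=conv.fits --output=nc.fits
$ astsegment nc.fits --convolved=conv.fits --output=seg.fits
@end example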
@@ -6204,13 +6204,13 @@ $ rm label/67510-fill-conv.fits
 $ astscript-fits-view label/67510-seg-raw.fits
 @end example
 
-We see that the saturated pixels haven't caused any problem and the central 
clumps/objects of bright stars are now a single clump/object.
+We see that the saturated pixels have not caused any problem and the central 
clumps/objects of bright stars are now a single clump/object.
 We can now proceed to estimating the outer PSF.
 But before that, let's make a ``standard'' segment output: one that can safely 
be fed into MakeCatalog for measurements and can contain all necessary outputs 
of this whole process in a single file (as multiple extensions).
 
 The main problem is again the saturated pixels: we interpolated them to be the 
maximum of their nearby pixels.
 But this will cause problems in any measurement that is done over those 
regions.
-To let MakeCatalog know that those pixels shouldn't be used, the first 
extension of the file given to MakeCatalog should have blank values on those 
pixels.
+To let MakeCatalog know that those pixels should not be used, the first 
extension of the file given to MakeCatalog should have blank values on those 
pixels.
 We will do this with the commands below:
 
 @example
@@ -6234,7 +6234,7 @@ $ rm label/67510-masked-sat.fits label/67510-nc.fits \
 @end example
 
 @noindent
-You can now simply run MakeCatalog on this image and be sure that saturated 
pixels won't affect the measurements.
+You can now simply run MakeCatalog on this image and be sure that saturated 
pixels will not affect the measurements.
 As one example, you can use MakeCatalog to find the clumps containing 
saturated pixels: recall that the @option{--area} column only calculates the 
area of non-blank pixels, while @option{--geoarea} calculates the area of the 
label (independent of their blank-ness in the values image):
 
 @example
@@ -6276,7 +6276,7 @@ In @ref{Preparing input for extended PSF}, we described 
how to create a Segment
 We are now ready to start building the extended PSF.
 
 First we will build the outer parts of the PSF, so we want the brightest stars.
-You will see we have several bright stars in this very large field of view, 
but we don't yet have a feeling how many they are, and at what magnitudes.
+You will see we have several bright stars in this very large field of view, but we do not yet have a feeling for how many there are, and at what magnitudes.
 So let's use Gnuastro's Query program to find the magnitudes of the brightest 
stars (those brighter than g-magnitude 10 in Gaia early data release 3, or 
eDR3).
 For more on Query, see @ref{Query}.
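
As a hedged sketch of such a query (the exact options of the tutorial's call may differ):

@example
## Ask Gaia eDR3 for the positions and g-magnitudes of stars in
## the magnitude range 6 to 10 that overlap with the input image.
$ astquery gaia --dataset=edr3 --overlapwith=flat/67510.fits \
           --column=ra,dec,phot_g_mean_mag \
           --range=phot_g_mean_mag,6,10
@end example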
 
@@ -6317,7 +6317,7 @@ $ astscript-ds9-region outer/67510-6-10.fits -cra,dec \
 @end example
 
 Now that the catalog of good stars is ready, it is time to construct the 
individual stamps from the catalog above.
-To do that, we'll use @file{astscript-psf-stamp}.
+To do that, we will use @file{astscript-psf-stamp}.
 One of the most important parameters for this script is the normalization 
radii @code{--normradii}.
 This parameter defines a ring for the flux normalization of each star stamp.
The normalization of the flux is necessary because each star has a different brightness; consequently, it is crucial that all the stamps have the same flux level in the same region.
@@ -6330,8 +6330,8 @@ To do that, let's use two useful parameters that will 
help us in the checking of
 @item
With @code{--tmpdir=checking-normradii} all temporary files, including the radial profiles, will be saved in that directory (instead of an internally-created name).
 @item
-With @code{--keeptmp} we won't remove the temporal files, so it is possible to 
have a look at them (by default the temporary directory gets deleted at the 
end).
-It is necessary to specify the @code{--normradii} even if we don't know yet 
the final values.
+With @code{--keeptmp} we will not remove the temporary files, so it is possible to have a look at them (by default the temporary directory gets deleted at the end); see the sketch after this list.
+It is necessary to specify the @code{--normradii} even if we do not yet know the final values.
 Otherwise the script will not generate the radial profile.
 @end itemize
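
A hypothetical first checking run might therefore look like the sketch below (the center coordinates, radii and file names are only placeholders):

@example
## Build one stamp, keeping the temporary files (including the
## radial profile) for inspection.
$ astscript-psf-stamp image.fits --mode=wcs --center=$ra,$dec \
           --normradii=5,10 --tmpdir=checking-normradii --keeptmp
@end example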
 
@@ -6371,7 +6371,7 @@ $ astscript-fits-view 
finding-normradii/cropped-masked*.fits
 @noindent
 If everything looks good in the image, let's open all the radial profiles and 
visually check those with the command below.
Note that @command{astscript-fits-view} calls the @command{topcat} graphical user interface (GUI) program to visually inspect (plot) tables.
-If you don't already have it, see @ref{TOPCAT}.
+If you do not already have it, see @ref{TOPCAT}.
 
 @example
 $ astscript-fits-view finding-normradii/rprofile*.fits
@@ -6416,7 +6416,7 @@ $ astarithmetic $imgs $numimgs 3 0.2 sigclip-mean -g1 \
 @noindent
 Did you notice the @option{--wcsfile=none} option above?
 With it, the stacked image no longer has any WCS information.
-This is natural, because the stacked image doesn't correspond to any specific 
region of the sky any more.
+This is natural, because the stacked image does not correspond to any specific 
region of the sky any more.
 
 Let's compare this stacked PSF with the images of the individual stars that 
were used to create it.
You can clearly see that the number of masked pixels is significantly decreased and the PSF is much ``cleaner''.
@@ -6426,7 +6426,7 @@ $ astscript-fits-view outer/stack.fits outer/stamps/*.fits
 @end example
 
 However, the saturation in the center still remains.
-Also, because we didn't have too many images, some regions still are very 
noisy.
+Also, because we did not have too many images, some regions are still very noisy.
 If we had more bright stars in our selected magnitude range, we could have 
filled those outer remaining patches.
 In a large survey like J-PLUS (that we are using here), you can simply look 
into other fields that were observed soon before/after the image ID 67510 that 
we used here (to have a similar PSF) and get more stars in those images to add 
to these.
 In fact, the J-PLUS DR2 image ID of the field above was intentionally 
preserved during the steps above to show how easy it is to use images from 
other fields and blend them all into the output PSF.
@@ -6444,7 +6444,7 @@ In fact, the J-PLUS DR2 image ID of the field above was 
intentionally preserved
 @subsection Inner part of the PSF
 
 In @ref{Building outer part of PSF}, we were able to create a stack of the 
outer-most behavior of the PSF in a J-PLUS survey image.
-But the central part that was affected by saturation and non-linearity is 
still remaining, and we still don't have a ``complete'' PSF!
+But the central part that was affected by saturation and non-linearity still remains, and we still do not have a ``complete'' PSF!
In this section, we will use the same steps as before to make stacks of more inner regions of the PSF, to ultimately unite them all into a single PSF in @ref{Uniting the different PSF components}.
 
 For the outer PSF, we selected stars in the magnitude range of 6 to 10.
@@ -6461,7 +6461,7 @@ $ astscript-ds9-region inner/67510-12-13.fits -cra,dec \
            --command="ds9 flat/67510.fits -zoom to fit -zscale"
 @end example
 
-We have 41 stars, but if you zoom into their centers, you will see that they 
don't have any major bleeding-vertical saturation any more.
+We have 41 stars, but if you zoom into their centers, you will see that they 
do not have any major bleeding-vertical saturation any more.
 Only the very central core of some of the stars is saturated.
 We can therefore use these stars to fill the strong bleeding footprints that 
were present in the outer stack of @file{outer/stack.fits}.
 Similar to before, let's build ready-to-stack crops of these stars.
@@ -6520,7 +6520,7 @@ $ astscript-fits-view outer/profile.fits 
inner/profile.fits
 
 From the visual inspection of the images and the radial profiles, it is clear 
that we have saturation in the center for the outer part.
 Note that the absolute flux values of the PSFs are meaningless since they 
depend on the normalization radii we used to obtain them.
-The uniting step consists in scaling up (or down) the inner part of the PSF to 
have the same flux at at the junction radius, and then, use that flux-scaled 
inner part to fill the center of the outer PSF.
+The uniting step consists of scaling up (or down) the inner part of the PSF to have the same flux at the junction radius, and then using that flux-scaled inner part to fill the center of the outer PSF.
To get a feeling of the process, let's first open the two radial profiles and find the factor manually:
 
 @enumerate
@@ -6538,7 +6538,7 @@ A new window will open with the plot of the first two 
columns: @code{RADIUS} on
 The rest of the steps are done in this window.
 @item
 In the bottom settings, within the left panel, click on the ``Axes'' item.
-This will allow customization of the plot axises.
+This will allow customization of the plot axes.
 @item
 In the bottom-right panel, click on the box in front of ``Y Log'' to make the 
vertical axis logarithmic-scaled.
 @item
@@ -6553,7 +6553,7 @@ On the bottom-right panel, in front of @code{Y:}, you 
will see @code{MEAN}.
 Click in the white-space after it, and type this: @code{*100}.
 This will display the @code{MEAN} column of the inner profile, after 
multiplying it by 100.
Afterwards, you will see that the inner profile (blue) matches the outer (red) more cleanly, especially at smaller radii.
-At larger radii, it doesn't drop like the red plot.
+At larger radii, it does not drop like the red plot.
 This is because of the extremely low signal-to-noise ratio at those regions in 
the fainter stars used to make this stack.
 @item
 Take your mouse cursor over the profile, in particular over the bump around a 
radius of 100 pixels.
@@ -6769,7 +6769,7 @@ This terrible noise-level is what you clearly see as the 
footprint of the PSF.
 
To confirm this, let's use the commands below to subtract the faintest star of the bright-stars catalog (note the use of @option{--tail} when finding the central position).
 You will notice that the scale factor (@mymath{\sim1.3}) is now smaller than 3.
-So when we multiply the PSF with this factor, the PSF's noise level is lower 
than our input image and we shouldn't see any footprint like before.
+So when we multiply the PSF with this factor, the PSF's noise level is lower 
than our input image and we should not see any footprint like before.
 Note also that we are using a larger zoom factor, because this star is smaller 
in the image.
 
 @example
@@ -6796,7 +6796,7 @@ $ astscript-fits-view label/67510-seg.fits 
single-star/subtracted.fits \
            --ds9center=$center --ds9mode=wcs --ds9extra="-zoom 10"
 @end example
 
-In a large survey like J-PLUS, its easy to use more and more bright stars from 
different pointings (ideally with similar FWHM and similar telescope 
properties@footnote{For example in J-PLUS, the baffle of the secondary mirror 
was adjusted in 2017 because it produced extra spikes in the PSF. So all images 
after that date have a PSF with 4 spikes (like this one), while those before it 
have many more spikes.}) to improve the S/N of the PSF.
+In a large survey like J-PLUS, it is easy to use more and more bright stars 
from different pointings (ideally with similar FWHM and similar telescope 
properties@footnote{For example in J-PLUS, the baffle of the secondary mirror 
was adjusted in 2017 because it produced extra spikes in the PSF. So all images 
after that date have a PSF with 4 spikes (like this one), while those before it 
have many more spikes.}) to improve the S/N of the PSF.
As explained before, we designed the output file names of this tutorial with @code{67510} (this image's pointing label in J-PLUS) where necessary, so you see how easy it is to add more pointings to use in the creation of the PSF.
 
Let's now consider more than one star.
@@ -6811,8 +6811,8 @@ So we should sort the catalog by brightness and come down 
from the brightest.
 We should only subtract stars where the scale factor is less than the S/N of 
the PSF (in relation to the data).
 @end itemize
 
-Since it can get a little complex, its easier to implement this step as a 
script (that is heavily commented for you to easily understand every step; 
especially if you put it in a good text editor with color-coding!).
-You will notice that script also creates a @file{.log} file, which shows which 
star was subtracted and which one wasn't (this is important, and will be used 
below!).
+Since it can get a little complex, it is easier to implement this step as a script (that is heavily commented for you to easily understand every step, especially if you put it in a good text editor with color-coding!).
+You will notice that the script also creates a @file{.log} file, which shows which star was subtracted and which one was not (this is important, and will be used below!).
 
 @example
 #!/bin/bash
@@ -6876,7 +6876,7 @@ asttable flat/67510-bright.fits -cra,dec --sort 
phot_g_mean_mag  \
         # Keep the status for this star.
         status=1
     else
-        # Let the user know this star didn't work, and keep the status
+        # Let the user know this star did not work, and keep the status
         # for this star.
         echo "$center: $scale is larger than $snlevel"
         status=0
@@ -6888,7 +6888,7 @@ done
 @end example
 
 Copy the contents above into a file called @file{subtract-psf-from-cat.sh} and 
run the following commands.
-Just note that in the script above, we assumed the output is written in the 
@file{subtracted/}, directory, so we'll first make that.
+Just note that in the script above, we assumed the output is written in the @file{subtracted/} directory, so we will first make that.
 
 @example
 $ mkdir subtracted
@@ -6899,10 +6899,10 @@ $ astscript-fits-view label/67510-seg.fits 
subtracted/67510.fits
 @end example
 
 Can you visually find the stars that have been subtracted?
-Its a little hard, isn't it?
+It is a little hard, is it not?
This shows that you have done a good job this time (the sky-noise is not significantly affected)!
 So let's subtract the actual image from the PSF-subtracted image to see the 
scattered light field of the subtracted stars.
-With the second command below we'll zoom into the brightest subtracted star, 
but of course feel free to zoom-out and inspect the others also.
+With the second command below we will zoom into the brightest subtracted star, 
but of course feel free to zoom-out and inspect the others also.
 
 @example
 $ astarithmetic label/67510-seg.fits subtracted/67510.fits - \
@@ -6928,12 +6928,12 @@ To solve such inconvenience, this script also has an 
option to not make the subt
To do that, it is necessary to run the script with the option @option{--modelonly}.
We encourage the reader to obtain such a scattered light field model.
In some scenarios, it could be interesting to have such a way of correcting the PSF.
-For example, if there are many faint stars that can be modeled at the same 
time because their flux don't affect each other.
+For example, if there are many faint stars that can be modeled at the same time because their fluxes do not affect each other.
In such a situation, the task could be easily parallelized without having to wait to model the brighter stars before the fainter ones.
 At the end, once all stars have been modeled, a simple Arithmetic command 
could be used to sum the different modeled-PSF stamps to obtain the entire 
scattered light field.
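
As a hypothetical sketch of that final step (assuming three modeled-PSF stamps that each cover the full image grid):

@example
## Sum the individually modeled PSF stamps into a single
## scattered light field image.
$ astarithmetic model-1.fits model-2.fits model-3.fits 3 sum \
                -g1 --output=scattered-field.fits
@end example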
 
 In general you see that the subtraction has been done nicely and almost all 
the extended wings of the PSF have been subtracted.
-The central regions of the stars aren't perfectly subtracted:
+The central regions of the stars are not perfectly subtracted:
 
 @itemize
 @item
@@ -6945,7 +6945,7 @@ Others may have a strong gradient: one side is too 
positive and one side is too
This is due to inaccurate positioning: most probably this happens because of imperfect astrometry.
 @end itemize
 
-Note also that during this process we assumed that the PSF doesn't vary with 
the CCD position or any other parameter.
+Note also that during this process we assumed that the PSF does not vary with 
the CCD position or any other parameter.
 In other words, we are obtaining an averaged PSF model from a few star stamps 
that are naturally different, and this also explains the residuals on each 
subtracted star.
 
We leave the modeling and subtraction of other stars (for example, the non-saturated stars of the image) as an interesting exercise.
@@ -6972,11 +6972,11 @@ Each image or dataset will have its own particularities 
that you will have to ta
 It is the year 953 A.D. and Abd al-rahman Sufi (903 -- 986 A.D.)@footnote{In 
Latin Sufi is known as Azophi.
 He was an Iranian astronomer.
 His manuscript ``Book of fixed stars'' contains the first recorded 
observations of the Andromeda galaxy, the Large Magellanic Cloud and seven 
other non-stellar or `nebulous' objects.}  is in Shiraz as a guest astronomer.
-He had come there to use the advanced 123 centimeter astrolabe for his studies 
on the Ecliptic.
+He had come there to use the advanced 123 centimeter astrolabe for his studies 
on the ecliptic.
 However, something was bothering him for a long time.
While mapping the constellations, there were several non-stellar objects that he had detected in the sky; one of them was in the Andromeda constellation.
During a trip to Yemen, Sufi had seen another such object in the southern skies, looking over the Indian ocean.
-He wasn't sure if such cloud-like non-stellar objects (which he was the first 
to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or 
if they were only the result of some bias in his observations.
+He was not sure if such cloud-like non-stellar objects (which he was the first 
to call `Sah@={a}bi' in Arabic or `nebulous') were real astronomical objects or 
if they were only the result of some bias in his observations.
 Could such diffuse objects actually be detected at all with his detection 
technique?
 
 @cindex Almagest
@@ -6984,7 +6984,7 @@ Could such diffuse objects actually be detected at all 
with his detection techni
 @cindex Ptolemy, Claudius
 He still had a few hours left until nightfall (when he would continue his 
studies on the ecliptic) so he decided to find an answer to this question.
He had thoroughly studied Claudius Ptolemy's (90 -- 168 A.D.) Almagest and had made many corrections to it, in particular in measuring the brightness.
-Using his same experience, he was able to measure a magnitude for the objects 
and wanted to simulate his observation to see if a simulated object with the 
same brightness and size could be detected in a simulated noise with the same 
detection technique.
+Using his same experience, he was able to measure a magnitude for the objects 
and wanted to simulate his observation to see if a simulated object with the 
same brightness and size could be detected in simulated noise with the same 
detection technique.
The general outline of the steps he wants to take is:
 
 @enumerate
@@ -7016,8 +7016,8 @@ After all, all observations have noise associated with 
them.
 
 Fortunately Sufi had heard of GNU Astronomy Utilities from a colleague in 
Isfahan (where he worked) and had installed it on his computer a year before.
 It had tools to do all the steps above.
-He had used MakeProfiles before, but wasn't sure which columns he had chosen 
in his user or system wide configuration files for which parameters, see 
@ref{Configuration files}.
-So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option 
to make sure what columns in a catalog MakeProfiles currently recognizes and 
the output image parameters.
+He had used MakeProfiles before, but was not sure which columns he had chosen in his user or system-wide configuration files for which parameters (see @ref{Configuration files}).
+So to start his simulation, Sufi runs MakeProfiles with the @option{-P} option to check which columns in a catalog MakeProfiles currently recognizes, and to confirm the output image parameters.
 In particular, Sufi is interested in the recognized columns (shown below).
 
 @example
@@ -7059,7 +7059,7 @@ For his initial check he decides to simulate the nebula 
in the Andromeda constel
The night he was observing, the PSF had a FWHM of about 5 pixels, so as the first row (profile) in the table below, he defines the PSF parameters.
Sufi sets the radius column (@code{rcol} above, fifth column) to @code{5.000}; he also chooses a Moffat function for its functional form.
 Remembering how diffuse the nebula in the Andromeda constellation was, he 
decides to simulate it with a mock S@'{e}rsic index 1.0 profile.
-He wants the output to be 499 pixels by 499 pixels, so he can put the center 
of the mock profile in the central pixel of the image which is the 250th pixel 
along both dimensions (note that an even number doesn't have a ``central'' 
pixel).
+He wants the output to be 499 pixels by 499 pixels, so he can put the center 
of the mock profile in the central pixel of the image which is the 250th pixel 
along both dimensions (note that an even number does not have a ``central'' 
pixel).
 
 Looking at his drawings of it, he decides a reasonable effective radius for it 
would be 40 pixels on this image pixel scale (second row, 5th column below).
He also sets the axis ratio (0.4) and position angle (-25 degrees) to approximately correct values too, and finally he sets the total magnitude of the profile to 3.44, which he had measured.
@@ -7089,7 +7089,7 @@ So you should never use naked data (or data without any 
extra information).
 Data (or information) that describes other data is called ``metadata''!
One common example is column names (the name of a column is itself a data element, but one that describes the lower-level data within that column: how to interpret the numbers within it).
 Sufi explains to his student that Gnuastro has a convention for adding 
metadata within a plain-text file; and guides him to @ref{Gnuastro text table 
format}.
-Because we don't want metadata to be confused with the actual data, in a 
plain-text file, we start lines containing metadata with a `@code{#}'.
+Because we do not want metadata to be confused with the actual data, in a 
plain-text file, we start lines containing metadata with a `@code{#}'.
 For example, see the same data above, but this time with metadata for every 
column:
 
 @example
@@ -7112,7 +7112,7 @@ For example, see the same data above, but this time with 
metadata for every colu
 @end example
 
 @noindent
-The numbers now make much more sense to for the student!
+The numbers now make much more sense for the student!
 Before continuing, Sufi reminded the student that even though metadata may not 
be strictly/technically necessary (for the computer programs), metadata are 
critical for human readers!
 Therefore, a good scientist should never forget to keep metadata with any data 
that they create, use or archive.
 
@@ -7130,7 +7130,7 @@ $ pwd
 @cindex Standard output
 It is possible to use a plain-text editor to manually put the catalog contents 
above into a plain-text file.
 But to easily automate catalog production (in later trials), Sufi decides to 
fill the input catalog with the redirection features of the command-line (or 
shell).
-Sufi's student wasn't familiar with this feature of the shell!
+Sufi's student was not familiar with this feature of the shell!
So Sufi decided to do a fast demo, giving the following explanations while running the commands:
 
 Shell redirection allows you to ``re-direct'' the ``standard output'' of a 
program (which is usually printed by the program on the command-line during its 
execution; like the output of @command{pwd} above) into a file.
@@ -7149,7 +7149,7 @@ To redirect this sentence into a file (instead of simply 
printing it on the stan
 $ echo "This is a test." > test.txt
 @end example
 
-This time, the @command{echo} command didn't print anything in the terminal.
+This time, the @command{echo} command did not print anything in the terminal.
 Instead, the shell (command-line environment) took the output, and 
``re-directed'' it into a file called @file{test.txt}.
 Let's confirm this with the @command{ls} command (@command{ls} is short for 
``list'' and will list all the files in the current directory):
 
@@ -7167,7 +7167,7 @@ This is a test.
 @end example
 
 @noindent
-Now that we have written our first line in @file{test.txt}, let's try adding a 
second line (don't forget that our final catalog of objects to simulate will 
have multiple lines):
+Now that we have written our first line in @file{test.txt}, let's try adding a 
second line (do not forget that our final catalog of objects to simulate will 
have multiple lines):
 
 @example
 $ echo "This is my second line." > test.txt
@@ -7179,8 +7179,8 @@ As you see, the first line that you put in the file is no 
longer present!
 This happens because `@code{>}' always starts dumping content to a file from 
the start of the file.
 In effect, this means that any possibly pre-existing content is over-written 
by the new content!
 To append new lines (or dumping new content at the end of existing content), 
you can use `@code{>>}'.
-For example with the commands below, first we'll write the first sentence 
(using `@code{>}'), then use `@code{>>}' to add the second and third sentences.
-Finally, we'll print the contents of @file{test.txt} to confirm that all three 
lines are preserved.
+For example with the commands below, first we will write the first sentence 
(using `@code{>}'), then use `@code{>>}' to add the second and third sentences.
+Finally, we will print the contents of @file{test.txt} to confirm that all 
three lines are preserved.
 
 @example
 $ echo "My first sentence."   > test.txt
@@ -7231,7 +7231,7 @@ To make sure if the catalog's content is correct (and 
there was no typo for exam
 
 @cindex Zero point
 Now that the catalog is created, Sufi is ready to call MakeProfiles to build 
the image containing these objects.
-He looks into his records and finds that the zero point magnitude for that 
night and that detector 18.
+He looks into his records and finds that the zero point magnitude for that night, and that particular detector, was 18.
The student was a little confused about the concept of the zero point, so Sufi pointed him to @ref{Brightness flux magnitude}, which the student can study in detail later.
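As a quick worked example of the concept: the zero point @mymath{Z} relates the counts @mymath{C} of an object to its magnitude @mymath{m} through @mymath{m=-2.5\log_{10}(C)+Z}; so with @mymath{Z=18}, an object with one count has a magnitude of 18, and one with 100 counts has a magnitude of 13.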
 Sufi therefore runs MakeProfiles with the command below:
 
@@ -7296,13 +7296,13 @@ Recall that when you oversample the model (for example 
by 5 times), for every de
 Sufi then explained that after convolving (next step below) we will 
down-sample the image to get our originally desired size/resolution.
 
After seeing the image, the student complained that only the large elliptical model for the Andromeda nebula could be seen in the center.
-He couldn't see the four stars that we had also requested in the catalog.
-So Sufi had to explain that the stars are there in the image, but the reason 
that they aren't visible when looking at the whole image at once, is that they 
only cover a single pixel!
+He could not see the four stars that we had also requested in the catalog.
+So Sufi had to explain that the stars are there in the image, but the reason 
that they are not visible when looking at the whole image at once, is that they 
only cover a single pixel!
 To prove it, he centered the image around the coordinates 2308 and 2308, where 
one of the stars is located in the over-sampled image [you can do this in 
@command{ds9} by selecting ``Pan'' in the ``Edit'' menu, then clicking around 
that position].
 Sufi then zoomed in to that region and soon, the star's non-zero pixel could 
be clearly seen.
 
 Sufi explained that the stars will take the shape of the PSF (cover an area of 
more than one pixel) after convolution.
-If we didn't have an atmosphere and we didn't need an aperture, then stars 
would only cover a single pixel with normal CCD resolutions.
+If we did not have an atmosphere and we did not need an aperture, then stars 
would only cover a single pixel with normal CCD resolutions.
 So Sufi convolved the image with this command:
 
 @example
@@ -7407,9 +7407,9 @@ $ astscript-fits-view out.fits
 
 @noindent
 The @file{out.fits} file now contains the noised image of the mock catalog 
Sufi had asked for.
-The student hadn't observed the nebula in the sky, so when he saw the mock 
image in SAO DS9 (with the second command above), he understood why Sufi was 
dubious: it was very diffuse!
+The student had not observed the nebula in the sky, so when he saw the mock 
image in SAO DS9 (with the second command above), he understood why Sufi was 
dubious: it was very diffuse!
 
-Seeing how the @option{--output} option allows the user to specify the name of 
the output file, the student was confused and wanted to know why Sufi hadn't 
used it more regularly before?
+Seeing how the @option{--output} option allows the user to specify the name of the output file, the student was confused and wanted to know why Sufi had not used it more regularly before.
 Sufi explained that for intermediate steps, you can rely on the automatic 
output of the programs (see @ref{Automatic output}).
 Doing so will give all the intermediate files a similar basic name structure, 
so in the end you can simply remove them all with the Shell's capabilities, and 
it will be familiar for other users.
 So Sufi decided to show this to the student by making a shell script from the 
commands he had used before.
@@ -7464,7 +7464,7 @@ He used this chance to remind the student of the 
importance of comments in code
 Just like metadata in a dataset, when writing the code, you have a good mental 
picture of what you are doing, so writing comments might seem superfluous and 
excessive.
 However, in one month when you want to re-use the script, you have lost that 
mental picture and remembering it can be time-consuming and frustrating.
 The importance of comments is further amplified when you want to share the 
script with a friend/colleague.
-So it is good to accompany any step of a script, or code, with useful comments 
while you are writing it (create a good mental picture of why you are doing 
something: don't just describe the command, but its purpose).
+So it is good to accompany any step of a script, or code, with useful comments 
while you are writing it (create a good mental picture of why you are doing 
something: do not just describe the command, but its purpose).
 
 @cindex Gedit
 @cindex GNU Emacs
@@ -7489,10 +7489,10 @@ $ ./mymock.sh
After the script finished, the only file remaining was the @file{out.fits} file that Sufi had wanted in the beginning.
 Sufi then explained to the student how he could run this script anywhere that 
he has a catalog if the script is in the same directory.
The only thing the student had to modify in the script was the name of the catalog (the value of the @code{base} variable at the start of the script) and the value of the @code{edge} variable if he changed the PSF size.
-The student was also happy to hear that he won't need to make it executable 
again when he makes changes later, it will remain executable unless he 
explicitly changes the executable flag with @command{chmod}.
+The student was also happy to hear that he will not need to make it executable again when he makes changes later; it will remain executable unless he explicitly changes the executable flag with @command{chmod}.
 
The student was really excited, since now, through simple shell scripting, he could really speed up his work and run any command in any fashion he likes, allowing him to be much more creative in his work.
-Until now he was using the graphical user interface which doesn't have such a 
facility and doing repetitive things on it was really frustrating and some 
times he would make mistakes.
+Until now, he was using the graphical user interface, which does not have such a facility; doing repetitive things on it was really frustrating and sometimes he would make mistakes.
 So he left to go and try scripting on his own computer.
He later reminded Sufi that the second tutorial in the Gnuastro book has more complex commands in data analysis, and a more advanced introduction to scripting (see @ref{General program usage tutorial}).
 
@@ -7540,7 +7540,7 @@ The latest released version of Gnuastro source code is 
always available at the f
 
 @noindent
 @ref{Quick start} describes the commands necessary to configure, build, and 
install Gnuastro on your system.
-This chapter will be useful in cases where the simple procedure above is not 
sufficient, for example your system lacks a mandatory/optional dependency (in 
other words, you can't pass the @command{$ ./configure} step), or you want 
greater customization, or you want to build and install Gnuastro from other 
random points in its history, or you want a higher level of control on the 
installation.
+This chapter will be useful in cases where the simple procedure above is not 
sufficient, for example your system lacks a mandatory/optional dependency (in 
other words, you cannot pass the @command{$ ./configure} step), or you want 
greater customization, or you want to build and install Gnuastro from other 
random points in its history, or you want a higher level of control on the 
installation.
 Thus if you were happy with downloading the tarball and following @ref{Quick 
start}, then you can safely ignore this chapter and come back to it in the 
future if you need more customization.
 
 @ref{Dependencies} describes the mandatory, optional and bootstrapping 
dependencies of Gnuastro.
@@ -7576,7 +7576,7 @@ If you are installing from a tarball as explained in 
@ref{Quick start}, you can
If you are cloning the version controlled source (see @ref{Version controlled source}), an additional bootstrapping step is required before configuration; its dependencies are explained in @ref{Bootstrapping dependencies}.
 
 Your operating system's package manager is an easy and convenient way to 
download and install the dependencies that are already pre-built for your 
operating system.
-In @ref{Dependencies from package managers}, we'll list some common operating 
system package manager commands to install the optional and mandatory 
dependencies.
+In @ref{Dependencies from package managers}, we will list some common 
operating system package manager commands to install the optional and mandatory 
dependencies.
 
 @menu
 * Mandatory dependencies::      Gnuastro will not install without these.
@@ -7591,7 +7591,7 @@ In @ref{Dependencies from package managers}, we'll list 
some common operating sy
 @cindex Dependencies, Gnuastro
 @cindex GNU build system
 The mandatory Gnuastro dependencies are very basic and low-level tools.
-They all follow the same basic GNU based build system (like that shown in 
@ref{Quick start}), so even if you don't have them, installing them should be 
pretty straightforward.
+They all follow the same basic GNU based build system (like that shown in 
@ref{Quick start}), so even if you do not have them, installing them should be 
pretty straightforward.
 In this section we explain each program and any specific note that might be 
necessary in the installation.
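
As a reminder, their shared build recipe is the standard GNU one below (the @command{make check} step is optional; package-specific options are discussed under each dependency):

@example
$ ./configure
$ make
$ make check
$ sudo make install
@end example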
 
 
@@ -7634,10 +7634,10 @@ One problem that might occur is that CFITSIO might not 
be configured with the @o
This option allows CFITSIO to open a file in multiple threads; it can thus provide great speed improvements.
 If CFITSIO was not configured with this option, any program which needs this 
capability will warn you and abort when you ask for multiple threads (see 
@ref{Multi-threaded operations}).
 
-To install CFITSIO from source, we strongly recommend that you have a look 
through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and 
understand the options you can pass to @command{$ ./configure} (they aren't too 
much).
+To install CFITSIO from source, we strongly recommend that you have a look through Chapter 2 (Creating the CFITSIO library) of the CFITSIO manual and understand the options you can pass to @command{$ ./configure} (there are not too many).
 This is a very basic package for most astronomical software and it is best 
that you configure it nicely with your system.
 Once you download the source and unpack it, the following configure script 
should be enough for most purposes.
-Don't forget to read chapter two of the manual though, for example the second 
option is only for 64bit systems.
+Do not forget to read chapter two of the manual though; for example, the second option is only for 64-bit systems.
 The manual also explains how to check if it has been installed correctly.
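
A hedged sketch of such a configure call is below (the options shown are only illustrative; confirm them in Chapter 2 of the CFITSIO manual):

@example
$ ./configure --prefix=/usr/local --enable-sse2 --enable-reentrant
@end example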
 
 CFITSIO comes with two executable files called @command{fpack} and 
@command{funpack}.
@@ -7663,9 +7663,9 @@ $ rm cookbook fitscopy imcopy smem speed testprog
 $ sudo make install
 @end example
 
-In the @code{./testprog > testprog.lis} step, you may confront an error, 
complaining that it can't find @file{libcfitsio.so.AAA} (where @code{AAA} is an 
integer).
-This is the library that you just built and haven't yet installed.
-But unfortunately some versions of CFITSIO don't account for this on some OSs.
+In the @code{./testprog > testprog.lis} step, you may confront an error, 
complaining that it cannot find @file{libcfitsio.so.AAA} (where @code{AAA} is 
an integer).
+This is the library that you just built and have not yet installed.
+But unfortunately some versions of CFITSIO do not account for this on some OSs.
To fix the problem, you need to tell your OS to also look into the current CFITSIO build directory with the first command below; afterwards, the problematic command (second below) should run properly.
 
 @example
@@ -7674,7 +7674,7 @@ $ ./testprog > testprog.lis
 @end example
 
 Recall that the modification above is ONLY NECESSARY FOR THIS STEP.
-@emph{Don't} put the @code{LD_LIBRARY_PATH} modification command in a 
permanent place (like your bash startup file).
+@emph{Do not} put the @code{LD_LIBRARY_PATH} modification command in a permanent place (like your bash startup file).
 After installing CFITSIO, close your terminal and continue working on a new 
terminal (so @code{LD_LIBRARY_PATH} has its default value).
 For more on @code{LD_LIBRARY_PATH}, see @ref{Installation directory}.
 
@@ -7702,7 +7702,7 @@ Hence, if you will not be using those plotting functions 
in WCSLIB, you can conf
 
 If you have the cURL library @footnote{@url{https://curl.haxx.se}} on your 
system and you installed CFITSIO version 3.42 or later, you will need to also 
link with the cURL library at configure time (through the @code{-lcurl} option 
as shown below).
 CFITSIO uses the cURL library for its HTTPS (or HTTP 
Secure@footnote{@url{https://en.wikipedia.org/wiki/HTTPS}}) support and if it 
is present on your system, CFITSIO will depend on it.
-Therefore, if @command{./configure} command below fails (you don't have the 
cURL library), then remove this option and rerun it.
+Therefore, if the @command{./configure} command below fails (you do not have the cURL library), then remove this option and rerun it.
 
Let's assume you have downloaded @url{ftp://ftp.atnf.csiro.au/pub/software/wcslib/wcslib.tar.bz2, @file{wcslib.tar.bz2}} and are in the same directory; to configure, build, check and install WCSLIB, follow the steps below.
 @example
@@ -7729,9 +7729,9 @@ Since these are pretty low-level tools, they are not too 
hard to install from so
 For more, see @ref{Dependencies from package managers}.
 
 @cindex GPL Ghostscript
-If the @command{./configure} script can't find any of these optional 
dependencies, it will notify you of the operation(s) you can't do due to not 
having them.
+If the @command{./configure} script cannot find any of these optional 
dependencies, it will notify you of the operation(s) you cannot do due to not 
having them.
 If you continue the build and request an operation that uses a missing 
library, Gnuastro's programs will warn that the optional library was missing at 
build-time and abort.
-Since Gnuastro was built without that library, installing the library 
afterwards won't help.
+Since Gnuastro was built without that library, installing the library 
afterwards will not help.
 The only way is to re-build Gnuastro from scratch (after the library has been 
installed).
 However, for program dependencies (like cURL or GhostScript) things are 
easier: you can install them after building Gnuastro also.
 This is because libraries are used to build the internal structure of 
Gnuastro's executables.
@@ -7744,17 +7744,17 @@ So if a dependency program becomes available later, it 
will be used next time it
 @cindex GNU Libtool
Libtool is a program to simplify the management of the libraries used to build an executable (a program).
 GNU Libtool has some added functionality compared to other implementations.
-If GNU Libtool isn't present on your system at configuration time, a warning 
will be printed and @ref{BuildProgram} won't be built or installed.
+If GNU Libtool is not present on your system at configuration time, a warning 
will be printed and @ref{BuildProgram} will not be built or installed.
 The configure script will look into your search path (@code{PATH}) for GNU 
Libtool through the following executable names: @command{libtool} (acceptable 
only if it is the GNU implementation) or @command{glibtool}.
 See @ref{Installation directory} for more on @code{PATH}.
 
 GNU Libtool (the binary/executable file) is a low-level program that is 
probably already present on your system, and if not, is available in your 
operating system package manager@footnote{Note that we want the 
binary/executable Libtool program which can be run on the command-line.
-In Debian-based operating systems which separate various parts of a package, 
you want want @code{libtool-bin}, the @code{libtool} package won't contain the 
executable program.}.
+In Debian-based operating systems which separate various parts of a package, you want @code{libtool-bin}; the @code{libtool} package will not contain the executable program.}.
 If you want to install GNU Libtool's latest version from source, please visit 
its @url{https://www.gnu.org/software/libtool/, web page}.
 
 Gnuastro's tarball is shipped with an internal implementation of GNU Libtool.
 Even if you have GNU Libtool, Gnuastro's internal implementation is used for 
the building and installation of Gnuastro.
-As a result, you can still build, install and use Gnuastro even if you don't 
have GNU Libtool installed on your system.
+As a result, you can still build, install and use Gnuastro even if you do not 
have GNU Libtool installed on your system.
 However, this internal Libtool does not get installed.
 Therefore, after Gnuastro's installation, if you want to use 
@ref{BuildProgram} to compile and link your own C source code which uses the 
@ref{Gnuastro library}, you need to have GNU Libtool available on your system 
(independent of Gnuastro).
 See @ref{Review of library fundamentals} to learn more about libraries.
@@ -7788,12 +7788,12 @@ It uses Single instruction, multiple data (SIMD) 
instructions for ARM based syst
 @cindex TIFF format
 libtiff is used by ConvertType and the libraries to read TIFF images, see 
@ref{Recognized file formats}.
 @url{http://www.simplesystems.org/libtiff/, libtiff} is a very basic library 
that provides tools to read and write TIFF images, most Unix-like operating 
system graphic programs and libraries use it.
-Therefore even if you don't have it installed, it must be easily available in 
your package manager.
+Therefore, even if you do not have it installed, it should be easily available in your package manager.
 
 @item cURL
 @cindex cURL (downloading tool)
 cURL's executable (@command{curl}) is called by @ref{Query} for submitting 
queries to remote datasets and retrieving the results.
-It isn't necessary for the build of Gnuastro from source (only a warning will 
be printed if it can't be found at configure time), so if you don't have it at 
build-time there is no problem.
+It is not necessary for the build of Gnuastro from source (only a warning will 
be printed if it cannot be found at configure time), so if you do not have it 
at build-time there is no problem.
Just be sure to have it when you run @command{astquery}; otherwise you will get an error about not finding @command{curl}.
 
 @item GPL Ghostscript
@@ -7836,14 +7836,14 @@ For the GNU Portability Library, GNU Autoconf Archive 
and @TeX{} Live, it is rec
 @cindex GNU C library
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
-To ensure portability for a wider range of operating systems (those that don't 
include GNU C library, namely glibc), Gnuastro depends on the GNU portability 
library, or Gnulib.
+To ensure portability for a wider range of operating systems (those that do 
not include GNU C library, namely glibc), Gnuastro depends on the GNU 
portability library, or Gnulib.
 Gnulib keeps a copy of all the functions in glibc, implemented (as much as 
possible) to be portable to other operating systems.
 The @file{bootstrap} script can automatically clone Gnulib (as a 
@file{gnulib/} directory inside Gnuastro), however, as described in 
@ref{Bootstrapping} this is not recommended.
 
 The recommended way to bootstrap Gnuastro is to first clone Gnulib and the 
Autoconf archives (see below) into a local directory outside of Gnuastro.
-Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or 
Autoconf archives, @file{DEVDIR} can be a directory that you don't backup.
-In this way the large number of files in these projects won't slow down your 
backup process or take bandwidth (if you backup to a remote server).}  (which 
you can set to any directory).
-Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in 
the same top directory@footnote{If you already have the Autoconf archives in a 
separate directory, or can't clone it in the same directory as Gnulib, or you 
have it with another directory name (not @file{autoconf-archive/}), you can 
follow this short step.
+Let's call it @file{DEVDIR}@footnote{If you are not a developer in Gnulib or 
Autoconf archives, @file{DEVDIR} can be a directory that you do not backup.
+In this way the large number of files in these projects will not slow down 
your backup process or take bandwidth (if you backup to a remote server).}  
(which you can set to any directory).
+Currently in Gnuastro, both Gnulib and Autoconf archives have to be cloned in 
the same top directory@footnote{If you already have the Autoconf archives in a 
separate directory, or cannot clone it in the same directory as Gnulib, or you 
have it with another directory name (not @file{autoconf-archive/}), you can 
follow this short step.
 Set @file{AUTOCONFARCHIVES} to your desired address.
 Then define a symbolic link in @file{DEVDIR} with the following command so 
Gnuastro's bootstrap script can find it:@*@command{$ ln -s $AUTOCONFARCHIVES 
$DEVDIR/autoconf-archive}.} like the case here@footnote{If your internet 
connection is active, but Git complains about the network, it might be due to 
your network setup not recognizing the git protocol.
 In that case use the following URL for the HTTP protocol instead (for Autoconf 
archives, replace the name): @command{http://git.sv.gnu.org/r/gnulib.git}}:
@@ -7888,7 +7888,7 @@ These are a large collection of tests that can be called 
to run at @command{./co
 See the explanation under GNU Portability Library (Gnulib) above for 
instructions on obtaining it and keeping it up to date.
 
GNU Autoconf Archive is a source-based dependency of Gnuastro's bootstrapping process, so simply having it on your computer is enough; there is no need to install (and thus check) anything.
-Just don't forget that it has to be in the same directory as Gnulib (described 
above).
+Just do not forget that it has to be in the same directory as Gnulib 
(described above).
 
 @item GNU Texinfo (@command{texinfo})
 @cindex GNU Texinfo
@@ -7937,10 +7937,10 @@ It is thus preferred to the @TeX{} Live versions 
distributed by your operating s
 
 To install @TeX{} Live, go to the web page and download the appropriate 
installer by following the ``download'' link.
Note that by default the full package repository will be downloaded and installed (around 4 gigabytes), which can take @emph{very} long to download and to update later.
-However, most packages are not needed by everyone, it is easier, faster and 
better to install only the ``Basic scheme'' (consisting of only the most basic 
@TeX{} and @LaTeX{} packages, which is less than 200 Mega bytes)@footnote{You 
can also download the DVD iso file at a later time to keep as a backup for when 
you don't have internet connection if you need a package.}.
+However, most packages are not needed by everyone; it is easier, faster and 
better to install only the ``Basic scheme'' (consisting of only the most basic 
@TeX{} and @LaTeX{} packages, which is less than 200 megabytes)@footnote{You 
can also download the DVD iso file at a later time to keep as a backup for when 
you do not have an internet connection and need a package.}.
 
 After the installation, be sure to set the environment variables as suggested 
at the end of the installer's output.
-Any time you confront (need) a package you don't have, simply install it with 
a command like below (similar to how you install software from your operating 
system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might 
get a warning complaining about a @file{missingfile}.
+Any time you confront (need) a package you do not have, simply install it with 
a command like below (similar to how you install software from your operating 
system's package manager)@footnote{After running @TeX{}, or @LaTeX{}, you might 
get a warning complaining about a @file{missingfile}.
 Run `@command{tlmgr info missingfile}' to see the package(s) containing that 
file which you can install.}.
 To install all the necessary @TeX{} packages for a successful Gnuastro 
bootstrap, run this command:
 
@@ -8013,7 +8013,7 @@ You will notice that Gnuastro's version in some operating 
systems is more than 1
 It is the same for all the dependencies of Gnuastro.
 
 @item
-For each package, Gnuastro might preform better (or require) certain 
configuration options that your distribution's package managers didn't add for 
you.
+For each package, Gnuastro might perform better with (or require) certain 
configuration options that your distribution's package managers did not add for 
you.
 If present, these configuration options are explained during the installation 
of each in the sections below (for example in @ref{CFITSIO}).
 When the proper configuration has not been set, the programs should complain 
and inform you.
 
@@ -8026,7 +8026,7 @@ By reading their manuals, installing them and staying up 
to date with changes/bu
 @end enumerate
 
 Based on your package manager, you can use any of the following commands to 
install the mandatory and optional dependencies.
-If your package manager isn't included in the list below, please send us the 
respective command, so we add it.
+If your package manager is not included in the list below, please send us the 
respective command, so that we can add it.
 For better archivability and compression ratios, Gnuastro's recommended 
tarball compression format is that of the @url{http://lzip.nongnu.org/lzip.html, 
Lzip} program, see @ref{Release tarball}.
 Therefore, the package manager commands below also contain Lzip.
 
@@ -8048,7 +8048,7 @@ All of them use Debian's Advanced Packaging Tool (APT, 
for example @command{apt-
 
 @table @asis
 @item Mandatory dependencies
-Without these, Gnuastro can't be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
+Without these, Gnuastro cannot be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
 @example
 $ sudo apt-get install libgsl-dev libcfitsio-dev \
                        wcslib-dev
@@ -8087,11 +8087,11 @@ Just make sure it is the most recent version.
 RHEL requires paid subscriptions for use of its binaries and support.
 But since it is free software, many other teams use its code to spin off their 
own distributions based on RHEL.
 Red Hat-based GNU/Linux distributions initially used the ``Yellowdog Updated, 
Modifier'' (YUM) package manager, which has been replaced by ``Dandified yum'' 
(DNF).
-If the latter isn't available on your system, you can use @command{yum} 
instead of @command{dnf} in the command below.
+If the latter is not available on your system, you can use @command{yum} 
instead of @command{dnf} in the command below.
 
 @table @asis
 @item Mandatory dependencies
-Without these, Gnuastro can't be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
+Without these, Gnuastro cannot be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
 @example
 $ sudo dnf install gsl-devel cfitsio-devel \
                    wcslib-devel
@@ -8126,7 +8126,7 @@ If not already installed, first obtain Homebrew by 
following the instructions at
 
 @table @asis
 @item Mandatory dependencies
-Without these, Gnuastro can't be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
+Without these, Gnuastro cannot be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
 
 Homebrew manages packages in different `taps'.
 To install WCSLIB via Homebrew you will need to @command{tap} into 
@command{brewsci/science} first (the tap may change in the future, but can be 
found by calling @command{brew search wcslib}).
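
As a minimal sketch of these two steps (assuming the tap name is still 
@command{brewsci/science}):

@example
$ brew tap brewsci/science
$ brew install wcslib gsl cfitsio
@end example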
@@ -8159,7 +8159,7 @@ Arch GNU/Linux uses ``Package manager'' (Pacman) to 
manage its packages/componen
 
 @table @asis
 @item Mandatory dependencies
-Without these, Gnuastro can't be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
+Without these, Gnuastro cannot be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
 @example
 $ sudo pacman -S gsl cfitsio wcslib
 @end example
@@ -8175,7 +8175,7 @@ $ sudo pacman -S ghostscript libtool libjpeg \
 These are not used in Gnuastro's build.
 They can just help in viewing the inputs/outputs independent of Gnuastro!
 
-SAO DS9 and TOPCAT aren't available in the standard Arch GNU/Linux 
repositories.
+SAO DS9 and TOPCAT are not available in the standard Arch GNU/Linux 
repositories.
 However, installing and using both is very easy from their own web pages, as 
described in @ref{SAO DS9} and @ref{TOPCAT}.
 @end table
 
@@ -8196,7 +8196,7 @@ $ ./configure CPPFLAGS="-I/usr/include/cfitsio"
 @end example
 
 @item Mandatory dependencies
-Without these, Gnuastro can't be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
+Without these, Gnuastro cannot be built, they are necessary for input/output 
and low-level mathematics (see @ref{Mandatory dependencies})!
 @example
 $ sudo zypper install gsl-devel cfitsio-devel \
                       wcslib-devel
@@ -8230,7 +8230,7 @@ The most common of such problems and their solution are 
discussed below.
 
 @cartouche
 @noindent
-@strong{Not finding library during configuration:} If a library is installed, 
but during Gnuastro's @command{configure} step the library isn't found, then 
configure Gnuastro like the command below (correcting @file{/path/to/lib}).
+@strong{Not finding library during configuration:} If a library is installed, 
but during Gnuastro's @command{configure} step the library is not found, then 
configure Gnuastro like the command below (correcting @file{/path/to/lib}).
 For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure LDFLAGS="-L/path/to/lib"
@@ -8239,7 +8239,7 @@ $ ./configure LDFLAGS="-L/path/to/lib"
 
 @cartouche
 @noindent
-@strong{Not finding header (.h) files while building:} If a library is 
installed, but during Gnuastro's @command{make} step, the library's header 
(file with a @file{.h} suffix) isn't found, then configure Gnuastro like the 
command below (correcting @file{/path/to/include}).
+@strong{Not finding header (.h) files while building:} If a library is 
installed, but during Gnuastro's @command{make} step, the library's header 
(file with a @file{.h} suffix) is not found, then configure Gnuastro like the 
command below (correcting @file{/path/to/include}).
 For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ ./configure CPPFLAGS="-I/path/to/include"
@@ -8248,8 +8248,8 @@ $ ./configure CPPFLAGS="-I/path/to/include"
 
 @cartouche
 @noindent
-@strong{Gnuastro's programs don't run during check or after install:}
-If a library is installed, but the programs don't run due to linking problems, 
set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro is 
installed in @file{/path/to/installed}).
+@strong{Gnuastro's programs do not run during check or after install:}
+If a library is installed, but the programs do not run due to linking 
problems, set the @code{LD_LIBRARY_PATH} variable like below (assuming Gnuastro 
is installed in @file{/path/to/installed}).
 For more, see @ref{Known issues} and @ref{Installation directory}.
 @example
 $ export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/path/to/installed/lib"
@@ -8352,7 +8352,7 @@ The version controlled files in the top cloned directory 
are either mainly in ca
 The former are non-programming, standard writing for human readers containing 
high-level information about the whole package.
 The latter are instructions to customize the GNU build system for Gnuastro.
 For more on Gnuastro's source code structure, please see @ref{Developing}.
-We won't go any deeper here.
+We will not go any deeper here.
 
 The cloned Gnuastro source cannot immediately be configured, compiled, or 
installed since it only contains hand-written files, not automatically 
generated or imported files which do all the hard work of the build process.
 See @ref{Bootstrapping} for the process of generating and importing those 
files (it is not too hard!).
@@ -8410,7 +8410,7 @@ To do better, see the next paragraph.
 
 The recommended way to get these two packages is thoroughly discussed in 
@ref{Bootstrapping dependencies} (in short: clone them in the separate 
@file{DEVDIR/} directory).
 The following commands will take you into the cloned Gnuastro directory and 
run the @file{bootstrap} script, while telling it to copy some files instead 
of making symbolic links (with the @option{--copy} option; this is not 
mandatory@footnote{The @option{--copy} option is recommended because some 
backup systems might do strange things with symbolic links.}) and where to look 
for Gnulib (with the @option{--gnulib-srcdir} option).
-Please note that the address given to @option{--gnulib-srcdir} has to be an 
absolute address (so don't use @file{~} or @file{../} for example).
+Please note that the address given to @option{--gnulib-srcdir} has to be an 
absolute address (so do not use @file{~} or @file{../} for example).
 
 @example
 $ cd $TOPGNUASTRO/gnuastro
@@ -8423,7 +8423,7 @@ $ ./bootstrap --copy --gnulib-srcdir=$DEVDIR/gnulib
 @cindex GNU Automake
 @cindex GNU C library
 @cindex GNU build system
-Since Gnulib and Autoconf archives are now available in your local 
directories, you don't need an internet connection every time you decide to 
remove all untracked files and redo the bootstrap (see box below).
+Since Gnulib and Autoconf archives are now available in your local 
directories, you do not need an internet connection every time you decide to 
remove all untracked files and redo the bootstrap (see box below).
 You can also use the same command on any other project that uses Gnulib.
 All the necessary GNU C library functions, Autoconf macros and Automake inputs 
are now available along with the book figures.
 The standard GNU build system (@ref{Quick start}) will do the rest of the job.
@@ -8442,7 +8442,7 @@ git clean -fxd
 
 @noindent
 It is best to commit any recent change before running this command.
-You might have created new files since the last commit and if they haven't 
been committed, they will all be gone forever (using @command{rm}).
+You might have created new files since the last commit and if they have not 
been committed, they will all be gone forever (using @command{rm}).
 To get a list of the non-version controlled files instead of deleting them, 
add the @option{n} option to @command{git clean}, so it becomes @option{-fxdn}.
 @end cartouche
 
@@ -8450,7 +8450,7 @@ Besides the @file{bootstrap} and @file{bootstrap.conf}, 
the @file{bootstrapped/}
 The former hosts all the imported (bootstrapped) directories.
 Thus, in the version controlled source, it only contains a @file{README} file, 
but in the distributed tar-ball it also contains sub-directories filled with 
all bootstrapped files.
 @file{README-hacking} contains a summary of the bootstrapping process 
discussed in this section.
-It is a necessary reference when you haven't built this book yet.
+It is a necessary reference when you have not built this book yet.
 It is thus not distributed in the Gnuastro tarball.
 
 
@@ -8464,7 +8464,7 @@ Gnuastro has two mailing lists dedicated to its 
developing activities (see @ref{
 Subscribing to them can help you decide when to synchronize with the official 
repository.
 
 To pull all the most recent work in Gnuastro, run the following command from 
the top Gnuastro directory.
-If you don't already have a built system, ignore @command{make distclean}.
+If you do not already have a built system, ignore @command{make distclean}.
 The separate steps are described in detail afterwards.
 
 @example
@@ -8483,7 +8483,7 @@ $ autoreconf -f
 @cindex GNU Autoconf
 @cindex Mailing list: info-gnuastro
 @cindex @code{info-gnuastro@@gnu.org}
-If Gnuastro was already built in this directory, you don't want some outputs 
from the previous version being mixed with outputs from the newly pulled work.
+If Gnuastro was already built in this directory, you do not want some outputs 
from the previous version being mixed with outputs from the newly pulled work.
 Therefore, the first step is to clean/delete all the built files with 
@command{make distclean}.
 Fortunately the GNU build system allows the separation of source and built 
files (in separate directories).
 This is a great feature to keep your source directory clean and you can use it 
to avoid the cleaning step.
@@ -8501,7 +8501,7 @@ The version number is included in nearly all outputs of 
Gnuastro's programs, the
 As a summary, be sure to run `@command{autoreconf -f}' after every change in 
the Git history.
 This includes synchronization with the main server or even a commit you have 
made yourself.
 
-If you would like to see what has changed since you last synchronized your 
local clone, you can take the following steps instead of the simple command 
above (don't type anything after @code{#}):
+If you would like to see what has changed since you last synchronized your 
local clone, you can take the following steps instead of the simple command 
above (do not type anything after @code{#}):
 
 @example
 $ git checkout master             # Confirm if you are on master.
@@ -8537,7 +8537,7 @@ Alternatively, run @command{make distcheck} and upload 
the output @file{gnuastro
 @section Build and install
 
 This section is basically a longer explanation to the sequence of commands 
given in @ref{Quick start}.
-If you didn't have any problems during the @ref{Quick start} steps, you want 
to have all the programs of Gnuastro installed in your system, you don't want 
to change the executable names during or after installation, you have root 
access to install the programs in the default system wide directory, the Letter 
paper size of the print book is fine for you or as a summary you don't feel 
like going into the details when everything is working, you can safely skip 
this section.
+If you did not have any problems during the @ref{Quick start} steps; you want 
to have all the programs of Gnuastro installed on your system; you do not want 
to change the executable names during or after installation; you have root 
access to install the programs in the default system-wide directory; and the 
Letter paper size of the printed book is fine for you (or, in summary, you do 
not feel like going into the details when everything is working), then you can 
safely skip this section.
 
 If you have any of the above problems or you want to understand the details 
for a better control over your build and install, read along.
 The dependencies which you will need prior to configuring, building and 
installing Gnuastro are explained in @ref{Dependencies}.
@@ -8580,7 +8580,7 @@ $ ./configure --help
 @cindex GNU Autoconf
 A complete explanation is also included in the @file{INSTALL} file.
 Note that this file was written by the authors of GNU Autoconf (which builds 
the @file{configure} script), therefore it is common for all programs which use 
the @command{$ ./configure} script for building and installing, not just 
Gnuastro.
-Here we only discuss cases where you don't have super-user access to the 
system and if you want to change the executable names.
+Here we only discuss cases where you do not have super-user access to the 
system, or where you want to change the executable names.
 But before that, we review the options to configure that are particular to 
Gnuastro.
 
 @menu
@@ -8612,7 +8612,7 @@ By default, libraries are also built for static 
@emph{and} shared linking (see @
 However, when there are crashes or unexpected behavior, these three features 
can hinder the process of localizing the problem.
 This configuration option is identical to manually calling the configuration 
script with @code{CFLAGS="-g -O0" --disable-shared}.
 
-In the (rare) situations where you need to do your debugging on the shared 
libraries, don't use this option.
+In the (rare) situations where you need to do your debugging on the shared 
libraries, do not use this option.
 Instead run the configure script by explicitly setting @code{CFLAGS} like this:
 @example
 $ ./configure CFLAGS="-g -O0"
@@ -8635,7 +8635,7 @@ Only build and install @file{progname} along with any 
other program that is enab
 @file{progname} is the name of the executable without the @file{ast}, for 
example @file{crop} for Crop (with the executable name of @file{astcrop}).
 
 Note that by default all the programs will be installed.
-This option (and the @option{--disable-progname} options) are only relevant 
when you don't want to install all the programs.
+This option (and the @option{--disable-progname} options) are only relevant 
when you do not want to install all the programs.
 Therefore, if this option is called for any of the programs in Gnuastro, any 
program which is not explicitly enabled will not be built or installed.
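
For example, a hypothetical configuration that only builds and installs Crop 
and Table might look like this:

@example
## Only Crop and Table will be built and installed; all other
## programs are implicitly disabled.
$ ./configure --enable-crop --enable-table
@end example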
 
 @item --disable-progname
@@ -8654,7 +8654,7 @@ Listing the disabled programs is redundant.
 @cindex Gnulib: GNU Portability Library
 @cindex GNU Portability Library (Gnulib)
 Enable checks on the GNU Portability Library (Gnulib).
-Gnulib is used by Gnuastro to enable users of non-GNU based operating systems 
(that don't use GNU C library or glibc) to compile and use the advanced 
features that this library provides.
+Gnulib is used by Gnuastro to enable users of non-GNU based operating systems 
(that do not use GNU C library or glibc) to compile and use the advanced 
features that this library provides.
 We make extensive use of such functions.
 If you give this option to @command{$ ./configure}, when you run @command{$ 
make check}, first the functions in Gnulib will be tested, then the Gnuastro 
executables.
 If your operating system does not support glibc or has an older version of it 
and you have problems in the build process (@command{$ make}), you can give 
this flag to configure to see if the problem is caused by Gnulib not supporting 
your operating system or Gnuastro, see @ref{Known issues}.
@@ -8697,10 +8697,10 @@ libtiff is an optional dependency, with this option, 
Gnuastro will ignore any po
 The tests of some programs might depend on the outputs of the tests of other 
programs.
 For example MakeProfiles is one the first programs to be tested when you run 
@command{$ make check}.
 MakeProfiles' test outputs (FITS images) are inputs to many other programs 
(which in turn provide inputs for other programs).
-Therefore, if you don't install MakeProfiles for example, the tests for many 
the other programs will be skipped.
+Therefore, if you do not install MakeProfiles for example, the tests for many 
of the other programs will be skipped.
 To avoid this, in one run, you can build all the programs and run the tests, 
but not install anything.
 If everything is working correctly, you can run configure again with only the 
programs you want.
-However, don't run the tests and directly install after building.
+However, this time do not run the tests; directly install after building.
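
A sketch of this two-step workflow (the program chosen in the second step is 
just an example):

@example
## First run: build everything and run the tests, no install.
$ ./configure
$ make
$ make check

## Second run: enable only the program(s) you want, then build
## and install directly, without re-running the tests.
$ ./configure --enable-crop
$ make
$ make install
@end example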
 
 
 
@@ -8713,8 +8713,8 @@ However, don't run the tests and directly install after 
building.
 @cindex No access to super-user install
 @cindex Install with no super-user access
 One of the most commonly used options to @file{./configure} is 
@option{--prefix}, it is used to define the directory that will host all the 
installed files (or the ``prefix'' in their final absolute file name).
-For example, when you are using a server and you don't have administrator or 
root access.
-In this example scenario, if you don't use the @option{--prefix} option, you 
won't be able to install the built files and thus access them from anywhere 
without having to worry about where they are installed.
+For example, it is very useful when you are using a server and you do not have 
administrator or root access.
+In this example scenario, if you do not use the @option{--prefix} option, you 
will not be able to install the built files and thus access them from anywhere 
without having to worry about where they are installed.
 However, once you prepare your startup file to look into the proper place (as 
discussed thoroughly below), you will be able to easily use this option and 
benefit from any software you want to install without having to ask the system 
administrators or install and use a different version of a software that is 
already installed on the server.
 
 The most basic way to run an executable is to explicitly write its full file 
name (including all the directory information) and run it.
@@ -8734,7 +8734,7 @@ $ /usr/bin/ls
 To address this problem, we have the @file{PATH} environment variable.
 To understand it better, we will start with a short introduction to the shell 
variables.
 Shell variable values are basically treated as strings of characters.
-For example, it doesn't matter if the value is a name (string of 
@emph{alphabetic} characters), or a number (string of @emph{numeric} 
characters), or both.
+For example, it does not matter if the value is a name (string of 
@emph{alphabetic} characters), or a number (string of @emph{numeric} 
characters), or both.
 You can define a variable and a value for it by running
 @example
 $ myvariable1=a_test_value
@@ -8750,7 +8750,7 @@ $ echo $myvariable2
 @noindent
 @cindex Environment
 @cindex Environment variables
-If a variable has no value or it wasn't defined, the last command will only 
print an empty line.
+If a variable has no value or it was not defined, the last command will only 
print an empty line.
 A variable defined like this will be known as long as this shell or terminal 
is running.
 Other terminals will have no idea it existed.
 The main advantage of shell variables is that if they are exported@footnote{By 
running @command{$ export myvariable=a_test_value} instead of the simpler case 
in the text}, subsequent programs that are run within that shell can access 
their value.
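
A minimal sketch of this behavior (the variable name is hypothetical):

@example
$ myvariable=a_test_value     ## Only this shell knows it.
$ export myvariable           ## Child processes now see it too:
$ sh -c 'echo $myvariable'
a_test_value
@end example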
@@ -8786,7 +8786,7 @@ $ echo $PATH
 @end example
 
 By default @file{PATH} usually contains system-wide directories, which are 
readable (but not writable) by all users, like the above example.
-Therefore if you don't have root (or administrator) access, you need to add 
another directory to @file{PATH} which you actually have write access to.
+Therefore if you do not have root (or administrator) access, you need to add 
another directory to @file{PATH} which you actually have write access to.
 The standard directory where you can keep installed files (not just 
executables) for your own user is the @file{~/.local/} directory.
 The names of hidden files start with a `@key{.}' (dot), so they will not show 
up in your common command-line listings, or on the graphical user interface.
 You can use any other directory, but this is the most recognized.
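
For example, a sketch of installing Gnuastro under this directory and letting 
the shell find the result (assuming a Bash-like shell; startup files are 
discussed below):

@example
## Configure, build and install under ~/.local.
$ ./configure --prefix=$HOME/.local
$ make
$ make install

## Let the shell find the newly installed executables.
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc
@end example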
@@ -8865,7 +8865,7 @@ $ ./configure LDFLAGS=-L/home/name/.local/lib            \
 
 It can be annoying/buggy to do this when configuring every software that 
depends on such libraries.
 Hence, you can define these two variables in the most relevant startup file 
(discussed above).
-The convention on using these variables doesn't include a colon to separate 
values (as @command{PATH}-like variables do).
+The convention on using these variables does not include a colon to separate 
values (as @command{PATH}-like variables do).
 They use white space characters and each value is prefixed with a compiler 
option@footnote{These variables are ultimately used as options while building 
the programs.
 Therefore every value has to be an option name followed by a value, as 
discussed in @ref{Options}.}.
 Note the @option{-L} and @option{-I} above (see @ref{Options}), for 
@option{-I} see @ref{Headers}, and for @option{-L}, see @ref{Linking}.
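
As a sketch, in a Bash startup file these definitions could look like the 
following (the paths are only examples):

@example
## Each value is a compiler option followed by its value:
## -L for library directories and -I for header directories.
export LDFLAGS="-L/home/name/.local/lib"
export CPPFLAGS="-I/home/name/.local/include"
@end example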
@@ -8885,7 +8885,7 @@ To use programs that depend on these libraries, you need 
to add @file{~/.local/l
 export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/name/.local/lib
 @end example
 
-If you also want to access the Info (see @ref{Info}) and man pages (see 
@ref{Man pages}) documentations add @file{~/.local/share/info} and 
@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the 
following convention: ``If the value of @command{INFOPATH} ends with a colon 
[or it isn't defined] ..., the initial list of directories is constructed by 
appending the build-time default to the value of @command{INFOPATH}.''
+If you also want to access the Info (see @ref{Info}) and man pages (see 
@ref{Man pages}) documentation, add @file{~/.local/share/info} and 
@file{~/.local/share/man} to your @command{INFOPATH}@footnote{Info has the 
following convention: ``If the value of @command{INFOPATH} ends with a colon 
[or it is not defined] ..., the initial list of directories is constructed by 
appending the build-time default to the value of @command{INFOPATH}.''
 So when installing in a non-standard directory and if @command{INFOPATH} was 
not initially defined, add a colon to the end of @command{INFOPATH} as shown 
below.
 Otherwise Info will not be able to find system-wide installed documentation:
 @*@command{echo 'export INFOPATH=$INFOPATH:/home/name/.local/share/info:' >> 
~/.bashrc}@*
@@ -8903,7 +8903,7 @@ export 
LD_LIBRARY_PATH=/home/name/.local/lib:$LD_LIBRARY_PATH
 @end example
 
 @noindent
-This is good when a library, for example CFITSIO, is already present on the 
system, but the system-wide install wasn't configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you don't have administrator or root access to update it on the whole 
system/server.
+This is good when a library, for example CFITSIO, is already present on the 
system, but the system-wide install was not configured with the correct 
configuration flags (see @ref{CFITSIO}), or you want to use a newer version and 
you do not have administrator or root access to update it on the whole 
system/server.
 If you update @file{LD_LIBRARY_PATH} by placing @file{~/.local/lib} first 
(like above), the linker will first find the CFITSIO you installed for yourself 
and link with it.
 It thus will never reach the system-wide installation.
 
@@ -8912,9 +8912,9 @@ So if you choose to search in your customized directory 
first, please @emph{be s
 
 @cartouche
 @noindent
-@strong{Summary:} When you are using a server which doesn't give you 
administrator/root access AND you would like to give priority to your own built 
programs and libraries, not the version that is (possibly already) present on 
the server, add these lines to your startup file.
+@strong{Summary:} When you are using a server which does not give you 
administrator/root access AND you would like to give priority to your own built 
programs and libraries, not the version that is (possibly already) present on 
the server, add these lines to your startup file.
 See above for which startup file is best for your case and for a detailed 
explanation on each.
-Don't forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for 
example `@file{/home/your-id}'):
+Do not forget to replace `@file{/YOUR-HOME-DIR}' with your home directory (for 
example `@file{/home/your-id}'):
 
 @example
 export PATH="/YOUR-HOME-DIR/.local/bin:$PATH"
@@ -8937,7 +8937,7 @@ Everything else will be the same as a standard build and 
install, see @ref{Quick
 @cindex Names of executables
 At first sight, the names of the executables for each program might seem to be 
uncommonly long, for example @command{astnoisechisel} or @command{astcrop}.
 We could have chosen terse (and cryptic) names like most programs do.
-We chose this complete naming convention (something like the commands in 
@TeX{}) so you don't have to spend too much time remembering what the name of a 
specific program was.
+We chose this complete naming convention (something like the commands in 
@TeX{}) so you do not have to spend too much time remembering what the name of 
a specific program was.
 Such complete names also enable you to easily search for the programs.
 
 @cindex Shell auto-complete
@@ -9080,15 +9080,15 @@ Here is the full list of options you can feed to this 
script to configure its op
 @noindent
 @strong{Not all Gnuastro's common program behavior usable here:}
 @file{developer-build} is just a non-installed script with a very limited 
scope as described above.
-It thus doesn't have all the common option behaviors or configuration files 
for example.
+It thus does not have all the common option behaviors or configuration files 
for example.
 @end cartouche
 
 @cartouche
 @noindent
 @strong{White space between option and value:} @file{developer-build}
-doesn't accept an @key{=} sign between the options and their values.
+does not accept an @key{=} sign between the options and their values.
 It also needs at least one white-space character between the option and its 
value.
-Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while 
@option{-n4}, @option{-n=4}, or @option{--numthreads=4} aren't.
+Therefore @option{-n 4} or @option{--numthreads 4} are acceptable, while 
@option{-n4}, @option{-n=4}, or @option{--numthreads=4} are not.
 Finally multiple short option names cannot be merged: for example you can say 
@option{-c -n 4}, but unlike Gnuastro's programs, @option{-cn4} is not 
acceptable.
 @end cartouche
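
Putting these rules together, a hypothetical call to this script (using 
options that are described below) might look like this:

@example
## Clean build in the default top build directory, using 4
## threads, and run 'make check' when the build finishes.
$ ./developer-build -c -n 4 -C
@end example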
 
@@ -9103,7 +9103,7 @@ Just copy it in the top source directory of that software 
and run it from there.
 @item -b STR
 @itemx --top-build-dir STR
 The top build directory, under which a directory will be made for this build.
-If this option isn't called, the top build directory is @file{/dev/shm} (only 
available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
+If this option is not called, the top build directory is @file{/dev/shm} (only 
available in GNU/Linux operating systems, see @ref{Configure and build in RAM}).
 
 @item -V
 @itemx --version
@@ -9121,7 +9121,7 @@ In Gnuastro's build system, the Git description will be 
used as the version, see
 @cindex GNU Autoreconf
 Delete the contents of the build directory (clean it) before starting the 
configuration and building of this run.
 
-This is useful when you have recently pulled changes from the main Git 
repository, or committed a change your self and ran @command{autoreconf -f}, 
see @ref{Synchronizing}.
+This is useful when you have recently pulled changes from the main Git 
repository, or committed a change yourself and ran @command{autoreconf -f}, see 
@ref{Synchronizing}.
 After running GNU Autoconf, the version will be updated and you need to do a 
clean build.
 
 @item -d
@@ -9152,7 +9152,7 @@ As the name suggests (Make has an identical option), the 
number given to this op
 @item -C
 @itemx --check
 After finishing the build, also run @command{make check}.
-By default, @command{make check} isn't run because the developer usually has 
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
+By default, @command{make check} is not run because the developer usually has 
their own checks to work on (for example defined in @file{tests/during-dev.sh}).
 
 @item -i
 @itemx --install
@@ -9161,7 +9161,7 @@ After finishing the build, also run @command{make 
install}.
 @item -D
 @itemx --dist
 Run @code{make dist-lzip pdf} to build a distribution tarball (in 
@file{.tar.lz} format) and a PDF manual.
-This can be useful for archiving, or sending to colleagues who don't use Git 
for an easy build and manual.
+This can be useful for archiving, or sending to colleagues who do not use Git 
for an easy build and manual.
 
 @item -u STR
 @itemx --upload STR
@@ -9215,7 +9215,7 @@ $ make check
 For every program some tests are designed to check some possible operations.
 Running the command above will run those tests and give you a final report.
 If everything is OK and you have built all the programs, all the tests should 
pass.
-In case any of the tests fail, please have a look at @ref{Known issues} and if 
that still doesn't fix your problem, look that the 
@file{./tests/test-suite.log} file to see if the source of the error is 
something particular to your system or more general.
+In case any of the tests fail, please have a look at @ref{Known issues} and if 
that still does not fix your problem, look at the 
@file{./tests/test-suite.log} file to see if the source of the error is 
something particular to your system or more general.
 If you feel it is general, please contact us because it might be a bug.
 Note that the tests of some programs depend on the outputs of other programs' 
tests, so if you have not installed them they might be skipped or fail.
 Prior to releasing every distribution all these tests are checked.
@@ -9323,13 +9323,13 @@ directory}.
 @command{$ make}: @emph{Complains about an unknown function on a non-GNU based 
operating system.}
 In this case, please run @command{$ ./configure} with the 
@option{--enable-gnulibcheck} option to see if the problem is from the GNU 
Portability Library (Gnulib) not supporting your system or if there is a 
problem in Gnuastro, see @ref{Gnuastro configure options}.
 If the problem is not in Gnulib and after all its tests you get the same 
complaint from @command{make}, then please contact us at 
@file{bug-gnuastro@@gnu.org}.
-The cause is probably that a function that we have used is not supported by 
your operating system and we didn't included it along with the source tar ball.
+The cause is probably that a function that we have used is not supported by 
your operating system and we did not include it along with the source tarball.
 If the function is available in Gnulib, it can be fixed immediately.
 
 @item
 @cindex @command{CPPFLAGS}
-@command{$ make}: @emph{Can't find the headers (.h files) of installed 
libraries.}
-Your C pre-processor (CPP) isn't looking in the right place.
+@command{$ make}: @emph{Cannot find the headers (.h files) of installed 
libraries.}
+Your C pre-processor (CPP) is not looking in the right place.
 To fix this, configure Gnuastro with an additional @code{CPPFLAGS} like below 
(assuming the library is installed in @file{/usr/local/include}):
 
 @example
@@ -9341,9 +9341,9 @@ If you want to use the libraries for your other 
programming projects, then expor
 @cindex Tests, only one passes
 @cindex @file{LD_LIBRARY_PATH}
 @item
-@command{$ make check}: @emph{Only the first couple of tests pass, all the 
rest fail or get skipped.}  It is highly likely that when searching for shared 
libraries, your system doesn't look into the @file{/usr/local/lib} directory 
(or wherever you installed Gnuastro or its dependencies).
+@command{$ make check}: @emph{Only the first couple of tests pass, all the 
rest fail or get skipped.}  It is highly likely that when searching for shared 
libraries, your system does not look into the @file{/usr/local/lib} directory 
(or wherever you installed Gnuastro or its dependencies).
 To make sure it is added to the list of directories, add the following line to 
your @file{~/.bashrc} file and restart your terminal.
-Don't forget to change @file{/usr/local/lib} if the libraries are installed in 
other (non-standard) directories.
+Do not forget to change @file{/usr/local/lib} if the libraries are installed 
in other (non-standard) directories.
 
 @example
 export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/lib"
@@ -9369,7 +9369,7 @@ A working @TeX{} program is also necessary, which you can 
get from Tex Live@foot
 @item
 @cindex GNU Libtool
 After @code{make check}: do not copy the programs' executables to another (for 
example, the installation) directory manually (using @command{cp}, or 
@command{mv} for example).
-In the default configuration@footnote{If you configure Gnuastro with the 
@option{--disable-shared} option, then the libraries will be statically linked 
to the programs and this problem won't exist, see @ref{Linking}.}, the program 
binaries need to link with Gnuastro's shared library which is also built and 
installed with the programs.
+In the default configuration@footnote{If you configure Gnuastro with the 
@option{--disable-shared} option, then the libraries will be statically linked 
to the programs and this problem will not exist, see @ref{Linking}.}, the 
program binaries need to link with Gnuastro's shared library which is also 
built and installed with the programs.
 Therefore, to run successfully before and after installation, linking 
modifications need to be made by GNU Libtool at installation time.
 @command{make install} does this internally, but a simple copy might give 
linking errors when you run it.
 If you need to copy the executables, you can do so after installation.
@@ -9436,12 +9436,12 @@ It is very natural to forget the options of a program, 
their current default val
 Gnuastro's programs have multiple ways to help you refresh your memory in 
multiple levels (just an option name, a short description, or fast access to 
the relevant section of the manual).
 See @ref{Getting help} for more on benefiting from this very convenient 
feature.
 
-Many of the programs use the multi-threaded character of modern CPUs, in 
@ref{Multi-threaded operations} we'll discuss how you can configure this 
behavior, along with some tips on making best use of them.
-In @ref{Numeric data types}, we'll review the various types to store numbers 
in your datasets: setting the proper type for the usage context@footnote{For 
example if the values in your dataset can only be integers between 0 or 65000, 
store them in a unsigned 16-bit type, not 64-bit floating point type (which is 
the default in most systems).
+Many of the programs use the multi-threaded character of modern CPUs; in 
@ref{Multi-threaded operations} we will discuss how you can configure this 
behavior, along with some tips on making the best use of them.
+In @ref{Numeric data types}, we will review the various types to store numbers 
in your datasets: setting the proper type for the usage context@footnote{For 
example, if the values in your dataset can only be integers between 0 and 
65000, store them in an unsigned 16-bit type, not a 64-bit floating point type 
(which is the default in most systems).
 It takes four times less space and is much faster to process.} can greatly 
improve the file size and also speed of reading, writing or processing them.
 
-We'll then look into the recognized table formats in @ref{Tables} and how 
large datasets are broken into tiles, or mesh grid in @ref{Tessellation}.
-Finally, we'll take a look at the behavior regarding output files: 
@ref{Automatic output} describes how the programs set a default name for their 
output when you don't give one explicitly (using @option{--output}).
+We will then look into the recognized table formats in @ref{Tables} and how 
large datasets are broken into tiles, or a mesh grid, in @ref{Tessellation}.
+Finally, we will take a look at the behavior regarding output files: 
@ref{Automatic output} describes how the programs set a default name for their 
output when you do not give one explicitly (using @option{--output}).
 When the output is a FITS file, all the programs also store some very useful 
information in the header that is discussed in @ref{Output FITS files}.
 
 @menu
@@ -9463,8 +9463,8 @@ When the output is a FITS file, all the programs also 
store some very useful inf
 
 Gnuastro's programs are customized through the standard Unix-like command-line 
environment and GNU style command-line options.
 Both are very common in many Unix-like operating system programs.
-In @ref{Arguments and options} we'll start with the difference between 
arguments and options and elaborate on the GNU style of options.
-Afterwards, in @ref{Common options}, we'll go into the detailed list of all 
the options that are common to all the programs in Gnuastro.
+In @ref{Arguments and options} we will start with the difference between 
arguments and options and elaborate on the GNU style of options.
+Afterwards, in @ref{Common options}, we will go into the detailed list of all 
the options that are common to all the programs in Gnuastro.
 
 @menu
 * Arguments and options::       Different ways to specify inputs and 
configuration.
@@ -9503,11 +9503,11 @@ Through options you can configure how the program runs 
(interprets the data you
 @vindex --help
 @vindex --usage
 @cindex Mandatory arguments
-Arguments can be mandatory and optional and unlike options, they don't have 
any identifiers.
+Arguments can be mandatory and optional and unlike options, they do not have 
any identifiers.
 Hence, when there are multiple arguments, their order might also matter (for 
example in @command{cp}, which is used for copying one file to another location).
 The outputs of @option{--usage} and @option{--help} show which arguments are 
optional and which are mandatory, see @ref{--usage}.
 
-As their name suggests, @emph{options} can be considered to be optional and 
most of the time, you don't have to worry about what order you specify them in.
+As their name suggests, @emph{options} can be considered to be optional and 
most of the time, you do not have to worry about what order you specify them in.
 When the order does matter, or the option can be invoked multiple times, it is 
explicitly mentioned in the ``Invoking ProgramName'' section of each program 
(this is a very important aspect of an option).
 
 @cindex Metacharacters on the command-line
In case your arguments or option values contain any of the shell's 
meta-characters, you have to quote them.
@@ -9551,7 +9551,7 @@ The recognized suffixes for these formats are listed 
there.
 
 The list below shows the recognized suffixes for FITS data files in Gnuastro's 
programs.
 However, in some scenarios FITS writers may not append a suffix to the file, 
or use a non-recognized suffix (not in the list below).
-Therefore if a FITS file is expected, but it doesn't have any of these 
suffixes, Gnuastro programs will look into the contents of the file and if it 
does conform with the FITS standard, the file will be used.
+Therefore if a FITS file is expected, but it does not have any of these 
suffixes, Gnuastro programs will look into the contents of the file and if it 
does conform with the FITS standard, the file will be used.
 Just note that checking about 5 characters at the end of a name string is much 
more efficient than opening and checking the contents of a file, so it is 
generally recommended to have a recognized FITS suffix.
 
 @itemize
@@ -9578,11 +9578,9 @@ Just note that checking about 5 characters at the end of 
a name string is much m
 
 Throughout this book and in the command-line outputs, whenever we want to 
generalize all such astronomical data formats in a text place-holder, we will 
use @file{ASTRdata}; we will assume that the extension is also part of this 
name.
 Any file ending with these names is directly passed on to CFITSIO to read.
-Therefore you don't necessarily have to have these files on your computer, 
they can also be located on an FTP or HTTP server too, see the CFITSIO manual 
for more information.
+Therefore you do not necessarily have to have these files on your computer; 
they can also be located on an FTP or HTTP server (see the CFITSIO manual 
for more information).
 
-CFITSIO has its own error reporting techniques, if your input file(s)
-cannot be opened, or read, those errors will be printed prior to the
-final error by Gnuastro.
+CFITSIO has its own error reporting techniques; if your input file(s) cannot 
be opened or read, those errors will be printed prior to the final error by 
Gnuastro.
 
 
 
@@ -9601,13 +9599,13 @@ Both formats are shown for those which support both.
 First the short format is shown, then the long.
 
 Usually, the short options are for when you are writing on the command-line 
and want to save keystrokes and time.
-The long options are good for shell scripts, where you aren't usually rushing.
+The long options are good for shell scripts, where you are not usually rushing.
 Long options provide a level of documentation, since they are more descriptive 
and less cryptic.
 Usually after a few months of not running a program, the short options will be 
forgotten and reading your previously written script will not be easy.
 
 @cindex On/Off options
 @cindex Options, on/off
-Some options need to be given a value if they are called and some don't.
+Some options need to be given a value if they are called and some do not.
 You can think of the latter type of options as on/off options.
 These two types of options can be distinguished using the output of the 
@option{--help} and @option{--usage} options, which are common to all GNU 
software, see @ref{Getting help}.
 In Gnuastro we use the following strings to specify when the option needs a 
value and what format that value should be in.
@@ -9638,10 +9636,10 @@ In many cases, other formats may also be accepted (for 
example input tables can
 @cindex Values to options
 @cindex Option values
 To specify a value in the short format, simply put the value after the option.
-Note that since the short options are only one character long, you don't have 
to type anything between the option and its value.
+Note that since the short options are only one character long, you do not have 
to type anything between the option and its value.
 For the long option you either need white space or an @option{=} sign, for 
example @option{-h2}, @option{-h 2}, @option{--hdu 2} or @option{--hdu=2} are 
all equivalent.
 
-The short format of on/off options (those that don't need values) can be 
concatenated for example these two hypothetical sequences of options are 
equivalent: @option{-a -b -c4} and @option{-abc4}.
+The short format of on/off options (those that do not need values) can be 
concatenated; for example, these two hypothetical sequences of options are 
equivalent: @option{-a -b -c4} and @option{-abc4}.
 As an example, consider the following command to run Crop:
 @example
 $ astcrop -Dr3 --wwidth 3 catalog.txt --deccol=4 ASTRdata
@@ -9667,7 +9665,7 @@ This is very useful when you are testing/experimenting.
 Let's say you want to make a small modification to one option value.
 You can simply type the option with a new value in the end of the command and 
see how the script works.
 If you are satisfied with the change, you can remove the original option for 
human readability.
-If the change wasn't satisfactory, you can remove the one you just added and 
not worry about forgetting the original value.
+If the change was not satisfactory, you can remove the one you just added and 
not worry about forgetting the original value.
 Without this capability, you would have to memorize or save the original value 
somewhere else, run the command and then change the value again, which is not 
at all convenient and can potentially cause lots of bugs.
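
A hypothetical sketch of this trick (following the description above, the 
value given later takes precedence):

@example
## Original command:
$ astnoisechisel image.fits --tilesize=100,100

## Test a new tile size by appending the option; the last value
## is the one that is used:
$ astnoisechisel image.fits --tilesize=100,100 --tilesize=70,70
@end example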
 
 On the other hand, some options can be called multiple times in one run of a 
program and can thus take multiple values (for example see the 
@option{--column} option in @ref{Invoking asttable}).
@@ -9675,9 +9673,9 @@ In these cases, the order of stored values is the same 
order that you specified
 
 @cindex Configuration files
 @cindex Default option values
-Gnuastro's programs don't keep any internal default values, so some options 
are mandatory and if they don't have a value, the program will complain and 
abort.
+Gnuastro's programs do not keep any internal default values, so some options 
are mandatory and if they do not have a value, the program will complain and 
abort.
 Most programs have many such options and typing them by hand on every call is 
impractical.
-To facilitate the user experience, after parsing the command-line, Gnuastro's 
programs read special configuration files to get the necessary values for the 
options you haven't identified on the command-line.
+To facilitate the user experience, after parsing the command-line, Gnuastro's 
programs read special configuration files to get the necessary values for the 
options you have not identified on the command-line.
 These configuration files are fully described in @ref{Configuration files}.
 
 @cartouche
@@ -9706,7 +9704,7 @@ You can assume by default that counting starts from 1, if 
it starts from 0 for a
 @cindex Gnuastro common options
 To facilitate the job of the users and developers, all the programs in 
Gnuastro share some basic command-line options for the features that are common 
to many of the programs.
 The full list is classified as @ref{Input output options}, @ref{Processing 
options}, and @ref{Operating mode options}.
-In some programs, some of the options are irrelevant, but still recognized 
(you won't get an unrecognized option error, but the value isn't used).
+In some programs, some of the options are irrelevant, but still recognized 
(you will not get an unrecognized option error, but the value is not used).
 Unless otherwise mentioned, these options are identical between all programs.
 
 @menu
@@ -9728,7 +9726,7 @@ programs.
 @item --stdintimeout
 Number of micro-seconds to wait for writing/typing in the @emph{first line} of 
standard input from the command-line (see @ref{Standard input}).
 This is only relevant for programs that also accept input from the standard 
input, @emph{and} you want to manually write/type the contents on the terminal.
-When the standard input is already connected to a pipe (output of another 
program), there won't be any waiting (hence no timeout, thus making this option 
redundant).
+When the standard input is already connected to a pipe (output of another 
program), there will not be any waiting (hence no timeout, thus making this 
option redundant).
 
 If the first line-break (for example with the @key{ENTER} key) is not provided 
before the timeout, the program will abort with an error that no input was 
given.
 Note that this time interval is @emph{only} for the first line that you type.
@@ -9737,7 +9735,7 @@ You need to specify the ending of the standard input, for 
example by pressing @k
 
 Note that any input you write/type into a program on the command-line with 
Standard input will be discarded (lost) once the program is finished.
 It is only recoverable manually from your command-line (where you actually 
typed) as long as the terminal is open.
-So only use this feature when you are sure that you don't need the dataset (or 
have a copy of it somewhere else).
+So only use this feature when you are sure that you do not need the dataset 
(or have a copy of it somewhere else).
 
 
 @cindex HDU
@@ -9756,7 +9754,7 @@ A @code{#} is appended to the string you specify for the 
HDU@footnote{With the @
 
 @item -s STR
 @itemx --searchin=STR
-Where to match/search for columns when the column identifier wasn't a number, 
see @ref{Selecting table columns}.
+Where to match/search for columns when the column identifier was not a number, 
see @ref{Selecting table columns}.
 The acceptable values are @command{name}, @command{unit}, or @command{comment}.
 This option is only relevant for programs that take table columns as input.
 
@@ -9775,7 +9773,7 @@ The name of the output file or directory. With this 
option the automatic output
 @item -T STR
 @itemx --type=STR
 The data type of the output depending on the program context.
-This option isn't applicable to some programs like @ref{Fits} and will be 
ignored by them.
+This option is not applicable to some programs like @ref{Fits} and will be 
ignored by them.
 The different acceptable values to this option are fully described in 
@ref{Numeric data types}.
 
 @item -D
@@ -9785,7 +9783,7 @@ When this option is activated, if the output file already 
exists, the programs w
 
 @item -K
 @itemx --keepinputdir
-In automatic output names, don't remove the directory information of the input 
file names.
+In automatic output names, do not remove the directory information of the 
input file names.
 As explained in @ref{Automatic output}, if no output name is specified (with 
@option{--output}), then the output name will be made in the existing directory 
based on your input's file name (ignoring the directory of the input).
 If you call this option, the directory information of the input will be kept 
and the automatically generated output name will be in the same directory as 
the input (usually with a suffix added).
 Note that this is only relevant if you are running the program in a different 
directory than the input data.
@@ -9808,7 +9806,7 @@ For more on the different formalisms, please see Section 
8.1 of the FITS standar
 In short, in the @code{PCi_j} formalism, we only keep the linear rotation 
matrix in these keywords and put the scaling factor (or the pixel scale in 
astronomical imaging) in the @code{CDELTi} keywords.
 In the @code{CDi_j} formalism, we blend the scaling and the rotation into a 
single matrix and keep that matrix in these FITS keywords.
 By default, Gnuastro uses the @code{PCi_j} formalism, because it greatly helps 
in human readability of the raw keywords and is also the default mode of WCSLIB.
-However, in some circumstances it may be necessary to have the keywords in the 
CD format; for example when you need to feed the outputs into other software 
that don't follow the full FITS standard and only recognize the @code{CDi_j} 
formalism.
+However, in some circumstances it may be necessary to have the keywords in the 
CD format; for example when you need to feed the outputs into other software 
that do not follow the full FITS standard and only recognize the @code{CDi_j} 
formalism.
 
 @table @command
 @item txt
@@ -9853,7 +9851,7 @@ If the kernel capacity is exceeded, the program will 
crash.
 @end cartouche
 
 @item --quietmmap
-Don't print any message when an array is stored in non-volatile memory
+Do not print any message when an array is stored in non-volatile memory
 (HDD/SSD) and not RAM, see the description of @option{--minmapsize} (above)
 for more.
 
@@ -9899,7 +9897,7 @@ Hence, in such cases, the image proportions are going to 
be slightly different w
 If your input image is not exactly divisible by the tile size and you want one 
value per tile for some higher-level processing, all is not lost though.
 You can see how many pixels were within each tile (for example to weight the 
values or discard some for later processing) with Gnuastro's Statistics (see 
@ref{Statistics}) as shown below.
 The output FITS file is going to have two extensions, one with the median 
calculated on each tile and one with the number of elements that each tile 
covers.
-You can then use the @code{where} operator in @ref{Arithmetic} to set the 
values of all tiles that don't have the regular area to a blank value.
+You can then use the @code{where} operator in @ref{Arithmetic} to set the 
values of all tiles that do not have the regular area to a blank value.
 
 @example
 $ aststatistics --median --number --ontile input.fits    \
@@ -9947,7 +9945,7 @@ The explanation for those that are not only limited to 
Gnuastro but are common t
 (GNU option) Stop parsing the command-line.
 This option can be useful in scripts or when using the shell history.
 Suppose you have a long list of options, and want to see if removing some of 
them (to read from configuration files, see @ref{Configuration files}) can give 
a better result.
-If the ones you want to remove are the last ones on the command-line, you 
don't have to delete them, you can just add @option{--} before them and if you 
don't get what you want, you can remove the @option{--} and get the same 
initial result.
+If the ones you want to remove are the last ones on the command-line, you do 
not have to delete them; you can just add @option{--} before them, and if you 
do not get what you want, you can remove the @option{--} and get the same 
initial result.
 
 @item --usage
 (GNU option) Only print the options and arguments and abort.
@@ -9969,7 +9967,7 @@ The program will not run.
 
 @item -q
 @itemx --quiet
-Don't report steps.
+Do not report steps.
 All the programs in Gnuastro that have multiple major steps will report their 
steps for you to follow while they are operating.
 If you do not want to see these reports, you can call this option and only 
error/warning messages will be printed.
 If the steps are done very fast (depending on the properties of your input) 
disabling these reports will also decrease running time.
@@ -10005,7 +10003,7 @@ The otherwise mandatory arguments for each program (for 
example input image or c
 Parse @option{STR} as a configuration file name, immediately when this option 
is confronted (see @ref{Configuration files}).
 The @option{--config} option can be called multiple times in one run of any 
Gnuastro program on the command-line or in the configuration files.
 In any case, it will be immediately read (before parsing the rest of the 
options on the command-line, or lines in a configuration file).
-If the given file doesn't exist or can't be read for any reason, the program 
will print a warning and continue its processing.
+If the given file does not exist or cannot be read for any reason, the program 
will print a warning and continue its processing.
 The warning can be suppressed with @option{--quiet}.
 
 Note that by definition, options on the command-line still take precedence 
over those in any configuration file, including the file(s) given to this 
option if they are called before it.
@@ -10024,7 +10022,7 @@ This is a very good option to confirm where the value 
of each option is has been
 @itemx --setdirconf
 Update the current directory configuration file for the Gnuastro program and 
quit.
 The full set of command-line and configuration file options will be parsed and 
options with a value will be written in the current directory configuration 
file for this program (see @ref{Configuration files}).
-If the configuration file or its directory doesn't exist, it will be created.
+If the configuration file or its directory does not exist, it will be created.
 If a configuration file exists, it will be replaced (after it and all other 
configuration files have been read).
 In any case, the program will not run.
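
For example (a sketch), the command below would store @option{--snquant=0.95}, 
along with all other option values visible to NoiseChisel, in the current 
directory's configuration file and quit without processing anything:

@example
$ astnoisechisel --snquant=0.95 --setdirconf
@end example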
 
@@ -10044,7 +10042,7 @@ See explanation under @option{--setdirconf} for more 
details.
 @item --lastconfig
 This is the last configuration file that must be read.
 When this option is confronted in any stage of reading the options (on the 
command-line or in a configuration file), no other configuration file will be 
parsed, see @ref{Configuration file precedence} and @ref{Current directory and 
User wide}.
-Like all on/off options, on the command-line, this option doesn't take any 
values.
+Like all on/off options, on the command-line, this option does not take any 
values.
 But in a configuration file, it takes the values of @option{0} or @option{1}, 
see @ref{Configuration file format}.
 If it is present in a configuration file with a value of @option{0}, then all 
later occurrences of this option will be ignored.
 
@@ -10059,7 +10057,7 @@ This is useful if you want your results to be exactly 
reproducible and not mista
 Besides internal algorithmic/behavior changes in programs, the existence of 
options or their names might change between versions (especially in these 
earlier versions of Gnuastro).
 
 Hence, when using this option (probably in a script or in a configuration 
file), be sure to call it before other options.
-The benefit is that, when the version differs, the other options won't be 
parsed and you, or your collaborators/users, won't get errors saying an option 
in your configuration doesn't exist in the running version of the program.
+The benefit is that, when the version differs, the other options will not be 
parsed and you, or your collaborators/users, will not get errors saying an 
option in your configuration does not exist in the running version of the 
program.
 
 Here is one example of how this option can be used in conjunction with the 
@option{--lastconfig} option.
 Let's assume that you were satisfied with the results of this command: 
@command{astnoisechisel image.fits --snquant=0.95} (along with various options 
set in various configuration files).
@@ -10072,11 +10070,11 @@ $ astnoisechisel image.fits --snquant=0.95 -P         
   \
                                      >> reproducible.conf
 @end example
 
-@option{--onlyversion} was available from Gnuastro 0.0, so putting it 
immediately at the start of a configuration file will ensure that later, you 
(or others using different version) won't get a non-recognized option error in 
case an option was added/removed.
+@option{--onlyversion} was available from Gnuastro 0.0, so putting it 
immediately at the start of a configuration file will ensure that later, you 
(or others using a different version) will not get a non-recognized option 
error in case an option was added/removed.
 @option{--lastconfig} will inform the installed NoiseChisel to not parse any 
other configuration files.
-This is done because we don't want the user's user-wide or system wide option 
values affecting our results.
+This is done because we do not want the user's user-wide or system wide option 
values affecting our results.
 Finally, with the third command, which has a @option{-P} (short for 
@option{--printparams}), NoiseChisel will print all the option values visible 
to it (in all the configuration files) and the shell will append them to 
@file{reproducible.conf}.
-Hence, you don't have to worry about remembering the (possibly) different 
options in the different configuration files.
+Hence, you do not have to worry about remembering the (possibly) different 
options in the different configuration files.
 
 Afterwards, if you run NoiseChisel as shown below (telling it to read this 
configuration file with the @option{--config} option), you can be sure that 
there will either be an error (for version mismatch) or that it will produce 
exactly the same result that you got before.
@@ -10088,11 +10086,11 @@ $ astnoisechisel --config=reproducible.conf
 @item --log
 Some programs can generate extra information about their outputs in a log file.
 When this option is called in those programs, the log file will also be 
printed.
-If the program doesn't generate a log file, this option is ignored.
+If the program does not generate a log file, this option is ignored.
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: The log file usually has a fixed 
name.
+@strong{@option{--log} is not thread-safe}: The log file usually has a fixed 
name.
 Therefore if two simultaneous calls (with @option{--log}) of a program are 
made in the same directory, the program will try to write to the same file.
 This will cause problems like an unreasonable log file, undefined behavior, 
or a crash.
 @end cartouche
@@ -10147,7 +10145,7 @@ As another example, the option @option{--numthreads} 
option (to specify the numb
 To activate Gnuastro's custom TAB completion in Bash, you need to put the 
following line in one of your Bash startup files (for example @file{~/.bashrc}).
 If you installed Gnuastro using the steps of @ref{Quick start}, you should 
have already done this (the command just after @command{sudo make install}).
 For a list of (and discussion on) Bash startup files and installation 
directories see @ref{Installation directory}.
-Of course, if Gnuastro was installed in a custom location, replace the 
`@file{/usr/local}' part of the line below to the value that was given to 
@option{--prefix} during Gnuastro's configuration@footnote{In case you don't 
know the installation directory of Gnuastro on your system, you can find out 
with this command: @code{which astfits | sed -e"s|/bin/astfits||"}}.
+Of course, if Gnuastro was installed in a custom location, replace the 
`@file{/usr/local}' part of the line below to the value that was given to 
@option{--prefix} during Gnuastro's configuration@footnote{In case you do not 
know the installation directory of Gnuastro on your system, you can find out 
with this command: @code{which astfits | sed -e"s|/bin/astfits||"}}.
 
 @example
 # Enable Gnuastro's TAB completion
@@ -10162,7 +10160,7 @@ The first will only suggest the files with a table, and 
the second, only those w
 @noindent
 @strong{TAB completion only works with long option names:}
 As described above, short options are much more complex to generalize, 
therefore TAB completion is only available for long options.
-But don't worry!
+But do not worry!
 TAB completion also involves option names, so if you just type 
@option{--a[TAB][TAB]}, you will get the list of options that start with 
@option{--a}.
 Therefore as a side-effect of TAB completion, your commands will be far more 
human-readable with minimal key strokes.
 @end cartouche
@@ -10220,7 +10218,7 @@ If data is present in the standard input stream, it is 
used.
 When the standard input is empty, the program will wait 
@option{--stdintimeout} micro-seconds for you to manually enter the first line 
(ending with a new-line character, or the @key{ENTER} key, see @ref{Input 
output options}).
 If it detects the first line in this time, there is no more time limit, and 
you can manually write/type all the lines for as long as it takes.
 To inform the program that Standard input has finished, press @key{CTRL-D} 
after a new line.
-If the program doesn't catch the first line before the time-out finishes, it 
will abort with an error saying that no input was provided.
+If the program does not catch the first line before the time-out finishes, it 
will abort with an error saying that no input was provided.
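
As a sketch, both commands below feed a one-row table to the Table program 
through the standard input; in the second, you type the values manually:

@example
## Through a pipe (no time-out concern):
$ echo "1 2" | asttable

## Manually: type the first line before the time-out,
## then press CTRL-D after a new line to finish.
$ asttable
@end example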
 
 @cartouche
 @noindent
@@ -10260,8 +10258,8 @@ The thing to have in mind is that none of the programs 
in Gnuastro keep any inte
 All the values must either be stored in one of the configuration files or 
explicitly called in the command-line.
 In case the necessary parameters are not given through any of these methods, 
the program will print a missing option error and abort.
 The only exception to this is @option{--numthreads}, whose default value is 
determined at run-time using the number of threads available to your system, 
see @ref{Multi-threaded operations}.
-Of course, you can still provide a default value for the number of threads at 
any of the levels below, but if you don't, the program will not abort.
-Also note that through automatic output name generation, the value to the 
@option{--output} option is also not mandatory on the command-line or in the 
configuration files for all programs which don't rely on that value as an 
input@footnote{One example of a program which uses the value given to 
@option{--output} as an input is ConvertType, this value specifies the type of 
the output through the value to @option{--output}, see @ref{Invoking 
astconvertt}.}, see @ref{Automatic output}.
+Of course, you can still provide a default value for the number of threads at 
any of the levels below, but if you do not, the program will not abort.
+Also note that through automatic output name generation, the value to the 
@option{--output} option is also not mandatory on the command-line or in the 
configuration files for all programs which do not rely on that value as an 
input@footnote{One example of a program which uses the value given to 
@option{--output} as an input is ConvertType, this value specifies the type of 
the output through the value to @option{--output}, see @ref{Invoking 
astconvertt}.}, see @ref{Automatic output}.
 
 
 
@@ -10360,13 +10358,13 @@ Recall that the file given to @option{--config} is 
parsed immediately when this
 
 @item --lastconfig
 Allows you to stop the parsing of subsequent configuration files.
-Note that if this option is given in a configuration file, it will be fully 
read, so its position in the configuration doesn't matter (unlike 
@option{--config}).
+Note that if this option is given in a configuration file, it will be fully 
read, so its position in the configuration does not matter (unlike 
@option{--config}).
 @end table
 @end cartouche
 
 One example of benefiting from these configuration files can be this: raw 
telescope images usually have their main image extension in the second FITS 
extension, while processed FITS images usually only have one extension.
 If your system-wide default input extension is 0 (the first), then when you 
want to work with the former group of data you have to explicitly mention it to 
the programs every time.
-With this progressive state of default values to check, you can set different 
default values for the different directories that you would like to run 
Gnuastro in for your different purposes, so you won't have to worry about this 
issue any more.
+With this progressive state of default values to check, you can set different 
default values for the different directories that you would like to run 
Gnuastro in for your different purposes, so you will not have to worry about 
this issue any more.
 
 The same can be said about the @file{gnuastro.conf} files: by specifying a 
behavior in this single file, all Gnuastro programs in the respective 
directory, user, or system-wide steps will behave similarly.
 For example to keep the input's directory when no specific output is given 
(see @ref{Automatic output}), or to not delete an existing file if it has the 
same name as a given output (see @ref{Input output options}).
@@ -10403,11 +10401,11 @@ For more details on @file{prefix}, see 
@ref{Installation directory} (by default
 This directory is the final place (with the lowest priority) that the programs 
in Gnuastro will check to retrieve parameter values.
 
 If you remove an option and its value from the system wide configuration 
files, you either have to specify it in more immediate configuration files or 
set it each time in the command-line.
-Recall that none of the programs in Gnuastro keep any internal default values 
and will abort if they don't find a value for the necessary parameters (except 
the number of threads and output file name).
-So even though you might never expect to use an optional option, it safe to 
have it available in this system-wide configuration file even if you don't 
intend to use it frequently.
+Recall that none of the programs in Gnuastro keep any internal default values 
and will abort if they do not find a value for the necessary parameters (except 
the number of threads and output file name).
+So even though you might never expect to use an optional option, it is safe 
to have it available in this system-wide configuration file even if you do not 
intend to use it frequently.
 
 Note that in case you install Gnuastro from your distribution's repositories, 
@file{prefix} will either be set to @file{/} (the root directory) or 
@file{/usr}, so you can find the system wide configuration variables in 
@file{/etc/} or @file{/usr/etc/}.
-The prefix of @file{/usr/local/} is conventionally used for programs you 
install from source by your self as in @ref{Quick start}.
+The prefix of @file{/usr/local/} is conventionally used for programs you 
install from source by yourself as in @ref{Quick start}.
 
 
 
@@ -10440,7 +10438,7 @@ With this type of help, you can resume your exciting 
research without taking you
 @cindex Installed help methods
 Another major advantage of such command-line based help routines is that they 
are installed with the software in your computer, therefore they are always in 
sync with the executable you are actually running.
 Three of them are actually part of the executable.
-You don't have to worry about the version of the book or program.
+You do not have to worry about the version of the book or program.
 If you rely on external help (a PDF in your personal print or digital archive 
or HTML from the official web page) you have to check to see if their versions 
fit with your installed program.
 
 If you only need to remember the short or long names of the options, 
@option{--usage} is advised.
@@ -10480,7 +10478,7 @@ Usage: astcrop [-Do?IPqSVW] [-d INT] [-h INT] [-r INT] 
[-w INT]
 @end example
 
 There are no explanations on the options, just their short and long names 
shown separately.
-After the program name, the short format of all the options that don't require 
a value (on/off options) is displayed.
+After the program name, the short format of all the options that do not 
require a value (on/off options) is displayed.
 Those that do require a value then follow in separate brackets, each 
displaying the format of the input they want, see @ref{Options}.
 Since all options are optional, they are shown in square brackets, but 
arguments can also be optional.
 In this example, a catalog name is optional and is only required in some 
modes.
@@ -10515,7 +10513,7 @@ If you are using a graphical user interface terminal 
emulator, you can scroll th
 @cindex @key{Shift + PageUP} and @key{Shift + PageDown}
 @key{Shift + PageUP} to scroll up and @key{Shift + PageDown} to scroll down.
 For most help output this should be enough.
-The problem is that it is limited by the number of lines that your terminal 
keeps in memory and that you can't scroll by lines, only by whole screens.
+The problem is that it is limited by the number of lines that your terminal 
keeps in memory and that you cannot scroll by lines, only by whole screens.
 
 @item
 @cindex Pipe
@@ -10546,7 +10544,7 @@ $ astnoisechisel --help > filename.txt
 @cindex GNU Grep
 @cindex Searching text
 @cindex Command-line searching text
-In case you have a special keyword you are looking for in the help, you don't 
have to go through the full list.
+In case you have a special keyword you are looking for in the help, you do not 
have to go through the full list.
 GNU Grep is made for this job.
 For example if you only want the list of options whose @option{--help} output 
contains the word ``axis'' in Crop, you can run the following command:
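
A sketch of that command (the original example may differ in formatting):

@example
$ astcrop --help | grep axis
@end example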
 
@@ -10611,7 +10609,7 @@ $ info executablename
 @cindex Learning GNU Info
 @cindex GNU software documentation
 In case you are not already familiar with it, run @command{$ info info}.
-It does a fantastic job in explaining all its capabilities its self.
+It does a fantastic job in explaining all its capabilities itself.
 It is very short and you will become sufficiently fluent in about half an hour.
 Since all GNU software documentation is also provided in Info, your whole 
GNU/Linux life will significantly improve.
 
@@ -10663,7 +10661,7 @@ However, when contacting this mailing list please have 
in mind that they are pos
 To ask a question from this mailing list, send a mail to 
@code{help-gnuastro@@gnu.org}.
 Anyone can view the mailing list archives at 
@url{http://lists.gnu.org/archive/html/help-gnuastro/}.
 It is best that before sending a mail, you search the archives to see if 
anyone has asked a question similar to yours.
-If you want to make a suggestion or report a bug, please don't send a mail to 
this mailing list.
+If you want to make a suggestion or report a bug, please do not send a mail to 
this mailing list.
 We have other mailing lists and tools for those purposes, see @ref{Report a 
bug} or @ref{Suggest new feature}.
 
 
@@ -10693,7 +10691,7 @@ On a GNU/Linux system, the number of threads available 
can be found with the com
 @cindex Internally stored option value
 Gnuastro's programs can find the number of threads available to your system 
internally at run-time (when you execute the program).
 However, if a value is given to the @option{--numthreads} option, the given 
number will be used, see @ref{Operating mode options} and @ref{Configuration 
files} for ways to use this option.
-Thus @option{--numthreads} is the only common option in Gnuastro's programs 
with a value that doesn't have to be specified anywhere on the command-line or 
in the configuration files.
+Thus @option{--numthreads} is the only common option in Gnuastro's programs 
with a value that does not have to be specified anywhere on the command-line or 
in the configuration files.
 
 @menu
 * A note on threads::           Caution and suggestion on using threads.
@@ -10707,7 +10705,7 @@ Thus @option{--numthreads} is the only common option in 
Gnuastro's programs with
 @cindex Best use of CPU threads
 @cindex Efficient use of CPU threads
 Spinning off threads is not necessarily the most efficient way to run an 
application.
-Creating a new thread isn't a cheap operation for the operating system.
+Creating a new thread is not a cheap operation for the operating system.
 It is most useful when the input data are fixed and you want the same 
operation to be done on parts of it.
 For example one input image to Crop and multiple crops from various parts of 
it.
 In this fashion, the image is loaded into memory once, all the crops are 
divided between the number of threads internally and each thread cuts out those 
parts which are assigned to it from the same image.
@@ -10716,7 +10714,7 @@ On the other hand, if you have multiple images and you 
want to crop the same reg
 @cindex Wall-clock time
 You can check the boost in speed by running a program on one of the data 
sets twice: once with the maximum number of threads and once (with everything 
else the same) using only one thread.
 You will notice that the wall-clock time (reported by most programs at their 
end) in the former is longer than the latter divided by the number of physical 
CPU cores (not threads) available to your operating system.
-Asymptotically these two times can be equal (most of the time they aren't).
+Asymptotically these two times can be equal (most of the time they are not).
 So limiting the programs to use only one thread and running them independently 
on the number of available threads will be more efficient.
 
 @cindex System Cache
@@ -10838,7 +10836,7 @@ In the `signed' convention, one bit is reserved for the 
sign (stating that the i
 The `unsigned' integers use that bit in the actual number and thus contain 
only positive numbers (starting from zero).
 
 Therefore, at the same number of bits, both signed and unsigned integers can 
allow the same number of integers, but the positive limit of the 
@code{unsigned} types is double their @code{signed} counterparts with the same 
width (at the expense of not having negative numbers).
-When the context of your work doesn't involve negative numbers (for example 
counting, where negative is not defined), it is best to use the @code{unsigned} 
types.
+When the context of your work does not involve negative numbers (for example 
counting, where negative is not defined), it is best to use the @code{unsigned} 
types.
 For the full numerical range of all integer types, see below.
 
 Another standard of converting a given number of bits to numbers is the 
floating point standard; this standard can @emph{approximately} store any real 
number with a given precision.
@@ -10921,7 +10919,7 @@ This is usually good for processing (mixing) the data 
internally, for example a
 
 @cartouche
 @noindent
-@strong{Some file formats don't recognize all types.} For example the FITS 
standard (see @ref{Fits}) does not define @code{uint64} in binary tables or 
images.
+@strong{Some file formats do not recognize all types.} For example the FITS 
standard (see @ref{Fits}) does not define @code{uint64} in binary tables or 
images.
 When a type is not acceptable for output into a given file format, the 
respective Gnuastro program or library will let you know and abort.
 On the command-line, you can convert the numerical type of an image, or table 
column into another type with @ref{Arithmetic} or @ref{Table} respectively.
 If you are writing your own program, you can use the 
@code{gal_data_copy_to_new_type()} function in Gnuastro's library, see 
@ref{Copying datasets}.
@@ -10935,12 +10933,12 @@ If you are writing your own program, you can use the 
@code{gal_data_copy_to_new_
 @cindex Memory management
 @cindex Non-volatile memory
 @cindex Memory, non-volatile
-In this section we'll review how Gnuastro manages your input data in your 
system's memory.
+In this section we will review how Gnuastro manages your input data in your 
system's memory.
 Knowing this can help you optimize your usage (in speed and memory 
consumption) when the data volume is large and approaches, or exceeds, your 
available RAM (usually in various calls to multiple programs simultaneously).
 But before diving into the details, let's have a short basic introduction to 
memory in general and in particular the types of memory most relevant to this 
discussion.
 
 Input datasets (that are later fed into programs for analysis) are commonly 
first stored in @emph{non-volatile memory}.
-This is a type of memory that doesn't need a constant power supply to keep the 
data and is therefore primarily aimed for long-term storage, like HDDs or SSDs.
+This is a type of memory that does not need a constant power supply to keep 
the data and is therefore primarily aimed for long-term storage, like HDDs or 
SSDs.
 So data in this type of storage is preserved when you turn off your computer.
 But by its nature, non-volatile memory is much slower, in reading or writing, 
than the speeds that CPUs can process the data.
 Thus relying on this type of memory alone would create a bad bottleneck in the 
input/output (I/O) phase of any processing.
@@ -10967,8 +10965,8 @@ This allows Gnuastro programs to minimize their usage 
of your system's RAM over
 
 The situation gets complicated when the datasets are large (compared to your 
available RAM when the program is run).
 For example, assume a dataset is half the size of your system's available 
RAM, and the program's internal analysis needs three or more intermediately 
processed copies of it at one moment in its analysis.
-There won't be enough RAM to keep those higher-level intermediate data.
-In such cases, programs that don't do any memory management will crash.
+There will not be enough RAM to keep those higher-level intermediate data.
+In such cases, programs that do not do any memory management will crash.
 But fortunately Gnuastro's programs do have a memory management plan for 
such situations.
 
 @cindex Memory-mapped file
@@ -11008,8 +11006,8 @@ In the event of a crash, the memory-mapped files will 
not be deleted and you hav
 This brings us to managing the memory-mapped files in your non-volatile memory.
 In other words: knowing where they are saved, or intentionally placing them in 
different places of your file system, or deleting them when necessary.
 As the examples above show, memory-mapped files are stored in a sub-directory 
of the running directory called @file{gnuastro_mmap}.
-If this directory doesn't exist, Gnuastro will automatically create it when 
memory mapping becomes necessary.
-Alternatively, it may happen that the @file{gnuastro_mmap} sub-directory 
exists and isn't writable, or it can't be created.
+If this directory does not exist, Gnuastro will automatically create it when 
memory mapping becomes necessary.
+Alternatively, it may happen that the @file{gnuastro_mmap} sub-directory 
exists and is not writable, or it cannot be created.
 In such cases, the memory-mapped file for each dataset will be created in the 
running directory with a @file{gnuastro_mmap_} prefix.
 
 Therefore one easy way to delete all memory-mapped files in case of a crash 
is to delete everything within the sub-directory (first command below), or all 
files starting with this prefix:
@@ -11033,24 +11031,24 @@ All you have to do is to run the following command 
before your Gnuastro analysis
 ln -s /path/to/dir/for/mmap gnuastro_mmap
 @end example
 
-The programs will delete a memory-mapped file when it is no longer needed, but 
they won't delete the @file{gnuastro_mmap} directory that hosts them.
+The programs will delete a memory-mapped file when it is no longer needed, but 
they will not delete the @file{gnuastro_mmap} directory that hosts them.
 So if your project involves many Gnuastro programs (possibly called in 
parallel) and you want your memory-mapped files to be in a different location, 
you just have to make the symbolic link above once at the start, and all the 
programs will use it if necessary.
 
-Another memory-management scenario that may happen is this: you don't want a 
Gnuastro program to allocate internal datasets in the RAM at all.
-For example the speed of your Gnuastro-related project doesn't matter at that 
moment, and you have higher-priority jobs that are being run at the same time 
which need to have RAM available.
+Another memory-management scenario that may happen is this: you do not want a 
Gnuastro program to allocate internal datasets in the RAM at all.
+For example the speed of your Gnuastro-related project does not matter at that 
moment, and you have higher-priority jobs that are being run at the same time 
which need to have RAM available.
 In such cases, you can use the @option{--minmapsize} option that is available 
in all Gnuastro programs (see @ref{Processing options}).
 Any intermediate dataset that has a size larger than the value of this option 
will be memory-mapped, even if there is space available in your RAM.
 For example if you want any dataset larger than 100 megabytes to be 
memory-mapped, use @option{--minmapsize=100000000} (8 zeros!).
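
For example (a sketch with a hypothetical input):

@example
## Memory-map any intermediate dataset larger than ~100 megabytes:
$ astnoisechisel image.fits --minmapsize=100000000
@end example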
 
 @cindex Linux kernel
 @cindex Kernel, Linux
-You shouldn't set the value of @option{--minmapsize} to be too small, 
otherwise even small intermediate values (that are usually very numerous) in 
the program will be memory-mapped.
+You should not set the value of @option{--minmapsize} to be too small, 
otherwise even small intermediate values (that are usually very numerous) in 
the program will be memory-mapped.
 However the kernel can only host a limited number of memory-mapped files at 
every moment (by all running programs combined).
 For example in the default@footnote{If you need to host more memory-mapped 
files at one moment, you need to build your own customized Linux kernel.} Linux 
kernel on GNU/Linux operating systems this limit is roughly 64000.
 If the total number of memory-mapped files exceeds this number, all the 
programs using them will crash.
 Gnuastro's programs will warn you if your given value is too small and may 
cause a problem later.
 
-Actually, the default behavior for Gnuastro's programs (to only use 
memory-mapped files when there isn't enough RAM) is a side-effect of 
@option{--minmapsize}.
+Actually, the default behavior for Gnuastro's programs (to only use 
memory-mapped files when there is not enough RAM) is a side-effect of 
@option{--minmapsize}.
 The pre-defined value to this option is an extremely large value in the 
lowest-level Gnuastro configuration file (the installed @file{gnuastro.conf} 
described in @ref{Configuration file precedence}).
 This value is larger than the largest possible available RAM.
 You can check by running any Gnuastro program with a @option{-P} option.
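
For example (a sketch; any program can be used):

@example
$ astfits -P | grep minmapsize
@end example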
@@ -11115,14 +11113,14 @@ It is fully described in @ref{Gnuastro text table 
format}.
 @item FITS ASCII tables
 The FITS ASCII table extension is fully in ASCII encoding and thus easily 
readable on any text editor (assuming it is the only extension in the FITS 
file).
 If the FITS file also contains binary extensions (for example an image or 
binary table extensions), then there will be many hard to print characters.
-The FITS ASCII format doesn't have new line characters to separate rows.
+The FITS ASCII format does not have new line characters to separate rows.
 In the FITS ASCII table standard, each row is defined as a fixed number of 
characters (value to the @code{NAXIS1} keyword), so to visually inspect it 
properly, you would have to adjust your text editor's width to this value.
 All columns start at given character positions and have a fixed width (number 
of characters).
 
 Numbers in a FITS ASCII table are printed in ASCII format; they are not in 
binary (that the CPU uses).
 Hence, they can take a larger space in memory, lose their precision, and take 
longer to read into memory.
 If you are dealing with integer type columns (see @ref{Numeric data types}), 
another issue with FITS ASCII tables is that the type information for the 
column will be lost (there is only one integer type in FITS ASCII tables).
-One problem with the binary format on the other hand is that it isn't portable 
(different CPUs/compilers) have different standards for translating the zeros 
and ones.
+One problem with the binary format on the other hand is that it is not 
portable: different CPUs/compilers have different standards for translating 
the zeros and ones.
 But since ASCII characters are defined on a byte and are well recognized, they 
are better for portability on those various systems.
 Gnuastro's plain text table format described below is much more portable and 
easier to read/write/interpret by humans manually.
 
@@ -11166,7 +11164,7 @@ So when you view it on a text editor, every row will 
occupy one line.
 The delimiters (or characters separating the columns) are white space 
characters (space, horizontal tab, vertical tab) and a comma (@key{,}).
 The only further requirement is that all rows/lines must have the same number 
of columns.
 
-The columns don't have to be exactly under each other and the rows can be 
arbitrarily long with different lengths.
+The columns do not have to be exactly under each other and the rows can be 
arbitrarily long with different lengths.
 For example the following contents in a file would be interpreted as a table 
with 4 columns and 2 rows, with each element interpreted as a @code{double} 
type (see @ref{Numeric data types}).
 
 @example
@@ -11180,16 +11178,16 @@ Also, when you want to select columns, you have to 
count their position within t
 This can become frustrating and prone to bad errors (getting the columns 
wrong) especially as the number of columns increases.
 It is also bad for sending to a colleague, because they will find it hard to 
remember/use the columns properly.
 
-To solve these problems in Gnuastro's programs/libraries you aren't limited to 
using the column's number, see @ref{Selecting table columns}.
+To solve these problems in Gnuastro's programs/libraries you are not limited 
to using the column's number, see @ref{Selecting table columns}.
 If the columns have names, units, or comments you can also select your columns 
based on searches/matches in these fields, for example see @ref{Table}.
-Also, in this manner, you can't guide the program reading the table on how to 
read the numbers.
+Also, in this manner, you cannot guide the program that reads the table on 
how to interpret the numbers.
 As an example, the first and third columns above can be read as integer types: 
the first column might be an ID and the third can be the number of pixels an 
object occupies in an image.
 So there is no need to read these two columns as a @code{double} type (which 
takes more memory, and is slower).
 
-In the bare-minimum example above, you also can't use strings of characters, 
for example the names of filters, or some other identifier that includes 
non-numerical characters.
+In the bare-minimum example above, you also cannot use strings of characters, 
for example the names of filters, or some other identifier that includes 
non-numerical characters.
 In the absence of any information, only numbers can be read robustly.
 Assuming we read columns with non-numerical characters as strings, there 
would still be the problem that the strings might contain a space (or any 
other delimiter) character in some rows.
-So, each `word' in the string will be interpreted as a column and the program 
will abort with an error that the rows don't have the same number of columns.
+So, each `word' in the string will be interpreted as a column and the program 
will abort with an error that the rows do not have the same number of columns.
 
 To correct for these limitations, Gnuastro defines the following convention 
for storing the table meta-data along with the raw data in one plain text file.
 The format is primarily designed for ease of reading/writing by eye/fingers, 
but is also structured enough to be read by a program.
@@ -11197,7 +11195,7 @@ The format is primarily designed for ease of 
reading/writing by eye/fingers, but
 When the first non-white character in a line is @key{#}, or there are no 
non-white characters in it, then the line will not be considered as a row of 
data in the table (this is a pretty standard convention in many programs, and 
higher level languages).
 In the former case, the line is interpreted as a @emph{comment}.
 If the comment line starts with `@code{# Column N:}', then it is assumed to 
contain information about column @code{N} (a number, counting from 1).
-Comment lines that don't start with this pattern are ignored and you can use 
them to include any further information you want to store with the table in the 
text file.
+Comment lines that do not start with this pattern are ignored and you can use 
them to include any further information you want to store with the table in the 
text file.
 A column information comment is assumed to have the following format:
 
 @example
@@ -11209,7 +11207,7 @@ A column information comment is assumed to have the 
following format:
 Any sequence of characters between `@key{:}' and `@key{[}' will be interpreted 
as the column name (so it can contain anything except the `@key{[}' character).
 Anything between the `@key{]}' and the end of the line is defined as a comment.
 Within the brackets, anything before the first `@key{,}' is the units 
(physical units, for example km/s, or erg/s), anything before the second 
`@key{,}' is the short type identifier (see below, and @ref{Numeric data 
types}).
-If the type identifier isn't recognized, the default 64-bit floating point 
type will be used.
+If the type identifier is not recognized, the default 64-bit floating point 
type will be used.
 
 Finally (still within the brackets), any non-white characters after the second 
`@key{,}' are interpreted as the blank value for that column (see @ref{Blank 
pixels}).
 The blank value can either be in the same type as the column (for example 
@code{-99} for a signed integer column), or any string (for example @code{NaN} 
in that same column).
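
Putting all the fields together, a complete column information comment could 
look like this (a sketch for a hypothetical column):

@example
# Column 2: VELOCITY [km/s, f32, -99] Radial velocity of the target.
@end example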
@@ -11228,14 +11226,14 @@ For example in this line:
 
 The @code{NAME} field will be `@code{column name}' and the @code{TYPE} field 
will be `@code{f32}'.
 Note how all the white space characters before and after strings are not 
used, but those in the middle remain.
-Also, white space characters aren't mandatory.
+Also, white space characters are not mandatory.
 Hence, in the example above, the @code{BLANK} field will be given the value of 
`@code{-99}'.
 
 Except for the column number (@code{N}), the rest of the fields are optional.
-Also, the column information comments don't have to be in order.
+Also, the column information comments do not have to be in order.
 In other words, the information for column @mymath{N+m} (@mymath{m>0}) can be 
given in a line before column @mymath{N}.
-Furthermore, you don't have to specify information for all columns.
-Those columns that don't have this information will be interpreted with the 
default settings (like the case above: values are double precision floating 
point, and the column has no name, unit, or comment).
+Furthermore, you do not have to specify information for all columns.
+Those columns that do not have this information will be interpreted with the 
default settings (like the case above: values are double precision floating 
point, and the column has no name, unit, or comment).
 So these lines are all acceptable for any table (the first one, with nothing 
but the column number is redundant):
 
 @example
@@ -11305,7 +11303,7 @@ These options accept values in integer (column number), 
or string (metadata matc
 
 If the value can be parsed as a positive integer, it will be seen as the 
low-level column number.
 Note that column counting starts from 1, so if you ask for column 0, the 
respective program will abort with an error.
-When the value can't be interpreted as an a integer number, it will be seen as 
a string of characters which will be used to match/search in the table's 
meta-data.
+When the value cannot be interpreted as an integer number, it will be seen 
as a string of characters which will be used to match/search in the table's 
meta-data.
 The meta-data field which the value will be compared with can be selected 
through the @option{--searchin} option, see @ref{Input output options}.
 @option{--searchin} can take three values: @code{name}, @code{unit}, 
@code{comment}.
 The matching will be done following this convention:
@@ -11318,7 +11316,7 @@ Regular expressions are a very powerful tool in 
matching text and useful in many
 We thus strongly encourage reviewing this chapter for greatly improving the 
quality of your work in many cases, not just for searching column meta-data in 
Gnuastro.
 
 @item
-When the string isn't enclosed between `@key{/}'s, any column that exactly 
matches the given value in the given field will be selected.
+When the string is not enclosed between `@key{/}'s, any column that exactly 
matches the given value in the given field will be selected.
 @end itemize
 
 Note that in both cases, you can ignore the case of alphabetic characters with 
the @option{--ignorecase} option, see @ref{Input output options}.
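
As a sketch (assuming the table has a column named @code{RA}), the different 
modes of selection described above would look like this:

@example
$ asttable table.fits -c1                      ## By column number.
$ asttable table.fits -cRA                     ## By exact name.
$ asttable table.fits -c'/^RA/' --ignorecase   ## By regular expression.
@end example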
@@ -11354,7 +11352,7 @@ The fraction which defines the remainder significance 
along all dimensions can b
 
 The best tile size is directly related to the spatial properties of the 
quantity you want to study (for example, a gradient over the image).
 In practice we assume that the gradient is not present over each tile.
-So if there is a strong gradient (for example in long wavelength ground based 
images) or the image is of a crowded area where there isn't too much blank 
area, you have to choose a smaller tile size.
+So if there is a strong gradient (for example in long wavelength ground based 
images) or the image is of a crowded area where there is not too much blank 
area, you have to choose a smaller tile size.
 A larger mesh will give more pixels and so the scatter in the results will be 
less (better statistics).
 
 @cindex CCD
@@ -11377,7 +11375,7 @@ This results in the final reduced data to have 
non-uniform amplifier-shaped regi
 Such systematic biases will then propagate to all subsequent measurements we 
do on the data (for example photometry and subsequent stellar mass and star 
formation rate measurements in the case of galaxies).
 
 Therefore an accurate analysis requires a two-layer tessellation: the top 
layer contains larger tiles, each covering one amplifier channel.
-For clarity we'll call these larger tiles ``channels''.
+For clarity we will call these larger tiles ``channels''.
 The number of channels along each dimension is defined through the 
@option{--numchannels} option.
 Each channel is then covered by its own individual smaller tessellation (with 
tile sizes determined by the @option{--tilesize} option).
 This will allow independent analysis of two adjacent pixels from different 
channels if necessary.
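
For example (a sketch with hypothetical values, for a camera with two 
amplifier channels along the first dimension):

@example
$ astnoisechisel image.fits --numchannels=2,1 --tilesize=30,30
@end example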
@@ -11402,15 +11400,15 @@ For the full list of processing-related common 
options (including tessellation o
 @cindex Setting output file names automatically
 All the programs in Gnuastro are designed such that specifying an output file 
or directory (based on the program context) is optional.
 When no output name is explicitly given (with @option{--output}, see 
@ref{Input output options}), the programs will automatically set an output name 
based on the input name(s) and what the program does.
-For example when you are using ConvertType to save FITS image named 
@file{dataset.fits} to a JPEG image and don't specify a name for it, the JPEG 
output file will be name @file{dataset.jpg}.
-When the input is from the standard input (for example a pipe, see 
@ref{Standard input}), and @option{--output} isn't given, the output name will 
be the program's name (for example @file{converttype.jpg}).
+For example when you are using ConvertType to save a FITS image named 
@file{dataset.fits} to a JPEG image and do not specify a name for it, the JPEG 
output file will be named @file{dataset.jpg}.
+When the input is from the standard input (for example a pipe, see 
@ref{Standard input}), and @option{--output} is not given, the output name will 
be the program's name (for example @file{converttype.jpg}).
 
 @vindex --keepinputdir
 Another very important part of the automatic output generation is that all the 
directory information of the input file name is stripped off of it.
 This feature can be disabled with the @option{--keepinputdir} option, see 
@ref{Input output options}.
 It is the default because astronomical data are usually very large and 
specially organized, with particular file names.
 In some cases, the user might not have write permissions in those 
directories@footnote{In fact, even if the data is stored on your own computer, 
it is advised to only grant write permissions to the super user or root.
-This way, you won't accidentally delete or modify your valuable data!}.
+This way, you will not accidentally delete or modify your valuable data!}.
 
 Let's assume that we are working on a report and want to process the FITS 
images from two projects (ABC and DEF), which are stored in the sub-directories 
named @file{ABCproject/} and @file{DEFproject/} of our top data directory 
(@file{/mnt/data}).
 The following shell commands show how one image from the former is first 
converted to a JPEG image through ConvertType and then the objects from an 
image in the latter project are detected using NoiseChisel.
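
A sketch of such commands (outputs are written in the current directory as 
@file{image1.jpg} and @file{image2_detected.fits}; the file names are 
hypothetical):

@example
$ cd /mnt/data
$ astconvertt ABCproject/image1.fits --output=jpg
$ astnoisechisel DEFproject/image2.fits
@end example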
@@ -11471,7 +11469,7 @@ $ astfits image_detected.fits -h0 | grep -i snquant
 @end itemize
 
 The keywords above are classified (separated by an empty line and title) as a 
group titled ``ProgramName configuration''.
-This meta-data extension, as well as all the other extensions (which contain 
data), also contain have final group of keywords to keep the basic date and 
version information of Gnuastro, its dependencies and the pipeline that is 
using Gnuastro (if its under version control).
+This meta-data extension, as well as all the other extensions (which contain 
data), also contains a final group of keywords to keep the basic date and 
version information of Gnuastro, its dependencies and the pipeline that is 
using Gnuastro (if it is under version control).
 
 @table @command
 
@@ -11481,7 +11479,7 @@ This date is written directly by CFITSIO and is in UT 
format.
 
 @item COMMIT
 Git's commit description from the running directory of Gnuastro's programs.
-If the running directory is not version controlled or @file{libgit2} isn't 
installed (see @ref{Optional dependencies}) then this keyword will not be 
present.
+If the running directory is not version controlled or @file{libgit2} is not 
installed (see @ref{Optional dependencies}) then this keyword will not be 
present.
 The printed value is equivalent to the output of the following command:
 
 @example
@@ -11544,7 +11542,7 @@ END
 @cindex @code{LC_NUMERIC}
 @cindex Decimal separator
 @cindex Language of command-line
-If your @url{https://en.wikipedia.org/wiki/Locale_(computer_software), system 
locale} isn't English, it may happen that the `.' is not used as the decimal 
separator of basic command-line tools for input or output.
+If your @url{https://en.wikipedia.org/wiki/Locale_(computer_software), system 
locale} is not English, it may happen that the `.' is not used as the decimal 
separator of basic command-line tools for input or output.
 For example in Spanish and some other languages the decimal separator 
(symbol used to separate the integer and fractional part of a number) is a 
comma.
 Therefore in such systems, some programs may print @mymath{0.5} as 
`@code{0,5}' (instead of `@code{0.5}').
 This mainly happens in some core operating system tools like @command{awk} 
or @command{seq} that depend on the locale.
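
For example (a sketch; it assumes @code{LC_ALL} is not set), you can force the 
standard locale for a single command:

@example
$ LC_NUMERIC=C seq 0.5 0.5 1.5
@end example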
@@ -11658,7 +11656,7 @@ Also see the 
@url{https://doi.org/10.1051/0004-6361/201015362, FITS 3.0 standard
 Many common image formats, for example a JPEG, only have one image/dataset per 
file, however one great advantage of the FITS standard is that it allows you to 
keep multiple datasets (images or tables along with their separate meta-data) 
in one file.
 In the FITS standard, each data + metadata is known as an extension, or more 
formally a header data unit or HDU.
 The HDUs in a file can be completely independent: you can have multiple images 
of different dimensions/sizes or tables as separate extensions in one file.
-However, while the standard doesn't impose any constraints on the relation 
between the datasets, it is strongly encouraged to group data that are 
contextually related with each other in one file.
+However, while the standard does not impose any constraints on the relation 
between the datasets, it is strongly encouraged to group data that are 
contextually related with each other in one file.
 For example an image and the table/catalog of objects and their measured 
properties in that image.
 Other examples can be images of one patch of sky in different colors 
(filters), or one raw telescope image along with its calibration data (tables 
or images).
 
@@ -11801,7 +11799,7 @@ Print the number of extensions/HDUs in the given file.
 Note that this option must be called alone and will only print a single number.
 It is thus useful in scripts, for example when you need to check the number 
of extensions in a FITS file.
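
For example, a sketch of its use within a shell script:

@example
## Store the number of HDUs in a shell variable:
nhdu=$(astfits image.fits --numhdus)
@end example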
 
-For a complete list of basic meta-data on the extensions in a FITS file, don't 
use any of the options in this section or in @ref{Keyword inspection and 
manipulation}.
+For a complete list of basic meta-data on the extensions in a FITS file, do 
not use any of the options in this section or in @ref{Keyword inspection and 
manipulation}.
 For more, see @ref{Invoking astfits}.
 
 @item --hastablehdu
@@ -11809,7 +11807,7 @@ Print @code{1} (on standard output) if at least one 
table HDU (ASCII or binary)
 Otherwise (when no table HDU exists in the file), print @code{0}.
 
 @item --listtablehdus
-Print the names or numbers (when a name doesn't exist, counting from zero) of 
HDUs that contain a table (ASCII or Binary) on standard output, one per line.
+Print the names or numbers (when a name does not exist, counting from zero) of 
HDUs that contain a table (ASCII or Binary) on standard output, one per line.
 Otherwise (when no table HDU exists in the file) nothing will be printed.
 
 @item --hasimagehdu
@@ -11820,14 +11818,14 @@ In the FITS standard, any array with any dimensions 
is called an ``image'', ther
 However, an image HDU with zero dimensions (which is usually the first 
extension and only contains metadata) is not counted here.
 
 @item --listimagehdus
-Print the names or numbers (when a name doesn't exist, counting from zero) of 
HDUs that contain an image on standard output, one per line.
+Print the names or numbers (when a name does not exist, counting from zero) of 
HDUs that contain an image on standard output, one per line.
 Otherwise (when no image HDU exists in the file) nothing will be printed.
 
 In the FITS standard, any array with any dimensions is called an ``image'', 
therefore this option includes 1, 3 and 4 dimensional arrays too.
 However, an image HDU with zero dimensions (which is usually the first 
extension and only contains metadata) is not counted here.
 
 @item --listallhdus
-Print the names or numbers (when a name doesn't exist, counting from zero) of 
all HDUs within the input file on the standard output, one per line.
+Print the names or numbers (when a name does not exist, counting from zero) of 
all HDUs within the input file on the standard output, one per line.
 
 @item --pixelscale
 Print the HDU's pixel-scale (change in world coordinate for one pixel along 
each dimension) and pixel area or voxel volume.
@@ -11895,7 +11893,7 @@ This option ignores any possibly existing 
@code{DATASUM} keyword in the HDU.
 For more on @code{DATASUM} in the FITS standard, see @ref{Keyword inspection 
and manipulation} (under the @code{checksum} component of @option{--write}).
 
 You can use this option to confirm that the data in two different HDUs 
(possibly with different keywords) is identical.
-Its advantage over @option{--write=datasum} (which writes the @code{DATASUM} 
keyword into the given HDU) is that it doesn't require write permissions.
+Its advantage over @option{--write=datasum} (which writes the @code{DATASUM} 
keyword into the given HDU) is that it does not require write permissions.
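
For example (a sketch comparing the data of two HDUs in one file):

@example
$ astfits image.fits -h1 --datasum
$ astfits image.fits -h2 --datasum
@end example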
 @end table
 
 The following options manipulate (move/delete) the HDUs in one FITS file or to 
another FITS file.
@@ -11926,20 +11924,20 @@ Remove the specified HDU from the input file.
 The first (zero-th) HDU cannot be removed with this option.
 Consider using @option{--copy} or @option{--cut} in combination with 
@option{--primaryimghdu} to not have an empty zero-th HDU.
 From CFITSIO: ``In the case of deleting the primary array (the first HDU in 
the file) then [it] will be replaced by a null primary array containing the 
minimum set of required keywords and no data.''.
-So in practice, any existing data (array) and meta-data in the first extension 
will be removed, but the number of extensions in the file won't change.
+So in practice, any existing data (array) and meta-data in the first extension 
will be removed, but the number of extensions in the file will not change.
 This is because of the unique position the first FITS extension has in the 
FITS standard (for example it cannot be used to store tables).
 
 @item --primaryimghdu
-Copy or cut an image HDU to the zero-th HDU/extension a file that doesn't yet 
exist.
+Copy or cut an image HDU to the zero-th HDU/extension of a file that does 
not yet exist.
 This option is thus irrelevant if the output file already exists or the 
copied/cut extension is a FITS table.
-For example with the commands below, first we make sure that @file{out.fits} 
doesn't exist, then we copy the first extension of @file{in.fits} to the 
zero-th extension of @file{out.fits}.
+For example with the commands below, first we make sure that @file{out.fits} 
does not exist, then we copy the first extension of @file{in.fits} to the 
zero-th extension of @file{out.fits}.
 
 @example
 $ rm -f out.fits
 $ astfits in.fits --copy=1 --primaryimghdu --output=out.fits
 @end example
 
-If we hadn't used @option{--primaryimghdu}, then the zero-th extension of 
@file{out.fits} would have no data, and its second extension would host the 
copied image (just like any other output of Gnuastro).
+If we had not used @option{--primaryimghdu}, then the zero-th extension of 
@file{out.fits} would have no data, and its second extension would host the 
copied image (just like any other output of Gnuastro).
 
 @end table
 
@@ -11979,7 +11977,7 @@ $ astfits image-a.fits --keyvalue=NAXIS,NAXIS1 \
 The output is internally stored (and finally printed) as a table (with one 
column per keyword).
 Therefore just like the Table program, you can use @option{--colinfoinstdout} 
to print the metadata like the example below (also see @ref{Invoking asttable}).
 The keyword metadata (comments and units) are extracted from the comments and 
units of the keyword in the input files (first file that has a comment or unit).
-Hence if the keyword doesn't have units or comments in any of the input files, 
they will be empty.
+Hence if the keyword does not have units or comments in any of the input 
files, they will be empty.
 For more on Gnuastro's plain-text metadata format, see @ref{Gnuastro text 
table format}.
 
 @example
@@ -12009,7 +12007,7 @@ image-b.fits
 @end example
 
 Note that @option{--colinfoinstdout} is necessary to use column names when 
piping to other programs (like @command{asttable} above).
-Also, with the @option{-cFILENAME} option, we are asking Table to only print 
the final file names (we don't need the sizes any more).
+Also, with the @option{-cFILENAME} option, we are asking Table to only print 
the final file names (we do not need the sizes any more).
 
 The commands with multiple files above used @file{*.fits}, which is only 
useful when all your FITS files are in the same directory.
 However, in many cases, your FITS files will be scattered in multiple 
sub-directories of a certain top-level directory, or you may only want those 
with more particular file name patterns.
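
For instance, the sketch below (the top-level directory and file name pattern
are hypothetical) uses GNU Find to feed all matching files to the Fits program:

@example
$ astfits $(find /top/dir -name "*.fits") --keyvalue=NAXIS
@end example
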
@@ -12074,7 +12072,7 @@ First, all the delete operations are going to take 
effect then the update operat
 @end enumerate
 @noindent
 All possible syntax errors will be reported before the keywords are actually 
written.
-FITS errors during any of these actions will be reported, but Fits won't stop 
until all the operations are complete.
+FITS errors during any of these actions will be reported, but Fits will not 
stop until all the operations are complete.
 If @option{--quitonerror} is called, then Fits will immediately stop upon the 
first error.
 
 @cindex GNU Grep
@@ -12104,7 +12102,7 @@ The keyword related options to the Fits program are 
fully described below.
 Delete one instance of the @option{STR} keyword from the FITS header.
 Multiple instances of @option{--delete} can be given (possibly even for the 
same keyword, when it is repeated in the meta-data).
 All keywords given will be removed from the headers in the same given order.
-If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+If the keyword does not exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
 To stop as soon as an error occurs, run with @option{--quitonerror}.
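
For example, the command below (the keyword names are hypothetical) deletes
one instance of each given keyword from the first extension:

@example
$ astfits image.fits -h1 --delete=EXPTIME --delete=COMMENT
@end example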
 
 @item -r STR,STR
@@ -12114,7 +12112,7 @@ Rename a keyword to a new value (for example 
@option{--rename=OLDNAME,NEWNAME}).
 Note that if you use a space character, you have to put the value to this 
option within double quotation marks (@key{"}) so the space character is not 
interpreted as an option separator.
 Multiple instances of @option{--rename} can be given in one command.
 The keywords will be renamed in the specified order.
-If the keyword doesn't exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
+If the keyword does not exist, Fits will give a warning and return with a 
non-zero value, but will not stop.
 To stop as soon as an error occurs, run with @option{--quitonerror}.
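
For example, a minimal sketch (the keyword names are hypothetical) that renames
one keyword in the first extension:

@example
$ astfits image.fits -h1 --rename=OLDKEY,NEWKEY
@end example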
 
 @item -u STR
@@ -12175,7 +12173,7 @@ END
 @end example
 
 Adding a ``title'' before each contextually separate group of header keywords 
greatly helps in readability and visual inspection of the keywords.
-So generally, when you want to add new FITS keywords, its good practice to 
also add a title before them.
+So generally, when you want to add new FITS keywords, it is good practice to 
also add a title before them.
 
 The reason you need to use @key{/} as the keyword name for setting a title is 
that @key{/} is the first non-white character.
 
@@ -12199,9 +12197,9 @@ $ astfits test.fits -h1 --write=/,"My keywords"   \
 When nothing is given afterwards, the header integrity keywords @code{DATASUM} 
and @code{CHECKSUM} will be calculated and written/updated.
 The calculation and writing is done fully by CFITSIO, therefore they comply 
with the FITS standard 
4.0@footnote{@url{https://fits.gsfc.nasa.gov/standard40/fits_standard40aa-le.pdf}}
 that defines these keywords (its Appendix J).
 
-If a value is given (e.g., @option{--write=checksum,MyOwnCheckSum}), then 
CFITSIO won't be called to calculate these two keywords and the value (as well 
as possible comment and unit) will be written just like any other keyword.
+If a value is given (e.g., @option{--write=checksum,MyOwnCheckSum}), then 
CFITSIO will not be called to calculate these two keywords and the value (as 
well as possible comment and unit) will be written just like any other keyword.
 This is generally not recommended since @code{CHECKSUM} is a reserved FITS 
standard keyword.
-If you want to calculate the checksum with another hashing standard manually 
and write it into the header, its is recommended to use another keyword name.
+If you want to calculate the checksum with another hashing standard manually 
and write it into the header, it is recommended to use another keyword name.
 
 In the FITS standard, @code{CHECKSUM} depends on the HDU's data @emph{and}
header keywords; it will therefore not be valid if you make any further changes
to the header after writing the @code{CHECKSUM} keyword.
 This includes any further keyword modification options in the same call to the 
Fits program.
@@ -12210,7 +12208,7 @@ Therefore, it is recommended to write these keywords as 
the last keywords that a
 You can use the @option{--verify} option (described below) to verify the 
values of these two keywords.
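
For example, a minimal sketch (on a hypothetical @file{image.fits}) that
computes and writes both integrity keywords into the first extension:

@example
$ astfits image.fits -h1 --write=checksum
@end example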
 
 @item datasum
-Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that 
doesn't depend on the header keywords, only the data).
+Similar to @option{checksum}, but only write the @code{DATASUM} keyword (that 
does not depend on the header keywords, only the data).
 @end table
 
 @item -a STR
@@ -12234,7 +12232,7 @@ This is because FITS keyword strings usually have lots 
of space characters, if t
 See the ``Quoting'' section of the GNU Bash manual for more information if 
your keyword has the special characters @key{$}, @key{`}, or @key{\}.
 
 You can also use @option{--asis} to copy multiple keywords from one file to 
another.
-But the process will be a little more complicated. So we'll show the process 
as the simple shell script below.
+But the process will be a little more complicated. So we will show the process 
as the simple shell script below.
 You can customize it in the first block of variable definitions:
1) Set the names of the keywords you want to copy: it can be as many keys as 
you want, just put a `@code{\|}' between them.
 2) The input FITS file (where keywords should be read from) and its HDU.
@@ -12263,7 +12261,7 @@ astfits $ifits -h$ihdu \
 IFS=$oIFS
 @end example
 
-Since its not too long, you can also simply put the variable values of the 
first block into the second, and write it directly on the command line (if its 
just for one time).
+Since it is not too long, you can also simply put the variable values of the 
first block into the second, and write it directly on the command line (if it 
is just for one time).
 
 @item -H STR
 @itemx --history STR
@@ -12297,8 +12295,8 @@ Verify the @code{DATASUM} and @code{CHECKSUM} data 
integrity keywords of the FIT
 See the description under the @code{checksum} (under @option{--write}, above) 
for more on these keywords.
 
 This option will print @code{Verified} for both keywords if they can be 
verified.
-Otherwise, if they don't exist in the given HDU/extension, it will print 
@code{NOT-PRESENT}, and if they cannot be verified it will print 
@code{INCORRECT}.
-In the latter case (when the keyword values exist but can't be verified), the 
Fits program will also return with a failure.
+Otherwise, if they do not exist in the given HDU/extension, it will print 
@code{NOT-PRESENT}, and if they cannot be verified it will print 
@code{INCORRECT}.
+In the latter case (when the keyword values exist but cannot be verified), the 
Fits program will also return with a failure.
 
 By default this function will also print a short description of the
@code{DATASUM} and @code{CHECKSUM} keywords.
You can suppress this extra information with the @option{--quiet} option.
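
For example, a minimal sketch (on a hypothetical @file{image.fits}) that
verifies these keywords while suppressing the extra description:

@example
$ astfits image.fits -h1 --verify --quiet
@end example
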
@@ -12336,9 +12334,9 @@ Please consider the notes below when copying keywords 
with names:
 @itemize
 @item
 If the number of characters in the name is more than 8, CFITSIO will place a 
@code{HIERARCH} before it.
-In this case simply give the name and don't give the @code{HIERARCH} (which is 
a constant and not considered part of the keyword name).
+In this case simply give the name and do not give the @code{HIERARCH} (which 
is a constant and not considered part of the keyword name).
 @item
-If your keyword name is composed only of digits, don't give it as the first 
name given to @option{--copykeys}.
+If your keyword name is composed only of digits, do not give it as the first 
name given to @option{--copykeys}.
 Otherwise, it will be confused with the range format above.
You can safely give an only-digit keyword name as the second or third 
requested keyword.
 @item
@@ -12371,7 +12369,7 @@ This option can also interpret the older FITS date 
format (@code{DD/MM/YYThh:mm:
 In this case (following the GNU C Library), this option will make the
following assumption: values 69 to 99 correspond to the years 1969 to 1999, and
values 0 to 68 to the years 2000 to 2068.
 
 This is a very useful option for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
-The advantage of working with the Unix epoch time is that you don't have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
+The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
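
For example, the sketch below (assuming a hypothetical @file{image.fits} with a
@code{DATE-OBS} keyword) prints that date as Unix epoch seconds:

@example
$ astfits image.fits -h1 --datetosec=DATE-OBS
@end example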
 
 @item --wcscoordsys=STR
 @cindex Galactic coordinate system
@@ -12386,7 +12384,7 @@ Convert the coordinate system of the image's world 
coordinate system (WCS) to th
 
 For example with the command below, @file{img-eq.fits} will have an identical 
dataset (pixel values) as @file{image.fits}.
 However, the WCS coordinate system of @file{img-eq.fits} will be the 
equatorial coordinate system in the Julian calendar epoch 2000 (which is the 
most common epoch used today).
-Fits will automatically extract the current coordinate system of 
@file{image.fits} and as long as its one of the recognized coordinate systems 
listed below, it will do the conversion.
+Fits will automatically extract the current coordinate system of 
@file{image.fits} and as long as it is one of the recognized coordinate systems 
listed below, it will do the conversion.
 
 @example
 $ astfits image.fits --wcscoordsys=eq-j2000 --output=img-eq.fits
@@ -12427,7 +12425,7 @@ Where @file{DDD} is the value given to this option 
(desired output distortion).
 
 Note that all possible conversions between all standards are not yet supported.
 If the requested conversion is not supported, an informative error message 
will be printed.
-If this happens, please let us know and we'll try our best to add the 
respective conversions.
+If this happens, please let us know and we will try our best to add the 
respective conversions.
 
 For example with the command below, you can be sure that if @file{in.fits} has 
a distortion in its WCS, the distortion of @file{out.fits} will be in the SIP 
standard.
 
@@ -12465,9 +12463,9 @@ $ astfits in.fits --wcsdistortion=SIP --output=out.fits
 @pindex @r{ConvertType (}astconvertt@r{)}
 The FITS format used in astronomy was defined mainly for archiving, 
transmission, and processing.
 In other situations, the data might be useful in other formats.
-For example, when you are writing a paper or report, or if you are making 
slides for a talk, you can't use a FITS image.
+For example, when you are writing a paper or report, or if you are making 
slides for a talk, you cannot use a FITS image.
 Other image formats should be used.
-In other cases you might want your pixel values in a table format as plain 
text for input to other programs that don't recognize FITS.
+In other cases you might want your pixel values in a table format as plain 
text for input to other programs that do not recognize FITS.
 ConvertType is created for such situations.
 The various types will increase with future updates and based on need.
 
@@ -12475,9 +12473,9 @@ The conversion is not only one way (from FITS to other 
formats), but two ways (e
 So you can also convert a JPEG image or text file into a FITS image.
 Basically, other than EPS/PDF, you can use any of the recognized formats as 
different color channel inputs to get any of the recognized outputs.
 
-Before explaining the options and arguments (in @ref{Invoking astconvertt}), 
we'll start with a short discussion on the difference between raster and vector 
graphics in @ref{Raster and Vector graphics}.
+Before explaining the options and arguments (in @ref{Invoking astconvertt}), 
we will start with a short discussion on the difference between raster and 
vector graphics in @ref{Raster and Vector graphics}.
 In ConvertType, vector graphics are used to add markers over your originally 
rasterized data, producing high quality images, ready to be used in your 
exciting papers.
-We'll continue with a description of the recognized files types in 
@ref{Recognized file formats}, followed a short introduction to digital color 
in @ref{Color}.
+We will continue with a description of the recognized file types in
@ref{Recognized file formats}, followed by a short introduction to digital
color in @ref{Color}.
 A tutorial on how to add markers over an image is then given in @ref{Marking 
objects for publication} and we conclude with a @LaTeX{} based solution to add 
coordinates over an image.
 
 @menu
@@ -12512,7 +12510,7 @@ The most common vector graphics format is PDF for 
document sharing or SVG for we
 The pixels of a raster image can be shown as vector-based squares with 
different shades, so vector graphics can generally also support raster graphics.
 This is very useful when you want to add some graphics over an image to help
your discussion (for example a @mymath{+} over your object of interest).
 However, vector graphics is not optimized for rasterized data (which are
usually also noisy!), and can either not display nicely, or result in much
larger file volumes (in bytes).
-Therefore, if its not necessary to add any marks over a FITS image, for 
example, it may be better to store it in a rasterized format.
+Therefore, if it is not necessary to add any marks over a FITS image, for 
example, it may be better to store it in a rasterized format.
 
 The distinction between the vector and raster graphics is also the primary 
theme behind Gnuastro's logo, see @ref{Logo of Gnuastro}.
 
@@ -12523,7 +12521,7 @@ The distinction between the vector and raster graphics 
is also the primary theme
 The various standards and the file name extensions recognized by ConvertType 
are listed below.
 For a review on the difference between Raster and Vector graphics, see 
@ref{Raster and Vector graphics}.
 For a review on the concept of color and channels, see @ref{Color}.
-Currently, except for the FITS format, Gnuastro uses the file name's suffix to 
identify the format, so if the file's name doesn't end with one of the suffixes 
mentioned below, it won't be recognized.
+Currently, except for the FITS format, Gnuastro uses the file name's suffix to 
identify the format, so if the file's name does not end with one of the 
suffixes mentioned below, it will not be recognized.
 
 @table @asis
 @item FITS or IMH
@@ -12561,9 +12559,7 @@ However, unlike FITS, it can only store images, it has 
no constructs for tables.
 Also unlike FITS, each `directory' of a TIFF file can have a multi-channel 
(e.g., RGB) image.
 Another (inconvenient) difference with the FITS standard is that keyword names 
are stored as numbers, not human-readable text.
 
-However, outside of astronomy, because of its support of different numeric
-data types, many fields use TIFF images for accurate (for example 16-bit
-integer or floating point for example) imaging data.
+However, outside of astronomy, because of its support of different numeric
data types, many fields use TIFF images for accurate (for example 16-bit
integer or floating point) imaging data.
 
 Currently ConvertType can only read TIFF images; if you are interested in
writing TIFF images, please get in touch with us.
@@ -12611,7 +12607,7 @@ The Portable Document Format (PDF) is currently the 
most common format for docum
 It is a vector graphics format, allowing abstract constructs like marks or 
borders.
 
 The PDF format is based on Postscript, so it shares all the features mentioned 
above for EPS.
-To be able to display its programmed content or print, a Postscript file needs 
to pass through a processor or compiler.
+To be able to display its programmed content or print, a Postscript file
needs to pass through a processor or compiler.
 A PDF file can be thought of as the processed output of the PostScript 
compiler.
 PostScript, EPS and PDF were created and are registered by Adobe Systems.
 
@@ -12703,7 +12699,7 @@ Therefore, the FITS format that is used to store 
astronomical datasets is inhere
 @cindex Pseudo color
 When a subject has been imaged in multiple filters, you can feed each 
different filter into the red, green and blue channels of your monitor and 
obtain a false-colored visualization.
 The reason we say ``false-color'' (or pseudo color) is that generally, the 
three data channels you provide are not from the same Red, Green and Blue 
filters of your monitor!
-So the observed color on your monitor doesn't correspond the physical 
``color'' that you would have seen if you looked at the object by eye.
+So the observed color on your monitor does not correspond to the physical
``color'' that you would have seen if you looked at the object by eye.
 Nevertheless, it is good (and sometimes necessary) for visualization (of 
special features).
 
 In ConvertType, you can do this by giving each separate single-channel dataset 
(for example in the FITS image format) as an argument (in the proper order), 
then asking for the output in a format that supports multi-channel datasets 
(for example see the command below, or @ref{ConvertType input and output}).
@@ -12727,7 +12723,7 @@ As a result, there is a lot of freedom in visualizing a 
single-channel dataset.
 The mapping of single-channel values to multi-channel colors is called
a ``color map''.
 Since more information can be put in multiple channels, this usually results 
in better visualizing the dynamic range of your single-channel data.
 In ConvertType, you can use the @option{--colormap} option to choose between 
different mappings of mono-channel inputs, see @ref{Invoking astconvertt}.
-Below, we'll review two of the basic color maps, please see the description of 
@option{--colormap} in @ref{Invoking astconvertt} for the full list.
+Below, we will review two of the basic color maps, please see the description 
of @option{--colormap} in @ref{Invoking astconvertt} for the full list.
 
 @itemize
 @item
@@ -12735,7 +12731,7 @@ Below, we'll review two of the basic color maps, please 
see the description of @
 @cindex Colormap, gray-scale
 The most basic colormap is shades of black (because of its strong contrast 
with white).
 This scheme is called @url{https://en.wikipedia.org/wiki/Grayscale, Grayscale}.
-But ultimately, the black is just one color, so with Grayscale, you aren't 
using the full dynamic range of the three-channel monitor effectively.
+But ultimately, the black is just one color, so with Grayscale, you are not 
using the full dynamic range of the three-channel monitor effectively.
 To help in visualization, more complex mappings can be defined.
 
 @item
@@ -12743,7 +12739,7 @@ A slightly more complex color map can be defined when 
you scale the values to a
 The increased usage of the monitor's 3-channel color space is indeed better, 
but the resulting images can be un-``natural'' to the eye.
 @end itemize
 
-Since grayscale is a commonly used mapping of single-valued datasets, we'll 
continue with a closer look at how it is stored.
+Since grayscale is a commonly used mapping of single-valued datasets, we will 
continue with a closer look at how it is stored.
 One way to represent a gray-scale image in different color spaces is to use 
the same proportions of the primary colors in each pixel.
 This is the common way most FITS image viewers work: for each pixel, they fill 
all the channels with the single value.
 While this is necessary for displaying a dataset, there are downsides when 
storing/saving this type of grayscale visualization (for example in a paper).
@@ -12805,7 +12801,7 @@ In such cases you can use the script below to align the 
images into approximatel
 
 The script below does the job using Gnuastro's @ref{Warp} and @ref{Crop} 
programs.
 Simply copy the lines below into a plain-text file with your favorite text 
editor and save it as @file{my-align.sh}.
-Don't forget to set the variables of the first three lines to specify the file 
names (without the @file{.fits} suffix) and the HDUs of your inputs.
+Do not forget to set the variables of the first three lines to specify the
file names (without the @file{.fits} suffix) and the HDUs of your inputs.
 These four lines are all you need to edit, leave the rest unchanged.
 Also, if you are copy/pasting the script from a PDF, be careful that the 
single-quotes used in AWK may need to be corrected.
 
@@ -12880,7 +12876,7 @@ Because of the modular philosophy of Gnuastro, 
ConvertType is only focused on co
 But to present your results in a slide or paper, you will often need to 
annotate the raw JPEG or PDF with some of the features above.
 The good news is that there are many powerful plotting programs that you can 
use to add such annotations.
 As a result, there is no point in making a new one, specific to Gnuastro.
-In this section, we'll demonstrate this using the very powerful 
PGFPlots@footnote{@url{http://mirrors.ctan.org/graphics/pgf/contrib/pgfplots/doc/pgfplots.pdf}}
 package of @LaTeX{}.
+In this section, we will demonstrate this using the very powerful 
PGFPlots@footnote{@url{http://mirrors.ctan.org/graphics/pgf/contrib/pgfplots/doc/pgfplots.pdf}}
 package of @LaTeX{}.
 
 @cartouche
 @noindent
@@ -12894,11 +12890,11 @@ Therefore the full script (except for the data 
download step) is available in @r
 PGFPlots uses the same @LaTeX{} graphic engine that typesets your paper/slide.
 Therefore when you build your plots and figures using PGFPlots (and its 
underlying package 
PGF/TiKZ@footnote{@url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf}})
 your plots will blend beautifully within your text: same fonts, same colors, 
same line properties, etc.
 Since most papers (and presentation slides@footnote{To build slides, @LaTeX{} 
has packages like Beamer, see 
@url{http://mirrors.ctan.org/macros/latex/contrib/beamer/doc/beameruserguide.pdf}})
 are made with @LaTeX{}, PGFPlots is therefore the best tool for those who use 
@LaTeX{} to create documents.
-PGFPlots also doesn't need any extra dependencies beyond a basic/minimal 
@TeX{}-live installation, so it is much more reliable than tools like 
Matplotlib in Python that have hundreds of fast-evolving 
dependencies@footnote{See Figure 1 of Alliez et al. 2020 at 
@url{https://arxiv.org/pdf/1905.11123.pdf}}.
+PGFPlots also does not need any extra dependencies beyond a basic/minimal 
@TeX{}-live installation, so it is much more reliable than tools like 
Matplotlib in Python that have hundreds of fast-evolving 
dependencies@footnote{See Figure 1 of Alliez et al. 2020 at 
@url{https://arxiv.org/pdf/1905.11123.pdf}}.
 
-To demonstrate this, we'll create a surface brightness image of a galaxy in 
the F160W filter of the ABYSS 
survey@footnote{@url{http://research.iac.es/proyecto/abyss}}.
+To demonstrate this, we will create a surface brightness image of a galaxy in 
the F160W filter of the ABYSS 
survey@footnote{@url{http://research.iac.es/proyecto/abyss}}.
 In the code-block below, let's make a ``build'' directory to keep intermediate 
files and avoid populating the source.
-Afterwards, we'll download the full image and crop out a 20 arcmin wide image 
around the galaxy with the commands below.
+Afterwards, we will download the full image and crop out a 20 arcmin wide 
image around the galaxy with the commands below.
 You can run these commands in an empty directory.
 
 @example
@@ -12908,7 +12904,7 @@ $ astcrop ah_f160w.fits --center=53.1616278,-27.7802446 
--mode=wcs \
           --width=20/3600 --output=build/crop.fits
 @end example
 
-To better show the low surface brightness (LSB) outskirts, we'll warp the 
image, then convert the pixel units to surface brightness with the commands 
below.
+To better show the low surface brightness (LSB) outskirts, we will warp the 
image, then convert the pixel units to surface brightness with the commands 
below.
 It is very important that the warping is done @emph{before} the conversion to 
surface brightness (in units of mag/arcsec@mymath{^2}), because the definition 
of surface brightness is non-linear.
 For more, see the Surface brightness topic of @ref{Brightness flux magnitude}, 
and for a more complete tutorial, see @ref{FITS images in a publication}.
 
@@ -12922,7 +12918,7 @@ $ astarithmetic build/scaled.fits $zeropoint $pixarea 
counts-to-sb \
 @end example
 
 We are now ready to convert the surface brightness image into a PDF.
-To better show the LSB features, we'll also limit the color range with the 
@code{--fluxlow} and @option{--fluxhigh} options: all pixels with a surface 
brightness brighter than 22 mag/arcsec@mymath{^2} will be shown as black, and 
all pixels with a surface brightness fainter than 30 mag/arcsec@mymath{^2} will 
be white.
+To better show the LSB features, we will also limit the color range with the
@option{--fluxlow} and @option{--fluxhigh} options: all pixels with a surface
brightness brighter than 22 mag/arcsec@mymath{^2} will be shown as black, and
all pixels with a surface brightness fainter than 30 mag/arcsec@mymath{^2} will
be white.
 These thresholds are being defined as variables, because we will also need 
them below (to pass into PGFPlots).
 We will also set @option{--borderwidth=0}, because the coordinate system we 
will add over the image will effectively be a border for the image (separating 
it from the background).
 
@@ -12938,8 +12934,8 @@ Also, please open @file{sb.fits} in DS9 (or any other 
FITS viewer) and play with
 Can the surface brightness limits be changed to better show the LSB structure?
 If so, you are free to change the limits above.
 
-We now have the printable PDF representation of the image, but as discussed 
above, its not enough for a paper.
-We'll add 1) a thick line showing the size of 20 kpc (kilo parsecs) at the 
redshift of the central galaxy, 2) coordinates and 3) a color bar, showing the 
surface brightness level of each grayscale level.
+We now have the printable PDF representation of the image, but as discussed 
above, it is not enough for a paper.
+We will add 1) a thick line showing the size of 20 kpc (kilo parsecs) at the
redshift of the central galaxy, 2) coordinates and 3) a color bar, showing the
surface brightness level of each grayscale level.
 
 To get the first job done, we first need to know the redshift of the central 
galaxy.
 To do this, we can use Gnuastro's Query program to look into all the objects 
in NED within this image (only asking for the RA, Dec and redshift columns).
@@ -12957,7 +12953,7 @@ $ echo $redshift
 Now that we know the redshift of the central object, we can define the 
coordinates of the thick line that will show the length of 20 kpc at that 
redshift.
 It will be a horizontal line (fixed Declination) across a range of RA.
 The start of this thick line will be located at the top edge of the image (at
95 percent of the width and height of the image).
-With the commands below we'll find the three necessary parameters (one 
declination and two RAs).
+With the commands below we will find the three necessary parameters (one 
declination and two RAs).
 Just note that in astronomical images, RA increases to the left/east, which is 
the reason we are using the minimum and @code{+} to find the RA starting point.
 
 @example
@@ -12999,7 +12995,7 @@ Please open the macros file after these commands and 
have a look to see if they
 Another set of macros we will need to feed into PGFPlots is the coordinates of 
the image corners.
 Fortunately the @code{coverage} variable found above is also useful here.
 We just need to extract each item before feeding it into the macros.
-To do this, we'll use AWK and keep each value with the temporary shell 
variable `@code{v}'.
+To do this, we will use AWK and keep each value with the temporary shell 
variable `@code{v}'.
 
 @example
 $ v=$(echo $coverage | awk '@{print $1@}')
@@ -13013,7 +13009,7 @@ $ printf '\\newcommand@{\\maCropDecMax@}'"@{$v@}\n" >> 
$macros
 @end example
 
 Finally, we also need to pass some other numbers to PGFPlots: 1) the major 
tick distance (in the coordinate axes that will be printed on the edge of the 
image).
-We'll assume 7 ticks for this image.
+We will assume 7 ticks for this image.
 2) The minimum and maximum surface brightness values that we gave to 
ConvertType when making the PDF; PGFPlots will define its color-bar based on 
these two values.
 
 @example
@@ -13104,7 +13100,7 @@ There are also comments which will show up as a 
different color when you copy th
 
 Finally, we need another simple @LaTeX{} source for the main PDF ``report'' 
that will host this figure.
 This can actually be your paper or slides for example.
-Here, we'll suffice to the minimal working example.
+Here, a minimal working example will suffice.
 
 @verbatim
 \documentclass{article}
@@ -13139,8 +13135,8 @@ You can write anything here.
 @end verbatim
 
 You are now ready to create the PDF.
-But @LaTeX{} creates many temporary files, so to avoid populating our 
top-level directory, we'll copy the two @file{.tex} files into the build 
directory, go there and run @LaTeX{}.
-Before running it though, we'll first delete all the files that have the name 
pattern @file{*-figure0*}, these are ``external'' files created by 
TiKZ+PGFPlots, including the actual PDF of the figure.
+But @LaTeX{} creates many temporary files, so to avoid populating our 
top-level directory, we will copy the two @file{.tex} files into the build 
directory, go there and run @LaTeX{}.
+Before running it though, we will first delete all the files that have the
name pattern @file{*-figure0*}; these are ``external'' files created by
TiKZ+PGFPlots, including the actual PDF of the figure.
 
 @example
 $ cp report.tex my-figure.tex build
@@ -13170,11 +13166,11 @@ Both 
TiKZ@footnote{@url{http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.
 
 In @ref{Annotations for figure in paper}, each one of the steps to add
annotations over an image was described in detail.
 So if you have understood the steps, but need to add annotations over an 
image, repeating those steps individually will be annoying.
-Therefore in this section, we'll summarize all the steps in a single script 
that you can simply copy-paste into a text editor, configure, and run.
+Therefore in this section, we will summarize all the steps in a single script 
that you can simply copy-paste into a text editor, configure, and run.
 
 @cartouche
 @noindent
-@strong{Necessary files:} To run this script, you will need an image to crop 
your object from (here assuming its called @file{ah_f160w.fits} with a certain 
zero point) and two @file{my-figure.tex} and @file{report.tex} files that were 
fully included in @ref{Annotations for figure in paper}.
+@strong{Necessary files:} To run this script, you will need an image to crop 
your object from (here assuming it is called @file{ah_f160w.fits} with a 
certain zero point) and two @file{my-figure.tex} and @file{report.tex} files 
that were fully included in @ref{Annotations for figure in paper}.
 Also, we have made the redshift a parameter here.
 But if the center of your image always points to your main object, you can 
also include the Query command to automatically find the object's redshift from 
NED.
 Alternatively, your image may already be cropped; in this case, you can remove
the cropping step and
@@ -13288,7 +13284,7 @@ $ astconvertt image.fits --colormap=viridis --output=pdf
 $ astconvertt image.fits --marks=marks.fits --output=pdf
 
 ## Convert an image in JPEG to FITS (with multiple extensions
-## if its color):
+## if it has color):
 $ astconvertt image.jpg -oimage.fits
 
 ## Use three 2D arrays to create an RGB JPEG output (two are
@@ -13348,8 +13344,8 @@ $ astconvertt image.jpg --output=rgb.pdf
 @end example
 
 @noindent
-If multiple channels are given as input, and the output format doesn't support 
multiple color channels (for example FITS), ConvertType will put the channels 
in different HDUs, like the example below.
-After running the @command{astfits} command, if your JPEG file wasn't 
grayscale (single channel), you will see multiple HDUs in @file{channels.fits}.
+If multiple channels are given as input, and the output format does not 
support multiple color channels (for example FITS), ConvertType will put the 
channels in different HDUs, like the example below.
+After running the @command{astfits} command, if your JPEG file was not 
grayscale (single channel), you will see multiple HDUs in @file{channels.fits}.
 
 @example
 $ astconvertt image.jpg --output=channels.fits
@@ -13441,7 +13437,7 @@ The table below contains the usable names of the color 
maps that are currently s
 @itemx grey
 @cindex Colorspace, gray-scale
 Grayscale color map.
-This color map doesn't have any parameters.
+This color map does not have any parameters.
 The full dataset range will be scaled to 0 and @mymath{2^8-1=255} to be stored 
in the requested format.
 
 @item hsv
@@ -13479,7 +13475,7 @@ While SLS is good for visualizing on the monitor, 
SLS-inverse is good for printi
 @item --rgbtohsv
 When there are three input channels and the output is in the FITS format, 
interpret the three input channels as red, green and blue channels (RGB) and 
convert them to the hue, saturation, value (HSV) color space.
 
-The currently supported output formats of ConvertType don't have native 
support for HSV.
+The currently supported output formats of ConvertType do not have native 
support for HSV.
 Therefore this option is only supported when the output is in FITS format and 
each of the hue, saturation and value arrays can be saved as one FITS extension 
in the output for further analysis (for example to select a certain color).
 
 @item -c STR
@@ -13530,7 +13526,7 @@ It can be useful if you plan to use those values for 
other purposes.
 
 @item -A
 @itemx --forcemin
-Enforce the value of @option{--fluxlow} (when its given), even if its smaller 
than the minimum of the dataset and the output is format supporting color.
+Enforce the value of @option{--fluxlow} (when it is given), even if it is
smaller than the minimum of the dataset and the output is a format supporting
color.
 This is particularly useful when you are converting a number of images to a 
common image format like JPEG or PDF with a single command and want them all to 
have the same range of colors, independent of the contents of the dataset.
 Note that if the minimum value is smaller than @option{--fluxlow}, then this 
option is redundant.
 
@@ -13567,7 +13563,7 @@ A fully working demo on adding marks is provided in 
@ref{Marking objects for pub
 An important factor to consider when drawing vector graphics is that vector
graphics standards (the PostScript standard in this case) use a ``point'' as
the primary unit of line thickness or font size, such that 72 points
correspond to 1 inch (or 2.54 centimeters).
 In other words, there are roughly 3 PostScript points in every millimeter.
-On the other hand, the pixels of the images you plan to show as the background 
don't have any real size!
+On the other hand, the pixels of the images you plan to show as the background 
do not have any real size!
 Pixels are abstract and can be associated with any print-size.
 
 In ConvertType, the print-size of your final image is set with the 
@option{--widthincm} option (see @ref{ConvertType input and output}).
@@ -13589,7 +13585,7 @@ Unfortunately in the document structuring convention of 
the PostScript language,
 So the border values only have to be specified in integers.
 To have a final border that is thinner than one PostScript point in your 
document, you can ask for a larger width in ConvertType and then scale down the 
output EPS or PDF file in your document preparation program.
 For example by setting @command{width} in your @command{includegraphics} 
command in @TeX{} or @LaTeX{} to be larger than the value to 
@option{--widthincm}.
-Since it is vector graphics, the changes of size have no effect on the quality 
of your output (pixels don't get different values).
+Since it is vector graphics, the changes of size have no effect on the quality 
of your output (pixels do not get different values).
 
 @item --marks=STR
 Draw vector graphics (infinite resolution) marks over the image.
@@ -13756,7 +13752,7 @@ $ astconvertt --listcolors | asttable -ocolors.fits
 @cartouche
 @noindent
 @strong{macOS terminal colors}: as of August 2022, the default macOS terminal 
(iTerm) does not support 24-bit colors!
-The output of @option{--listlines} therefore doesn't display the actual colors 
(you can only use the color names).
+The output of @option{--listlines} therefore does not display the actual 
colors (you can only use the color names).
 One tested solution is to install and use @url{https://iterm2.com, iTerm2}, 
which is free software and available in 
@url{https://formulae.brew.sh/cask/iterm2, Homebrew}.
 iTerm2 is described as a successor for iTerm and works on macOS 10.14 
(released in September 2018) or newer.
 @end cartouche
@@ -13778,7 +13774,7 @@ Column name or number that contains the font for the 
displayed text under the ma
 This is only relevant if @option{--marktext} is called.
 The font should be accessible by Ghostscript.
 
-If you aren't familiar with the available fonts on your system's Ghostscript, 
you can use the @option{--showfonts} option to see all the fonts in a custom 
PDF file (one page per font).
+If you are not familiar with the available fonts on your system's Ghostscript, 
you can use the @option{--showfonts} option to see all the fonts in a custom 
PDF file (one page per font).
 If you are already familiar with the font you want, but just want to make sure 
about its presence (or spelling!), you can get a list (on standard output) of 
all the available fonts with the @option{--listfonts} option.
 Both are described below.
 
@@ -13803,7 +13799,7 @@ Simply pressing the left/right keys will then nicely 
show each font separately.
 @item --listfonts
 Print (to standard output) the names of all available fonts in Ghostscript 
that you can use for the @option{--markfonts} column.
 The available fonts can differ from one system to another (depending on how 
Ghostscript was configured in that system).
-If you aren't already familiar with the shape of each font, please use 
@option{--showfonts} (described above).
+If you are not already familiar with the shape of each font, please use 
@option{--showfonts} (described above).
 
 @end table
 
@@ -13839,7 +13835,7 @@ This creates limitations for their generic use.
 
 Table is Gnuastro's solution to this problem.
 Table has a large set of operations that you can directly do on any recognized 
table (like selecting certain rows, or doing arithmetic on the columns).
-For operations that Table doesn't do internally, FITS tables (ASCII or binary) 
are directly accessible to the users of Unix-like operating systems (in 
particular those working the command-line or shell, see @ref{Command-line 
interface}).
+For operations that Table does not do internally, FITS tables (ASCII or 
binary) are directly accessible to the users of Unix-like operating systems (in 
particular those working the command-line or shell, see @ref{Command-line 
interface}).
 With Table, a FITS table (in binary or ASCII formats) is only one command away 
from AWK (or any other tool you want to use).
 Just like a plain text file that you read with the @command{cat} command.
 You can pipe the output of Table into any other tool for higher-level 
processing, see the examples in @ref{Invoking asttable} for some simple 
examples.
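
For example, the minimal sketch below (the column names and threshold are
hypothetical) pipes two columns into AWK to print the declinations of rows
with a large RA:

@example
$ asttable table.fits -cRA,DEC | awk '$1>250 @{print $2@}'
@end example
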
@@ -13891,7 +13887,7 @@ $ echo "My path is: $PATH"
 @end example
 
 If you actually want to return the literal string @code{$PATH}, not the value 
in the @code{PATH} variable (like the scenario here in column arithmetic), you 
should put it in single quotes like below.
-The printed value here will include the @code{$}, please try it to see for 
your self and compare to above.
+The printed value here will include the @code{$}, please try it to see for 
yourself and compare to above.
 
 @example
 $ echo 'My path is: $PATH'
@@ -13913,12 +13909,12 @@ When the columns have descriptive names, the 
command/script actually becomes muc
 It is also independent of the low-level table structure: for the second 
command, the column numbers of the @code{AWAV} and @code{SPECTRUM} columns in 
@file{table.fits} is irrelevant.
 
 Column arithmetic changes the values of the data within the column.
-So the old column meta data can't be used any more.
+So the old column meta data cannot be used any more.
 By default the output column of the arithmetic operation will be given
generic metadata (for example its name will be @code{ARITH_1}, which is hardly
useful!).
 But metadata are critically important and it is good practice to always have
short, but descriptive, names for each column, along with units and some
comments for more explanation.
 To add metadata to a column, you can use the @option{--colmetadata} option 
that is described in @ref{Invoking asttable} and @ref{Operation precedence in 
Table}.
 
-Since the arithmetic expressions are a value to @option{--column}, it doesn't 
necessarily have to be a separate option, so the commands above are also 
identical to the command below (note that this only has one @option{-c} option).
+Since an arithmetic expression is a value to @option{--column}, it does not
necessarily have to be in a separate option, so the commands above are also
identical to the command below (note that this only has one @option{-c} option).
 Just be very careful with the quoting!
 With the @option{--colmetadata} option, we are also giving a name, units and a 
comment to the third column.
 
@@ -13935,7 +13931,7 @@ Using column numbers can get complicated: if the number 
is smaller than the main
 Otherwise (when the requested column number is larger than the main input's 
number of columns), the final output (after appending all the columns from all 
the possible files) column number will be used.
 
 Almost all the arithmetic operators of @ref{Arithmetic operators} are also 
supported for column arithmetic in Table.
-In particular, the few that are not present in the Gnuastro 
library@footnote{For a list of the Gnuastro library arithmetic operators, 
please see the macros starting with @code{GAL_ARITHMETIC_OP} and ending with 
the operator name in @ref{Arithmetic on datasets}.} aren't yet supported for 
column arithmetic.
+In particular, the few that are not present in the Gnuastro 
library@footnote{For a list of the Gnuastro library arithmetic operators, 
please see the macros starting with @code{GAL_ARITHMETIC_OP} and ending with 
the operator name in @ref{Arithmetic on datasets}.} are not yet supported for 
column arithmetic.
 Besides the operators in @ref{Arithmetic operators}, several operators are 
only available in Table to use on table columns.
 
 
@@ -14015,7 +14011,7 @@ The distance (along a great circle) on a sphere between 
two points is calculated
 Convert the hour-wise Right Ascension (RA) string, in the sexagesimal format 
of @code{_h_m_s} or @code{_:_:_}, to degrees.
 Note that the input column has to have a string format.
 In FITS tables, string columns are well-defined.
-For plain-text tables, please follow the standards defined in @ref{Gnuastro 
text table format}, otherwise the string column won't be read.
+For plain-text tables, please follow the standards defined in @ref{Gnuastro 
text table format}, otherwise the string column will not be read.
 @example
 $ asttable catalog.fits -c'arith RA ra-to-degree'
 $ asttable catalog.fits -c'arith $5 ra-to-degree'
@@ -14058,7 +14054,7 @@ $ astfits *.fits --keyvalue=DATE-OBS --colinfoinstdout \
           | asttable --sort=UNIXSEC
 @end example
 
-If you don't need to see the Unix-seconds any more, you can add a 
@option{-cFILENAME} (short for @option{--column=FILENAME}) at the end.
+If you do not need to see the Unix-seconds any more, you can add a 
@option{-cFILENAME} (short for @option{--column=FILENAME}) at the end.
 For more on @option{--keyvalue}, see @ref{Keyword inspection and manipulation}.
 
 @item date-to-millisec
@@ -14075,7 +14071,7 @@ See the description of that operator for an example.
 @node Operation precedence in Table, Invoking asttable, Column arithmetic, 
Table
 @subsection Operation precedence in Table
 
-The Table program can do many operations on the rows and columns of the input 
tables and they aren't always applied in the order you call the operation on 
the command-line.
+The Table program can do many operations on the rows and columns of the input 
tables and they are not always applied in the order you call the operation on 
the command-line.
 In this section we will describe which operation is done before/after which 
operation.
 Knowing this precedence table is important to avoid confusion when you ask for 
more than one operation.
 For a description of each option, please see @ref{Invoking asttable}.
@@ -14089,7 +14085,7 @@ This can therefore be called at the end of an 
arbitrarily long Table command onl
 
 @item Column selection (@option{--column})
 When this option is given, only the columns given to this option (from the 
main input) will be used for all future steps.
-When @option{--column} (or @option{-c}) isn't given, then all the main input's 
columns will be used in the next steps.
+When @option{--column} (or @option{-c}) is not given, then all the main 
input's columns will be used in the next steps.
 
 @item Column(s) from other file(s) (@option{--catcolumnfile} and 
@option{--catcolumnhdu}, @option{--catcolumns})
 When column concatenation (addition) is requested, columns from other tables 
(in other files, or other HDUs of the same FITS file) will be added after the 
existing columns read from the main input.
@@ -14125,7 +14121,7 @@ If the data types are different, you can use the type 
conversion operators of Ta
 @item
 @option{--notequal}: only keep rows without specified value in given column.
 @item
-@option{--noblank}: only keep rows that aren't blank in the given column(s).
+@option{--noblank}: only keep rows that are not blank in the given column(s).
 @end itemize
 These options take certain column(s) as input and remove some rows from the 
full table (all columns), based on the given limitations.
 They can be called any number of times (to limit the final rows based on 
values in different columns for example).
@@ -14133,7 +14129,7 @@ Since these are row-rejection operations, their 
internal order is irrelevant.
 In other words, it makes no difference if @option{--equal} is called before or 
after @option{--range} for example.
 
 As a side-effect, because NaN/blank values are defined to fail on any 
condition, these operations will also remove rows with NaN/blank values in the 
specified column they are checking.
-Also, the columns that are used for these operations don't necessarily have to 
be in the final output table (you may not need the column after doing the 
selection based on it).
+Also, the columns that are used for these operations do not necessarily have 
to be in the final output table (you may not need the column after doing the 
selection based on it).
 
 Even though these options are applied after merging columns from other tables, 
currently their condition-columns can only come from the main input table.
 In other words, even though the rows of the added columns (from another file) 
will also be selected with these options, the condition to keep/reject rows 
cannot be taken from the newly added columns.
@@ -14167,7 +14163,7 @@ For more on column arithmetic, see @ref{Column 
arithmetic}.
 Changing column metadata is necessary after column arithmetic or adding new 
columns from other tables (that were done above).
 
 @item Output row selection (@option{--noblankend})
-Only keep the output rows that don't have a blank value in the given column(s).
+Only keep the output rows that do not have a blank value in the given 
column(s).
 For example, you may need to apply arithmetic operations on the columns 
(through @ref{Column arithmetic}) before rejecting the undesired rows.
 After the arithmetic operation is done, you can use the @code{where} operator 
to set the non-desired columns to NaN/blank and use @option{--noblankend} 
option to remove them just before writing the output.
 In other scenarios, you may want to remove blank values based on columns in 
another table.
@@ -14200,7 +14196,7 @@ During the process, besides a name, we also set a unit 
and description for the n
 These metadata entries are @emph{very important}, so always be sure to add 
metadata after doing column arithmetic.
 @item
 The lowest precedence operation is @option{--noblankend=MULTIP}.
-So only rows that aren't blank/NaN in the @code{MULTIP} column are kept.
+So only rows that are not blank/NaN in the @code{MULTIP} column are kept.
 @item
 Finally, the output table (with three columns) is written to the command-line.
 If you also want to print the column metadata, you can use the 
@option{--colinfoinstdout} option.
@@ -14213,7 +14209,7 @@ Alternatively, if you want the output in a file, you 
can use the @option{--outpu
 In this case you can pipe the output of Table into another call of Table and 
use the @option{--colinfoinstdout} option to preserve the metadata between the 
two calls.
 
 For example, let's assume that you want to sort the output table from the 
example command above based on the new @code{MULTIP} column.
-Since sorting is done prior to column arithmetic, you can't do it in one 
command, but you can circumvent this limitation by simply piping the output 
(including metadata) to another call to Table:
+Since sorting is done prior to column arithmetic, you cannot do it in one 
command, but you can circumvent this limitation by simply piping the output 
(including metadata) to another call to Table:
 
 @example
 asttable table.fits -cRA,DEC --noblankend=MULTIP --colinfoinstdout \
@@ -14366,13 +14362,13 @@ The number of rows in the file(s) given to this 
option has to be the same as the
 By default all the columns of the given file will be appended, if you only 
want certain columns to be appended, use the @option{--catcolumns} option to 
specify their name or number (see @ref{Selecting table columns}).
 Note that the columns given to @option{--catcolumns} must be present in all 
the given files (if this option is called more than once with more than one 
file).
 
-If the file given to this option is a FITS file, its necessary to also define 
the corresponding HDU/extension with @option{--catcolumnhdu}.
+If the file given to this option is a FITS file, it is necessary to also 
define the corresponding HDU/extension with @option{--catcolumnhdu}.
 Also note that no operation (for example row selection or arithmetic) is
applied to the table given to this option.
 
 If the appended columns have a name, and their name is already present in the 
table before adding those columns, the column names of each file will be 
appended with a @code{-N}, where @code{N} is a counter starting from 1 for each 
appended table.
 Just note that in the FITS standard (and thus in Gnuastro), column names are 
not case-sensitive.
 
-This is done because when concatenating columns from multiple tables (more 
than two) into one, they may have the same name, and its not good practice to 
have multiple columns with the same name.
+This is done because when concatenating columns from multiple tables (more 
than two) into one, they may have the same name, and it is not good practice to 
have multiple columns with the same name.
 You can disable this feature with @option{--catcolumnrawname}.
 Generally, you can use the @option{--colmetadata} option to update column 
metadata in the same command, after all the columns have been concatenated.
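
For example, a minimal sketch (the file, HDU and column names are hypothetical)
that appends only the @code{MAG} column of another FITS table:

@example
$ asttable main.fits --catcolumnfile=extra.fits \
           --catcolumnhdu=1 --catcolumns=MAG
@end example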
 
@@ -14402,7 +14398,7 @@ Because @option{--catcolumnfile} will load all the 
columns in one opening of the
 @item -u STR/INT
 @itemx --catcolumnhdu=STR/INT
 The HDU/extension of the FITS file(s) that should be concatenated, or 
appended, by column with @option{--catcolumnfile}.
-If @option{--catcolumn} is called more than once with more than one FITS file, 
its necessary to call this option more than once.
+If @option{--catcolumnfile} is called more than once with more than one FITS
file, it is necessary to call this option more than once.
 The HDUs will be loaded in the same order as the FITS files given to 
@option{--catcolumnfile}.
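
For example, when appending columns from two FITS files (all names and HDUs
below are hypothetical), this option is given once per file, in the same order:

@example
$ asttable main.fits --catcolumnfile=a.fits --catcolumnhdu=1 \
           --catcolumnfile=b.fits --catcolumnhdu=2
@end example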
 
 @item -C STR/INT
@@ -14412,7 +14408,7 @@ When this option is not given, all the columns will be 
concatenated.
 See @option{--catcolumnfile} for more.
 
 @item --catcolumnrawname
-Don't modify the names of the concatenated (appended) columns, see description 
in @option{--catcolumnfile}.
+Do not modify the names of the concatenated (appended) columns, see the
description in @option{--catcolumnfile}.
 
 @item -R FITS/TXT
 @itemx --catrowfile=FITS/TXT
@@ -14429,7 +14425,7 @@ $ asttable a.fits --catrowfile=b.fits --catrowhdu=1 \
 
 @cartouche
 @noindent
-@strong{How to avoid repetition when adding rows:} this option will simply add 
the rows of multiple tables into one, it doesn't check their contents!
+@strong{How to avoid repetition when adding rows:} this option will simply add 
the rows of multiple tables into one, it does not check their contents!
 Therefore if you use this option on multiple catalogs that may have some 
shared physical objects in some of their rows, those rows/objects will be 
repeated in the final table.
 In such scenarios, to avoid potential repetition, it is better to use 
@ref{Match} (with @option{--notmatched} and @option{--outcols=AAA,BBB}) instead 
of Table.
 For more on using Match for this scenario, see the description of 
@option{--outcols} in @ref{Invoking astmatch}.
@@ -14438,7 +14434,7 @@ For more on using Match for this scenario, see the 
description of @option{--outc
 @item -X STR
 @itemx --catrowhdu=STR
 The HDU/extension of the FITS file(s) that should be concatenated, or 
appended, by rows with @option{--catrowfile}.
-If @option{--catrowfile} is called more than once with more than one FITS 
file, its necessary to call this option more than once also (once for every 
FITS table given to @option{--catrowfile}).
+If @option{--catrowfile} is called more than once with more than one FITS
file, it is necessary to also call this option more than once (once for every
FITS table given to @option{--catrowfile}).
 The HDUs will be loaded in the same order as the FITS files given to 
@option{--catrowfile}.
 
 @item -O
@@ -14446,7 +14442,7 @@ The HDUs will be loaded in the same order as the FITS 
files given to @option{--c
 @cindex Standard output
 Add column metadata when the output is printed in the standard output.
 Usually the standard output is used for a fast visual check, or to pipe into 
other metadata-agnostic programs (like AWK) for further processing.
-So by default meta-data aren't included.
+So by default meta-data are not included.
 But when piping to other Gnuastro programs (where metadata can be interpreted 
and used) it is recommended to use this option and use column names in the next 
program.
 
 @item -r STR,FLT:FLT
@@ -14460,8 +14456,8 @@ For the precedence of this operation in relation to 
others, see @ref{Operation p
 This option can be called multiple times (different ranges for different 
columns) in one run of the Table program.
 This is very useful for selecting the final rows from multiple 
criteria/columns.
 
-The chosen column doesn't have to be in the output columns.
-This is good when you just want to select using one column's values, but don't 
need that column anymore afterwards.
+The chosen column does not have to be in the output columns.
+This is good when you just want to select using one column's values, but do 
not need that column anymore afterwards.
 
 For one example of using this option, see the example under
 @option{--sigclip-median} in @ref{Invoking aststatistics}.
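
As another minimal sketch (assuming a hypothetical @code{MAG} column), the
command below only keeps rows with a value between 18 and 22 in that column:

@example
$ asttable table.fits --range=MAG,18:22
@end example
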
@@ -14471,7 +14467,7 @@ Only return rows where the given coordinates are inside 
the polygon specified by
 The coordinate columns are the given @code{STR1} and @code{STR2} columns, they 
can be a column name or counter (see @ref{Selecting table columns}).
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
-Note that the chosen columns doesn't have to be in the output columns (which 
are specified by the @code{--column} option).
+Note that the chosen columns do not have to be in the output columns (which
are specified by the @code{--column} option).
 For example if we want to select rows in the polygon specified in @ref{Dataset 
inspection and cropping}, this option can be used like this (you can remove the 
double quotations and write them all in one line if you remove the white-spaces 
around the colon separating the column vertices):
 
 @example
@@ -14511,7 +14507,7 @@ This option can also be called multiple times, so 
@option{--equal=ID,4,5 --equal
 @noindent
 @strong{Equality and floating point numbers:} Floating point numbers are only 
approximate values (see @ref{Numeric data types}).
 In this context, their equality depends on how the input table was originally 
stored (as a plain text table or as an ASCII/binary FITS table).
-If you want to select floating point numbers, it is strongly recommended to 
use the @option{--range} option and set a very small interval around your 
desired number, don't use @option{--equal} or @option{--notequal}.
+If you want to select floating point numbers, it is strongly recommended to
use the @option{--range} option and set a very small interval around your
desired number; do not use @option{--equal} or @option{--notequal}.
 @end cartouche
 
 The @option{--equal} and @option{--notequal} options also work when the given 
column has a string type.
@@ -14535,7 +14531,7 @@ $ asttable table.fits --equal=AB,cd\,ef
 @itemx --notequal=STR,INT/FLT,...
 Only output rows that are @emph{not} equal to the given number(s) in the given 
column.
 The first argument is the column identifier (name or number, see 
@ref{Selecting table columns}), after that you can specify any number of values.
-For example @option{--notequal=ID,5,6,8} will only print the rows where the 
@code{ID} column doesn't have value of 5, 6, or 8.
+For example @option{--notequal=ID,5,6,8} will only print the rows where the 
@code{ID} column does not have value of 5, 6, or 8.
 This option can also be called multiple times, so @option{--notequal=ID,4,5
--notequal=ID,6,7} has the same effect as @option{--notequal=ID,4,5,6,7}.
 
 Be very careful if you want to use the non-equality with floating point 
numbers, see the special note under @option{--equal} for more.
@@ -14553,7 +14549,7 @@ For example if @file{table.fits} has blank values (NaN 
in floating point types)
 If you want @emph{all} columns to be checked, simply set the value to 
@code{_all} (in other words: @option{--noblank=_all}).
 This mode is useful when there are many columns in the table and you want a 
``clean'' output table (with no blank values in any column): entering their 
name or number one-by-one can be buggy and frustrating.
 In this mode, no other column name should be given.
-For example if you give @option{--noblank=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it doesn't, it will abort with an error.
+For example if you give @option{--noblank=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it does not, it will abort with an error.
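For example, the command below is a minimal sketch of this mode on a
hypothetical @file{table.fits}: it only returns the rows that have no blank
value in any column.

@example
$ asttable table.fits --noblank=_all
@end example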
 
 If you want to change column values using @ref{Column arithmetic} (and set 
some to blank, to later remove), or you want to select rows based on columns 
that you have imported from other tables, you should use the 
@option{--noblankend} option described below.
 Also, see @ref{Operation precedence in Table}.
@@ -14564,8 +14560,8 @@ Sort the output rows based on the values in the 
@code{STR} column (can be a colu
 By default the sort is done in ascending/increasing order, to sort in a 
descending order, use @option{--descending}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
-The chosen column doesn't have to be in the output columns.
-This is good when you just want to sort using one column's values, but don't 
need that column anymore afterwards.
+The chosen column does not have to be in the output columns.
+This is good when you just want to sort using one column's values, but do not 
need that column anymore afterwards.
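For instance, the sketch below (the @code{MAG} column name is hypothetical)
prints the full table with its rows sorted by decreasing magnitude:

@example
$ asttable table.fits --sort=MAG --descending
@end example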
 
 @item -d
 @itemx --descending
@@ -14580,7 +14576,7 @@ This option cannot be called with @option{--tail}, 
@option{--rowrange} or @optio
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
 @cindex GNU Coreutils
-If the given value to @option{--head} is 0, the output columns won't have any 
rows and if its larger than the number of rows in the input table, all the rows 
are printed (this option is effectively ignored).
+If the value given to @option{--head} is 0, the output columns will not have
any rows, and if it is larger than the number of rows in the input table, all
the rows are printed (this option is effectively ignored).
 This behavior is taken from the @command{head} program in GNU Coreutils.
 
 @item -t INT
@@ -14604,7 +14600,7 @@ Therefore this option cannot be called with 
@option{--head}, @option{--tail}, or
 @cindex Row selection, by random
 Select @code{INT} rows from the input table randomly (assuming a uniform
distribution).
 This option is applied @emph{after} the value-based selection options (like
@option{--sort}, @option{--range} and @option{--polygon}).
-On the other hand, only the row counters are randomly selected, this option 
doesn't change the order.
+On the other hand, only the row counters are randomly selected; this option
does not change the order.
 Therefore, if @option{--rowrandom} is called together with @option{--sort}, 
the returned rows are still sorted.
 This option cannot be called with @option{--head}, @option{--tail}, or 
@option{--rowrange}.
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
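For example, the command below is a small sketch that writes 100 randomly
selected rows of the input into a new file:

@example
$ asttable table.fits --rowrandom=100 --output=random.fits
@end example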
@@ -14633,13 +14629,13 @@ For the precedence of this operation in relation to 
others, see @ref{Operation p
 For example if your final output table (possibly after column arithmetic, or 
adding new columns) has blank values (NaN in floating point types) in the 
@code{magnitude} and @code{sn} columns, with @code{--noblankend=magnitude,sn}, 
the output will not contain any rows with blank values in these two columns.
 
 If you want blank values to be removed from the main input table _before_ any 
further processing (like adding columns, sorting or column arithmetic), you 
should use the @option{--noblank} option.
-With the @option{--noblank} option, the column(s) that is(are) given doesn't 
necessarily have to be in the output (it is just temporarily used for reading 
the inputs and selecting rows, but doesn't necessarily need to be present in 
the output).
+With the @option{--noblank} option, the given column(s) do not necessarily
have to be in the output: they are just used temporarily for reading the
inputs and selecting rows.
 However, the column(s) given to this option should exist in the output.
 
 If you want @emph{all} columns to be checked, simply set the value to 
@code{_all} (in other words: @option{--noblankend=_all}).
 This mode is useful when there are many columns in the table and you want a 
``clean'' output table (with no blank values in any column): entering their 
name or number one-by-one can be buggy and frustrating.
 In this mode, no other column name should be given.
-For example if you give @option{--noblankend=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it doesn't, it will abort with an error.
+For example if you give @option{--noblankend=_all,magnitude}, then Table will 
assume that your table actually has a column named @code{_all} and 
@code{magnitude}, and if it does not, it will abort with an error.
 
 This option is applied just before writing the final table (after 
@option{--colmetadata} has finished).
 So in case you changed the column metadata, or added new columns, you can use 
the new names, or the newly defined column numbers.
@@ -14652,12 +14648,12 @@ This option is applied after all other column-related 
operations are complete, f
 For the precedence of this operation in relation to others, see @ref{Operation 
precedence in Table}.
 
 The first value (before the first comma) given to this option is the column's 
identifier.
-It can either be a counter (positive integer, counting from 1), or a name (the 
column's name in the output if this option wasn't called).
+It can either be a counter (positive integer, counting from 1), or a name (the 
column's name in the output if this option was not called).
 
 After the to-be-updated column is identified, at least one other string should 
be given, with a maximum of three strings.
 The first string after the original name will be the selected column's new name.
 The next (optional) string will be the selected column's unit and the third 
(optional) will be its comments.
-If the two optional strings aren't given, the original column's units or 
comments will remain unchanged.
+If the two optional strings are not given, the original column's units or 
comments will remain unchanged.
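For example, the sketch below (the new name and unit are arbitrary) renames the
third column to @code{MAG_F160W} and sets its unit to @code{mag}, leaving its
comments untouched:

@example
$ asttable table.fits --colmetadata=3,MAG_F160W,mag
@end example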
 
 If any of the values contains a comma, you should place a `@code{\}' before 
the comma to avoid it getting confused with a delimiter.
 For example see the command below for a column description that contains a 
comma:
@@ -14722,12 +14718,12 @@ Finally, if you already have a FITS table by other 
means (for example by downloa
 @cindex Astronomical Data Query Language (ADQL)
 There are many astronomical databases available for downloading astronomical 
data.
 Most follow the International Virtual Observatory Alliance (IVOA, 
@url{https://ivoa.net}) standards (and in particular the Table Access Protocol, 
or TAP@footnote{@url{https://ivoa.net/documents/TAP}}).
-With TAP, it is possible to submit your queries via a command-line downloader 
(for example @command{curl}) to only get specific tables, targets (rows in a 
table) or measurements (columns in a table): you don't have to download the 
full table (which can be very large in some cases)!
+With TAP, it is possible to submit your queries via a command-line downloader 
(for example @command{curl}) to only get specific tables, targets (rows in a 
table) or measurements (columns in a table): you do not have to download the 
full table (which can be very large in some cases)!
 These customizations are done through the Astronomical Data Query Language 
(ADQL@footnote{@url{https://ivoa.net/documents/ADQL}}).
 
 Therefore, if you are sufficiently familiar with TAP and ADQL, you can easily 
custom-download any part of an online dataset.
 However, you also need to keep a record of the URLs of each database and in 
many cases, the commands will become long and hard/buggy to type on the 
command-line.
-On the other hand, most astronomers don't know TAP or ADQL at all, and are 
forced to go to the database's web page which is slow (it needs to download so 
many images, and has too much annoying information), requires manual 
interaction (further making it slow and buggy), and can't be automated.
+On the other hand, most astronomers do not know TAP or ADQL at all, and are 
forced to go to the database's web page which is slow (it needs to download so 
many images, and has too much annoying information), requires manual 
interaction (further making it slow and buggy), and cannot be automated.
 
 Gnuastro's Query program is designed to be the middle-man in this process: it 
provides a simple high-level interface to let you specify your constraints on 
what you want to download.
 It then internally constructs the command to download the data based on your 
inputs and runs it to download your desired data.
@@ -14742,7 +14738,7 @@ $ astfits query-output.fits -h0
 With the full command used to download the dataset, you only need a minimal 
knowledge of ADQL to do lower-level customizations on your downloaded dataset.
 You can simply copy that command and change the parts of the query string you 
want: ADQL is very powerful!
 For example you can ask the server to do mathematical operations on the
columns and apply selections after those operations, or combine/match multiple
datasets.
-We will try to add high-level interfaces for such capabilities, but generally, 
don't limit yourself to the high-level operations (that can't cover 
everything!).
+We will try to add high-level interfaces for such capabilities, but generally, 
do not limit yourself to the high-level operations (that cannot cover 
everything!).
 
 @menu
 * Available databases::         List of available databases to Query.
@@ -14784,7 +14780,7 @@ For very popular datasets of a database, Query provides 
an easier-to-remember sh
 This short name will map to the officially recognized name of the dataset on 
the server.
 In this mode, Query will also set positional columns accordingly.
 For example most VizieR datasets have an @code{RAJ2000} column (the RA at the
epoch of 2000), so it is the default RA column name for coordinate search
(using @option{--center} or @option{--overlapwith}).
-However, some datasets don't have this column (for example SDSS DR12).
+However, some datasets do not have this column (for example SDSS DR12).
 So when you use the short name and Query knows about this dataset, it will 
internally set the coordinate columns that SDSS DR12 has: @code{RA_ICRS} and 
@code{DEC_ICRS}.
 Recall that you can always change the coordinate columns with @option{--ccol}.
 
@@ -14797,7 +14793,7 @@ In the list below that describes the available 
databases, the available short na
 @cartouche
 @noindent
 @strong{Not all datasets support TAP:} Large databases like VizieR have TAP 
access for all their datasets.
-However, smaller databases haven't implemented TAP for all their tables.
+However, smaller databases have not implemented TAP for all their tables.
 Therefore some datasets that are searchable in their web interface may not be 
available for a TAP search.
 To see the full list of TAP-ed datasets in a database, use the 
@option{--information} (or @option{-i}) option with the dataset name like the 
command below.
 
@@ -14806,7 +14802,7 @@ $ astquery astron -i
 @end example
 
 @noindent
-If your desired dataset isn't in this list, but has web-access, contact the 
database maintainers and ask them to add TAP access for it.
+If your desired dataset is not in this list, but has web-access, contact the 
database maintainers and ask them to add TAP access for it.
 After they do it, you should see the name added to the output list of the 
command above.
 @end cartouche
 
@@ -14870,7 +14866,7 @@ A TAP query to @code{ned} is submitted to 
@code{https://ned.ipac.caltech.edu/tap
 @code{extinction}: A command-line interface to the 
@url{https://ned.ipac.caltech.edu/extinction_calculator, NED Extinction 
Calculator}.
 It only takes a central coordinate and returns a VOTable of the calculated 
extinction in many commonly used filters at that point.
 As a result, options like @option{--width} or @option{--radius} are not 
supported.
-However, Gnuastro doesn't yet support the VOTable format.
+However, Gnuastro does not yet support the VOTable format.
 Therefore, if you specify an @option{--output} file, it should have an 
@file{.xml} suffix and the downloaded file will not be checked.
 
 Until VOTable support is added to Gnuastro, you can use GREP, AWK and SED to 
convert the VOTable data into a FITS table with a command like below (assuming 
the queried VOTable is called @file{ned-extinction.xml}):
@@ -14907,7 +14903,7 @@ Almost all published catalogs in major projects, and 
even the tables in many pap
 For example VizieR also has a full copy of the Gaia database mentioned below, 
with some additional standardized columns (like RA and Dec in J2000).
 
 The current implementation of @option{--limitinfo} only looks into the 
description of the datasets, but since VizieR is so large, there is still a lot 
of room for improvement.
-Until then, if @option{--limitinfo} isn't sufficient, you can use VizieR's own 
web-based search for your desired dataset: 
@url{http://cdsarc.u-strasbg.fr/viz-bin/cat}
+Until then, if @option{--limitinfo} is not sufficient, you can use VizieR's 
own web-based search for your desired dataset: 
@url{http://cdsarc.u-strasbg.fr/viz-bin/cat}
 
 
 Because VizieR curates such a diverse set of data from tens of thousands of 
projects and aims for interoperability between them, the column names in VizieR 
may not be identical to the column names in the surveys' own databases (Gaia in 
the example above).
@@ -15034,7 +15030,7 @@ If this option is given, the raw string is directly 
passed to the server and all
 @strong{High-level:}
 With the high-level options (like @option{--column}, @option{--center}, 
@option{--radius}, @option{--range} and other constraining options below), the 
low-level query will be constructed automatically for the particular database.
 This method is only limited to the generic capabilities that Query provides 
for all servers.
-So @option{--query} is more powerful, however, in this mode, you don't need 
any knowledge of the database's query language.
+So @option{--query} is more powerful; however, in this mode, you do not need
any knowledge of the database's query language.
 You can see the internally generated query on the terminal (if 
@option{--quiet} is not used) or in the 0-th extension of the output (if it is 
a FITS file).
 This full command contains the internally generated query.
 @end itemize
@@ -15043,7 +15039,7 @@ The name of the downloaded output file can be set with 
@option{--output}.
 The requested output format can have any of the @ref{Recognized table formats} 
(currently @file{.txt} or @file{.fits}).
 Like all Gnuastro programs, if the output is a FITS file, the zero-th/first 
HDU of the output will contain all the command-line options given to Query as 
well as the full command used to access the server.
 When @option{--output} is not set, the output name will be in the format of 
@file{NAME-STRING.fits}, where @file{NAME} is the name of the database and 
@file{STRING} is a randomly selected 6-character set of numbers and alphabetic 
characters.
-With this feature, a second run of @command{astquery} that isn't called with 
@option{--output} will not over-write an already downloaded one.
+With this feature, a second run of @command{astquery} that is not called with 
@option{--output} will not over-write an already downloaded one.
 Generally, when calling Query more than once, it is recommended to set an 
output name for each call based on your project's context.
 
 The outputs of Query will have a common output format, irrespective of the 
used database.
@@ -15054,25 +15050,25 @@ This strategy avoids unnecessary surprises depending 
on database.
 For example some databases can download a compressed FITS table, even though 
we ask for FITS.
 But with the strategy above, the final output will be an uncompressed FITS 
file.
 The metadata that is added by Query (including the full download command) is 
also very useful for future usage of the downloaded data.
-Unfortunately many databases don't write the input queries into their 
generated tables.
+Unfortunately many databases do not write the input queries into their 
generated tables.
 
 @table @option
 
 @item --dry-run
-Only print the final download command to contact the server, don't actually 
run it.
+Only print the final download command to contact the server; do not actually
run it.
 This option is good when you want to check the finally constructed query or 
download options given to the download program.
 You may also want to use the constructed command as a base to do further 
customizations on it and run it yourself.
 
 @item -k
 @itemx --keeprawdownload
-Don't delete the raw downloaded file from the database.
+Do not delete the raw downloaded file from the database.
 The name of the raw download will have an @file{OUTPUT-raw-download.fits}
format, where @file{OUTPUT} is the base-name of the final output file (without
a suffix).
 
 @item -i
 @itemx --information
 Print the information of all datasets (tables) within a database or all 
columns within a database.
-When @option{--dataset} is specified, the latter mode (all column information) 
is downloaded and printed and when its not defined, all dataset information 
(within the database) is printed.
+When @option{--dataset} is specified, the latter mode (all column information)
is downloaded and printed; when it is not defined, all dataset information
(within the database) is printed.
 
 Some databases (like VizieR) contain tens of thousands of datasets, so you can 
limit the downloaded and printed information for available databases with the 
@option{--limitinfo} option (described below).
 Dataset descriptions are often large and contain a lot of text (unlike column 
descriptions).
@@ -15101,7 +15097,7 @@ The queries will generally contain space and other 
meta-characters, so we recomm
 @item -s STR
 @itemx --dataset=STR
 The dataset to query within the database (not compatible with 
@option{--query}).
-This option is mandatory when @option{--query} or @option{--information} 
aren't provided.
+This option is mandatory when @option{--query} or @option{--information} are 
not provided.
 You can see the list of available datasets within a database using 
@option{--information} (possibly supplemented by @option{--limitinfo}).
 The output of @option{--information} will contain the recognized name of the 
datasets within that database.
 You can pass the recognized name directly to this option.
@@ -15113,7 +15109,7 @@ The column name(s) to retrieve from the dataset in the 
given order (not compatib
 If not given, all the dataset's columns for the selected rows will be queried 
(which can be large!).
 This option can take multiple values in one instance (for example 
@option{--column=ra,dec,mag}), or in multiple instances (for example 
@option{-cra -cdec -cmag}), or mixed (for example @option{-cra,dec -cmag}).
 
-In case, you don't know the full list of the dataset's column names a-priori, 
and you don't want to download all the columns (which can greatly decrease your 
download speed), you can use the @option{--information} option combined with 
the @option{--dataset} option, see @ref{Available databases}.
+In case you do not know the full list of the dataset's column names a-priori,
and you do not want to download all the columns (which can greatly decrease
your download speed), you can use the @option{--information} option combined
with the @option{--dataset} option, see @ref{Available databases}.
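For example, the sketch below (using VizieR's 2MASS point source catalog
identifier as an illustrative dataset name) prints the column information of
that dataset, so you can select column names before the actual download:

@example
$ astquery vizier --dataset=II/246/out --information
@end example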
 
 @item -H INT
 @itemx --head=INT
@@ -15125,7 +15121,7 @@ This can be good when your search can result a large 
dataset, but before downloa
 File name of FITS file containing an image (in the HDU given by
@option{--hdu}) to use for identifying the region to query in the given
database and dataset.
 Based on the image's WCS and pixel size, the sky coverage of the image is 
estimated and values to the @option{--center}, @option{--width} will be 
calculated internally.
 Hence this option cannot be used with @code{--center}, @code{--width} or 
@code{--radius}.
-Also, since it internally generates the query, it can't be used with 
@code{--query}.
+Also, since it internally generates the query, it cannot be used with 
@code{--query}.
 
 Note that if the image has WCS distortions and the reference point for the WCS 
is not within the image, the WCS will not be well-defined.
 Therefore the resulting catalog may not overlap, or correspond to a 
larger/small area in the sky.
@@ -15142,7 +15138,7 @@ The region can either be a circle and the point 
(configured with @option{--radiu
 @item --ccol=STR,STR
 The name of the coordinate-columns in the dataset to compare with the values 
given to @option{--center}.
 Query will use its internal defaults for each dataset (for example 
@code{RAJ2000} and @code{DEJ2000} for VizieR data).
-But each dataset is treated separately and it isn't guaranteed that these 
columns exist in all datasets.
+But each dataset is treated separately and it is not guaranteed that these 
columns exist in all datasets.
 Also, more than one coordinate system/epoch may be present in a dataset, and
you can use this option to construct your spatial constraint based on the
other coordinate systems/epochs.
 
 @item -r FLT
@@ -15173,7 +15169,7 @@ Then you can edit it to be non-inclusive on your 
desired side.
 
 @item -b STR[,STR]
 @itemx --noblank=STR[,STR]
-Only ask for rows that don't have a blank value in the @code{STR} column.
+Only ask for rows that do not have a blank value in the @code{STR} column.
 This option can be called many times, and each call can have multiple column 
names (separated by a comma or @key{,}).
 For example if you want the retrieved rows to not have a blank value in 
columns @code{A}, @code{B}, @code{C} and @code{D}, you can use 
@command{--noblank=A -bB,C,D}.
 
@@ -15242,7 +15238,7 @@ Because of this, surveys usually cut the final image 
into separate tiles and sto
 For example the COSMOS survey's Hubble Space Telescope ACS F814W image
consists of 81 separate FITS images, each with a volume of 1.7 Gigabytes.
 
 @cindex Stitch multiple images
-Even though the tile sizes are chosen to be large enough that too many 
galaxies/targets don't fall on the edges of the tiles, inevitably some do.
+Even though the tile sizes are chosen to be large enough that not too many
galaxies/targets fall on the edges of the tiles, inevitably some do.
 So when you simply crop the image of such targets from one tile, you will miss 
a large area of the surrounding sky (which is essential in estimating the 
noise).
 Therefore in its WCS mode, Crop will stitch parts of the tiles that are 
relevant for a target (with the given width) from all the input images that 
cover that region into the output.
 Of course, the tiles have to be present in the list of input files.
@@ -15281,7 +15277,7 @@ For the full list options, please see @ref{Invoking 
astcrop}.
 When the crop is defined by its center, the respective (integer) central pixel 
position will be found internally according to the FITS standard.
 To have this pixel positioned in the center of the cropped region, the final 
cropped region will have an add number of pixels (even if you give an even 
number to @option{--width} in image mode).
 
-Furthermore, when the crop is defined as by its center, Crop allows you to 
only keep crops what don't have any blank pixels in the vicinity of their 
center (your primary target).
+Furthermore, when the crop is defined by its center, Crop allows you to
only keep crops that do not have any blank pixels in the vicinity of their
center (your primary target).
 This can be very convenient when your input catalog/coordinates originated
from another survey/filter which is not fully covered by your input image; to
learn more about this feature, please see the description of the
@option{--checkcenter} option in @ref{Invoking astcrop}.
 
 @table @asis
@@ -15305,7 +15301,7 @@ The given coordinates and width can be any floating 
point number.
 
 @item Vertices of a single crop
 In Image mode there are two options to define the vertices of a region to 
crop: @option{--section} and @option{--polygon}.
-The former is lower-level (doesn't accept floating point vertices, and only a 
rectangular region can be defined), it is also only available in Image mode.
+The former is lower-level (it does not accept floating point vertices, and
only a rectangular region can be defined); it is also only available in Image
mode.
 Please see @ref{Crop section syntax} for a full description of this method.
 
 The latter option (@option{--polygon}) is a higher-level method to define any 
polygon (with any number of vertices) with floating point values.
@@ -15347,13 +15343,13 @@ Please see the description of this option in 
@ref{Invoking astcrop} for its synt
 @cartouche
 @noindent
 @strong{CAUTION:} In WCS mode, the image has to be aligned with the celestial 
coordinates, such that the first FITS axis is parallel (opposite direction) to 
the Right Ascension (RA) and the second FITS axis is parallel to the 
declination.
-If these conditions aren't met for an image, Crop will warn you and abort.
+If these conditions are not met for an image, Crop will warn you and abort.
 You can use Warp's @option{--align} option to align the input image with these 
coordinates, see @ref{Warp}.
 @end cartouche
 
 @end table
 
-As a summary, if you don't specify a catalog, you have to define the cropped 
region manually on the command-line.
+As a summary, if you do not specify a catalog, you have to define the cropped 
region manually on the command-line.
 In any case the mode is mandatory for Crop to be able to interpret the values 
given as coordinates or widths.
 
 
@@ -15391,7 +15387,7 @@ When confronted with a @option{*}, Crop will replace it 
with the maximum length
 So @command{*-10:*+10,*-20:*+20} will mean that the crop box will be 
@math{20\times40} pixels in size and only include the top corner of the input 
image with 3/4 of the image being covered by blank pixels, see @ref{Blank 
pixels}.
 
 If you feel more comfortable with space characters between the values, you can 
use as many space characters as you wish, just be careful to put your value in 
double quotes, for example @command{--section="5:200, 123:854"}.
-If you forget the quotes, anything after the first space will not be seen by 
@option{--section} and you will most probably get an error because the rest of 
your string will be read as a filename (which most probably doesn't exist).
+If you forget the quotes, anything after the first space will not be seen by 
@option{--section} and you will most probably get an error because the rest of 
your string will be read as a filename (which most probably does not exist).
 See @ref{Command-line} for a description of how the command-line works.
 
 
@@ -15467,7 +15463,7 @@ For the full list of general options to all Gnuastro 
programs (including Crop),
 
 Floating point numbers can be used to specify the crop region (except the 
@option{--section} option, see @ref{Crop section syntax}).
 In such cases, the floating point values will be used to find the desired 
integer pixel indices based on the FITS standard.
-Hence, Crop ultimately doesn't do any sub-pixel cropping (in other words, it 
doesn't change pixel values).
+Hence, Crop ultimately does not do any sub-pixel cropping (in other words, it 
does not change pixel values).
 If you need such crops, you can use @ref{Warp} to first warp the image to a
new pixel grid, then crop from that.
 For example, let's assume you want a crop from pixels 12.982 to 80.982 along 
the first dimension.
 You should first translate the image by @mymath{-0.482} (note that the edge of 
a pixel is at integer multiples of @mymath{0.5}).
@@ -15495,7 +15491,7 @@ This has no effect on each output, see @ref{Reddest 
clumps cutouts and paralleli
 The options can be classified into the following contexts: Input, Output and 
operating mode options.
 Options that are common to all Gnuastro program are listed in @ref{Common 
options} and will not be repeated here.
 
-When you are specifying the crop vertices your self (through 
@option{--section}, or @option{--polygon}) on relatively small regions 
(depending on the resolution of your images) the outputs from image and WCS 
mode can be approximately equivalent.
+When you are specifying the crop vertices yourself (through 
@option{--section}, or @option{--polygon}) on relatively small regions 
(depending on the resolution of your images) the outputs from image and WCS 
mode can be approximately equivalent.
 However, as the crop sizes get large, the curved nature of the WCS coordinates
has to be considered.
 For example, when using @option{--section}, the right ascension of the bottom 
left and top left corners will not be equal.
 If you only want regions within a given right ascension, use 
@option{--polygon} in WCS mode.
@@ -15554,7 +15550,7 @@ The @code{--width} option also accepts fractions.
 For example if you want the width of your crop to be 3 by 5 arcseconds along 
RA and Dec respectively and you are in wcs-mode, you can use: 
@option{--width=3/3600,5/3600}.
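Putting it all together, a minimal sketch of such a call would be the
following (the image name and central coordinates are only illustrative):

@example
$ astcrop image.fits --mode=wcs --center=53.16,-27.78 \
          --width=3/3600,5/3600
@end example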
 
 The final output will have an odd number of pixels to allow easy 
identification of the pixel which keeps your requested coordinate (from 
@option{--center} or @option{--catalog}).
-If you want an even sided crop, you can run Crop afterwards with 
@option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which 
side you don't need), see @ref{Crop section syntax}.
+If you want an even sided crop, you can run Crop afterwards with 
@option{--section=":*-1,:*-1"} or @option{--section=2:,2:} (depending on which 
side you do not need), see @ref{Crop section syntax}.
 
 The basic reason for making an odd-sided crop is that your given central 
coordinate will ultimately fall within a discrete pixel in the image (defined 
by the FITS standard).
 When the crop has an odd number of pixels in each dimension, that pixel can be 
very well defined as the ``central'' pixel of the crop, making it unambiguously 
easy to identify.
@@ -15608,9 +15604,9 @@ $ astcrop --mode=wcs desired-filter-image(s).fits       
    \
 
 @cindex SAO DS9 region file
 @cindex Region file (SAO DS9)
-More generally, you have an image and want to define the polygon yourself (it 
isn't already published like the example above).
+More generally, you have an image and want to define the polygon yourself (it 
is not already published like the example above).
 As the number of vertices increases, checking the vertex coordinates on a FITS 
viewer (for example SAO DS9) and typing them in, one by one, can be very 
tedious and prone to typo errors.
-In such cases, you can make a polygon ``region'' in DS9 and using your mouse, 
easily define (and visually see) it. Given that SAO DS9 has a graphic user 
interface (GUI), if you don't have the polygon vertices before-hand, it is much 
more easier build your polygon there and pass it onto Crop through the region 
file.
+In such cases, you can make a polygon ``region'' in DS9 and easily define (and
visually see) it with your mouse. Given that SAO DS9 has a graphical user
interface (GUI), if you do not have the polygon vertices before-hand, it is
much easier to build your polygon there and pass it onto Crop through the
region file.
 
 You can take the following steps to make an SAO DS9 region file containing 
your polygon.
 Open your desired FITS image with SAO DS9 and activate its ``region'' mode 
with @clicksequence{Edit@click{}Region}.
@@ -15623,7 +15619,7 @@ In the next window, keep format as ``ds9'' and 
``Coordinate System'' as ``fk5''
 A plain text file is now created (let's call it @file{ds9.reg}) which you can 
pass onto Crop with @command{--polygon=ds9.reg}.
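For example, a minimal sketch of such a call (the image name is arbitrary):

@example
$ astcrop image.fits --mode=wcs --polygon=ds9.reg
@end example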
 
 For the expected format of the region file, see the description of 
@code{gal_ds9_reg_read_polygon} in @ref{SAO DS9 library}.
-However, since SAO DS9 makes this file for you, you don't usually need to 
worry about its internal format unless something un-expected happens and you 
find a bug.
+However, since SAO DS9 makes this file for you, you do not usually need to
worry about its internal format unless something unexpected happens and you
find a bug.
 
 @item --polygonout
 Keep all the regions outside the polygon and mask the inner ones with blank 
pixels (see @ref{Blank pixels}).
@@ -15688,7 +15684,7 @@ The value can be either the column number (starting 
from 1), or a match/search i
 This option can be used in both Image and WCS modes, and is not mandatory.
 When a column is given to this option, the final crop base file name will be 
taken from the contents of this column.
 The directory will be determined by the @option{--output} option (current 
directory if not given) and the value to @option{--suffix} will be appended.
-When this column isn't given, the row number will be used instead.
+When this column is not given, the row number will be used instead.
 
 @end table
 
@@ -15708,12 +15704,12 @@ The units of the value are interpreted based on the 
@option{--mode} value (in WC
 The ultimate checked region size (in pixels) will be an odd integer around the 
center (converted from WCS, or when an even number of pixels are given to this 
option).
 In WCS mode, the value can be given as fractions, for example if the WCS units 
are in degrees, @code{0.1/3600} will correspond to a check size of 0.1 
arcseconds.
 
-Because survey regions don't often have a clean square or rectangle shape, 
some of the pixels on the sides of the survey FITS image don't commonly have 
any data and are blank (see @ref{Blank pixels}).
+Because survey regions often do not have a clean square or rectangular shape,
some of the pixels on the sides of the survey FITS image commonly do not have
any data and are blank (see @ref{Blank pixels}).
 So when the catalog was not generated from the input image, it often happens 
that the image does not have data over some of the points.
 
 When the given center of a crop falls in such regions or outside the dataset, 
and this option has a non-zero value, no crop will be created.
 Therefore with this option, you can specify a width of a small box (3 pixels 
is often good enough) around the central pixel of the cropped image.
-You can check which crops were created and which weren't from the command-line 
(if @option{--quiet} was not called, see @ref{Operating mode options}), or in 
Crop's log file (see @ref{Crop output}).
+You can check which crops were created and which were not from the 
command-line (if @option{--quiet} was not called, see @ref{Operating mode 
options}), or in Crop's log file (see @ref{Crop output}).
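For example, the sketch below (image name, coordinates and widths are only
illustrative) will only produce the crop if the central 2 arcseconds around
the requested center contain non-blank pixels:

@example
$ astcrop image.fits --mode=wcs --center=53.16,-27.78 \
          --width=10/3600 --checkcenter=2/3600
@end example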
 
 @item -p STR
 @itemx --suffix=STR
@@ -15763,7 +15759,7 @@ The string given to @option{--output} option will be 
interpreted depending on ho
 @itemize
 @item
 When a catalog is given, the value of the @option{--output} (see @ref{Common 
options}) will be read as the directory to store the output cropped images.
-Hence if it doesn't already exist, Crop will abort with an ``No such file or 
directory'' error.
+Hence if it does not already exist, Crop will abort with a ``No such file or
directory'' error.
 
 The crop file names will consist of two parts: a variable part (the row number 
of each target starting from 1) along with a fixed string which you can set 
with the @option{--suffix} option.
 Optionally, you may also use the @option{--namecol} option to define a column 
in the input catalog to use as the file name instead of numbers.
@@ -15795,7 +15791,7 @@ The cropped image file name for that row.
 @item
 The number of input images that were used to create that image.
 @item
-A @code{0} if the central few pixels (value to the @option{--checkcenter} 
option) are blank and @code{1} if they aren't.
+A @code{0} if the central few pixels (value to the @option{--checkcenter} 
option) are blank and @code{1} if they are not.
 When the crop was not defined by its center (see @ref{Crop modes}), or 
@option{--checkcenter} was given a value of 0 (see @ref{Invoking astcrop}), the 
center will not be checked and this column will be given a value of @code{-1}.
 @end enumerate
 
@@ -15818,11 +15814,11 @@ value=$(astcrop img.fits --mode=wcs 
--center=1.234,5.678 \
                 --quiet)
 @end example
 
-If a catalog of coordinates is given (that would produce multiple crops; or 
multiple values in this scenario), the solution for a single value won't work!
+If a catalog of coordinates is given (that would produce multiple crops; or 
multiple values in this scenario), the solution for a single value will not 
work!
 Recall that Crop will do the crops in parallel; therefore each time you run
it, the order of the rows will be different and not correspond to the order of
the inputs.
 
 To allow identification of each value (which row of the input catalog it
corresponds to), Crop will first print the would-be created file name, and
print the value after it (separated by a SPACE character).
-In other words, the file in the first column won't actually be created, but 
the value of the pixel it would have contained (if this option wasn't called) 
is printed after it.
+In other words, the file in the first column will not actually be created, but 
the value of the pixel it would have contained (if this option was not called) 
is printed after it.
 @end table
 
 
@@ -15844,7 +15840,7 @@ Therefore the following solutions can be used to fix 
this crash:
 @itemize
 @item
 Decrease the number of threads (at the minimum, set @option{--numthreads=1}).
-Since this solution doesn't attempt to change any of your previous Crop 
command components or doesn't change your local file structure, it is the 
preferred way.
+Since this solution does not attempt to change any of your previous Crop
command components or your local file structure, it is the preferred way.
 
 @item
 Decompress the file (with the command below) and feed the @file{.fits} file 
into Crop without changing the number of threads.
@@ -15909,15 +15905,15 @@ For example, instead of writing @command{5+6}, we 
write @command{5 6 +}.
 The Wikipedia article on the reverse polish notation provides some excellent
explanation of this notation, but we will give a short summary here for
self-sufficiency.
 In short, in the reverse polish notation, the operator is placed after the 
operands.
 As we will see below, this removes the need to define parentheses and lets you
use previous values without needing to define a variable.
-In the future@footnote{@url{https://savannah.gnu.org/task/index.php?13867}} we 
do plan to also optionally allow infix notation when arithmetic operations on 
datasets are desired, but due to time constraints on the developers we can't do 
it immediately.
+In the future@footnote{@url{https://savannah.gnu.org/task/index.php?13867}} we 
do plan to also optionally allow infix notation when arithmetic operations on 
datasets are desired, but due to time constraints on the developers we cannot 
do it immediately.
 
 To easily understand how the reverse polish notation works, you can think of 
each operand (@code{5} and @code{6} in the example above) as a node in a 
``last-in-first-out'' stack.
-One such stack in daily life is a stack of dishes in the kitchen: you put a 
clean dish, on the top of a stack of dishes when its ready for later usage.
+One such stack in daily life is a stack of dishes in the kitchen: you put a
clean dish on the top of the stack when it is ready for later usage.
 Later, when you need a dish, you pick the top one (hence the ``last'' dish 
placed ``in'' the stack is the ``first'' dish that comes ``out'' when 
necessary).
 
 Each operator will need a certain number of operands (in the example above, 
the @code{+} operator needs two operands: @code{5} and @code{6}).
 In the kitchen metaphor, an operator can be an oven.
-Every time an operator is confronted, the operator takes (or ``pops'') the 
number of operands it needs from the top of the stack (so they don't exist in 
the stack any more), does its operation, and places (or ``pushes'') the result 
back on top of the stack.
+Every time an operator is confronted, the operator takes (or ``pops'') the 
number of operands it needs from the top of the stack (so they do not exist in 
the stack any more), does its operation, and places (or ``pushes'') the result 
back on top of the stack.
 So if you want the average of 5 and 6, you would write: @command{5 6 + 2 /}.
 The operations that are done are:
 
@@ -15953,7 +15949,7 @@ Again, you look to your stack of dishes on the table.
 
 You pick up the top one (with value 2 inside of it) and put it in the 
microwave's bottom (denominator) dish holder.
 Then you go back to your stack of dishes on the table and pick up the top dish
(with value 11 inside of it) and put that in the top (numerator) dish holder.
-The microwave will do its work and when its finished, returns a new dish with 
the single value 5.5 inside of it.
+The microwave will do its work and when it is finished, returns a new dish 
with the single value 5.5 inside of it.
 You pick up the dish from the microwave and place it back on the table.
 
 @item
@@ -16004,14 +16000,14 @@ On the system that this paragraph was written on, the 
floating point and integer
 
 @cartouche
 @noindent
-@strong{If your data doesn't have decimal points, use integer types:} integer 
types are much faster and can take much less space in your storage or RAM 
(while the program is running).
+@strong{If your data does not have decimal points, use integer types:} integer 
types are much faster and can take much less space in your storage or RAM 
(while the program is running).
 @end cartouche
 
 @cartouche
 @noindent
 @strong{Select the smallest width that can host the range/precision of 
values}: For example, if the largest possible value in your dataset is 1000 and 
all numbers are integers, store it as a 16-bit integer.
 Also, if you know the values can never become negative, store it as an 
unsigned 16-bit integer.
-For floating point types, if you know you won't need a precision of more than 
6 significant digits, use the 32-bit floating point type.
+For floating point types, if you know you will not need a precision of more 
than 6 significant digits, use the 32-bit floating point type.
 For more on the range (for integers) and precision (for floats), see 
@ref{Numeric data types}.
 @end cartouche
 
@@ -16044,7 +16040,7 @@ The reason this worked is that @mymath{125} is now 
converted into an unsigned 16
 Since this is larger than an 8-bit integer, the C programming language's 
automatic type conversion will treat both as the wider type and store the 
result of the binary operation (@code{+}) in that type.
 
 For such a basic operation like the command above, a faster hack would be
either of the two commands below (which are equivalent).
-This is because @code{125.0} or @code{125.} are interpreted as floating-point 
types and they don't suffer from such issues (converting only on one input is 
enough):
+This is because @code{125.0} or @code{125.} are interpreted as floating-point 
types and they do not suffer from such issues (converting only on one input is 
enough):
 
 @example
 $ astarithmetic 125.  10 +
@@ -16056,14 +16052,14 @@ This is because there are only two numbers, and the 
overhead of Arithmetic (read
 However, for large datasets, the @code{uint16} solution will be faster (as you 
saw above), Arithmetic will consume less RAM while running, and the output will 
consume less storage in your system (all major benefits)!
 
 It is possible to do internal checks in Gnuastro and catch integer overflows 
and correct them internally.
-However, we haven't opted for this solution because all those checks will 
consume significant resources and slow down the program (especially with large 
datasets where RAM, storage and running time become important).
+However, we have not opted for this solution because all those checks will 
consume significant resources and slow down the program (especially with large 
datasets where RAM, storage and running time become important).
 To be optimal, we therefore trust that you (the wise Gnuastro user!) make the 
appropriate type conversion in your commands where necessary (recall that the 
operators are available in @ref{Numerical type conversion operators}).
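For example, a sketch of the @code{uint16} solution mentioned above: the first
operand is explicitly converted to an unsigned 16-bit integer before the
addition, so the sum is computed in the wider type and no longer overflows the
8-bit type.

@example
$ astarithmetic 125 uint16 10 +
@end example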
 
 @node Arithmetic operators, Invoking astarithmetic, Integer benefits and 
pitfalls, Arithmetic
 @subsection Arithmetic operators
 
 In this section, the recognized operators in Arithmetic (and the Table
program's @ref{Column arithmetic}) are listed and discussed in detail with
examples.
-As mentioned before, to be able to easily do complex operations on the 
command-line, the Reverse Polish Notation is used (where you write 
`@mymath{4\quad5\quad+}' instead of `@mymath{4 + 5}'), if you aren't already 
familiar with it, before continuing, please see @ref{Reverse polish notation}.
+As mentioned before, to be able to easily do complex operations on the
command-line, the Reverse Polish Notation is used (where you write
`@mymath{4\quad5\quad+}' instead of `@mymath{4 + 5}'); if you are not already
familiar with it, please see @ref{Reverse polish notation} before continuing.
 
 The operands to all operators can be a data array (for example a FITS image or
data cube) or a number; the output will be an array or number according to the
inputs.
 For example a number multiplied by an array will produce an array.
@@ -16086,7 +16082,7 @@ $ astarithmetic input.fits nan eq 
--output=all-zeros.fits
 
 @noindent
 Note that on the command-line you can write NaN in any case (for example 
@command{NaN}, or @command{NAN} are also acceptable).
-Reading NaN as a floating point number in Gnuastro isn't case-sensitive.
+Reading NaN as a floating point number in Gnuastro is not case-sensitive.
 @end cartouche
 
 @menu
@@ -16204,7 +16200,7 @@ For example each pixel of the output of the command 
below will be the square roo
 $ astarithmetic image.fits sqrt
 @end example
 
-If you just want to scale an image with negative values using this operator 
(for better visual inspection, and the actual values don't matter for you), you 
can subtract the image from its minimum value, then take its square root:
+If you just want to scale an image with negative values using this operator
(for better visual inspection, and the actual values do not matter for you),
you can subtract its minimum value from the image, then take the square root:
 
 @example
 $ astarithmetic image.fits image.fits minvalue - sqrt -g1
@@ -16260,7 +16256,7 @@ This operator therefore needs two operands: the first 
popped operand is assumed
 
 For example see the commands below.
 To be more clear, we are using Table's @ref{Column arithmetic} which uses 
exactly the same internal library function as the Arithmetic program for images.
-We are showing the results for four points in the four quadrants of the 2D 
space (if you want to try running them, you don't need to type/copy the parts 
after @key{#}).
+We are showing the results for four points in the four quadrants of the 2D 
space (if you want to try running them, you do not need to type/copy the parts 
after @key{#}).
 The first point (2,2) is in the first quadrant, therefore the returned angle 
is 45 degrees.
 But the second, third and fourth points are (respectively) in the second,
third and fourth quadrants, and the returned angles reflect their quadrant.
 
@@ -16298,7 +16294,7 @@ These operators take a single operand.
 @subsubsection Constants
 @cindex @mymath{\pi}
 During your analysis it is often necessary to have certain constants like the 
number @mymath{\pi}.
-The ``operators'' in this section don't actually take any operand, they just 
replace the desired constant into the stack.
+The ``operators'' in this section do not actually take any operand, they just 
replace the desired constant into the stack.
 So in effect, these are actually operands.
 But since their value is not inserted by the user, we have placed them in the 
list of operators.
 
@@ -16375,7 +16371,7 @@ $ astarithmetic 0.1 22.5 counts-to-mag --quiet
 @end example
 
 Of course, you can also convert every pixel in an image (or table column in 
Table's @ref{Column arithmetic}) with this operator if you replace the second 
popped operand with an image/column name.
-For an example of applying this operator on an image, see the description of 
surface brightness in @ref{Brightness flux magnitude}, where we'll convert an 
image's pixel values to surface brightness.
+For an example of applying this operator on an image, see the description of 
surface brightness in @ref{Brightness flux magnitude}, where we will convert an 
image's pixel values to surface brightness.
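For instance, a sketch on a whole image (assuming a zero point of 22.5; each
output pixel will be the magnitude corresponding to the input pixel's counts):

@example
$ astarithmetic image.fits 22.5 counts-to-mag
@end example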
 
 @item mag-to-counts
 Convert magnitudes to counts (usually CCD outputs) using the given zero point.
@@ -16413,7 +16409,7 @@ See the description of @command{counts-to-sb} for more.
 Convert magnitudes to surface brightness over a certain area (in units of 
arcsec@mymath{^2}).
 The first popped operand is the area and the second is the magnitude.
 For example, let's assume you have a table with the two columns of magnitude 
(called @code{MAG}) and area (called @code{AREAARCSEC2}).
-In the command below, we'll use @ref{Column arithmetic} to return the surface 
brightness.
+In the command below, we will use @ref{Column arithmetic} to return the 
surface brightness.
 @example
 $ asttable table.fits -c'arith MAG AREAARCSEC2 mag-to-sb'
 @end example
@@ -16802,7 +16798,7 @@ Therefore, on the edge, less points will be used in 
calculating the mean.
 
 The final effect of mean filtering is to smooth the input image; it is
essentially a convolution with a kernel that has identical values for all its
pixels (a flat kernel), see @ref{Convolution process}.
 
-Note that blank pixels will also be affected by this operator: if there are 
any non-blank elements in the box surrounding a blank pixel, in the filtered 
image, it will have the mean of the non-blank elements, therefore it won't be 
blank any more.
+Note that blank pixels will also be affected by this operator: if there are
any non-blank elements in the box surrounding a blank pixel, in the filtered
image it will have the mean of the non-blank elements; therefore it will not
be blank any more.
 If blank elements are important for your analysis, you can use the 
@code{isblank} with the @code{where} operator to set them back to blank after 
filtering.
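For example, the sketch below (file names are only illustrative;
@file{filtered.fits} stands for the already-filtered image and
@file{input.fits} for the original) sets the pixels that were blank in the
original back to blank:

@example
$ astarithmetic filtered.fits input.fits isblank nan where -g1
@end example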
 
 @item filter-median
@@ -16918,7 +16914,7 @@ The popped images have to have the same number of 
pixels along the other dimensi
 The order of the stitching is defined by how they are placed in the 
command-line, not how they are popped (after being popped, they are placed in a 
list in the same order).
 @end itemize
 
-For example, in the commands below, we'll first crop out fixed sized regions 
of @mymath{100\times300} pixels of a larger image (@file{large.fits}) first.
+For example, in the commands below, we will first crop out fixed-sized regions
of @mymath{100\times300} pixels of a larger image (@file{large.fits}).
 In the first call of Arithmetic below, we will stitch the bottom set of crops 
together along the first (horizontal) axis.
 In the second Arithmetic call, we will stitch all 6 along both dimensions.
 
@@ -16980,7 +16976,7 @@ Similar to @option{collapse-sum}, but the returned 
dataset will be the mean valu
 @item collapse-number
 Similar to @option{collapse-sum}, but the returned dataset will be the number 
of non-blank values along the collapsed dimension.
 The output will have a 32-bit signed integer type.
-If the input dataset doesn't have blank values, all the elements in the 
returned dataset will have a single value (the length of the collapsed 
dimension).
+If the input dataset does not have blank values, all the elements in the 
returned dataset will have a single value (the length of the collapsed 
dimension).
 Therefore this is mostly relevant when there are blank values in the dataset.
 
 @item collapse-min
@@ -16999,7 +16995,7 @@ This operator currently only works for 2D input 
operands, please contact us if y
 The output's WCS (which should have a different dimensionality compared to the 
inputs) can be read from another file with the @option{--wcsfile} option.
 If no file is specified for the WCS, the first dataset's WCS will be used; you
can later add/change the necessary WCS keywords with the FITS keyword
modification features of the Fits program (see @ref{Fits}).
 
-If your datasets don't have the same type, you can use the type transformation 
operators of Arithmetic that are discussed below.
+If your datasets do not have the same type, you can use the type 
transformation operators of Arithmetic that are discussed below.
 Just beware of overflow if you are transforming to a smaller type, see 
@ref{Numeric data types}.
 
 For example, let's assume you have 3 two-dimensional images @file{a.fits}, 
@file{b.fits} and @file{c.fits} (each with @mymath{200\times100} pixels).
@@ -17136,7 +17132,7 @@ The 3rd and 2nd popped operands (@file{modify.fits} and 
@file{binary.fits} respe
 
 The 2nd popped operand (@file{binary.fits}) has to have @code{uint8} (or 
@code{unsigned char} in standard C) type (see @ref{Numeric data types}).
 It is treated as a binary dataset (with only two values: zero and non-zero, 
hence the name @code{binary.fits} in this example).
-However, commonly you won't be dealing with an actual FITS file of a 
condition/binary image.
+However, commonly you will not be dealing with an actual FITS file of a 
condition/binary image.
 You will probably define the condition in the same run based on some other 
reference image and use the conditional and logical operators above to make a 
true/false (or one/zero) image for you internally.
 For example the case below:
 
@@ -17237,7 +17233,7 @@ Afterwards, it will label all those that are connected.
 $ astarithmetic in.fits 100 gt 2 connected-components
 @end example
 
-If your input dataset doesn't have a binary type, but you know all its values 
are 0 or 1, you can use the @code{uint8} operator (below) to convert it to 
binary.
+If your input dataset does not have a binary type, but you know all its values 
are 0 or 1, you can use the @code{uint8} operator (below) to convert it to 
binary.
 
 @item fill-holes
 Flip background (0) pixels surrounded by foreground (1) in a binary dataset.
@@ -17248,7 +17244,7 @@ $ astarithmetic binary.fits 2 fill-holes
 @end example
 
 @item invert
-Invert an unsigned integer dataset (won't work on other data types, see 
@ref{Numeric data types}).
+Invert an unsigned integer dataset (will not work on other data types, see 
@ref{Numeric data types}).
 This is the only operator that ignores blank values (which are set to be the 
maximum values in the unsigned integer types).
 
 This is useful in cases where the target(s) has(have) been imaged in 
absorption as raw formats (which are unsigned integer types).
@@ -17396,7 +17392,7 @@ The operators in this section are designed for that 
purpose.
 Add a fixed noise (Gaussian standard deviation) to each element of the input 
dataset.
 This operator takes two arguments: the top/first popped operand is the noise 
standard deviation, the next popped operand is the dataset that the noise 
should be added to.
 
-When @option{--quiet} isn't given, a statement will be printed on each 
invocation of this operator (if there are multiple calls to the 
@code{mknoise-*}, the statement will be printed multiple times).
+When @option{--quiet} is not given, a statement will be printed on each 
invocation of this operator (if there are multiple calls to the 
@code{mknoise-*}, the statement will be printed multiple times).
 It will show the random number generator function and seed that was used in 
that invocation, see @ref{Generating random numbers}.
 Reproducibility of the outputs can be ensured with the @option{--envseed} 
option, see below for more.
 
@@ -17415,7 +17411,7 @@ $ echo $value
 @end example
 
 You can also use this operator in combination with AWK to easily generate an 
arbitrarily large table with random columns.
-In the example below, we'll create a two column table with 20 rows.
+In the example below, we will create a two column table with 20 rows.
 The first column will be centered on 5 and @mymath{\sigma_1=2}, the second 
will be centered on 10 and @mymath{\sigma_2=3}:
 
 @example
@@ -17454,7 +17450,7 @@ For example @code{astfits image.fits --pixelscale}.
 Except for the noise-model, this operator is very similar to 
@code{mknoise-sigma} and the examples there apply here too.
 The main difference with @code{mknoise-sigma} is that in a Poisson 
distribution the scatter/sigma will depend on each element's value.
 
-For example, let's assume you have made a mock image called @file{mock.fits} 
with @ref{MakeProfiles} and its assumed zero point is 22.5 (for more on the 
zero point, see @ref{Brightness flux magnitude}).
+For example, let's assume you have made a mock image called @file{mock.fits} with @ref{MakeProfiles} and its assumed zero point is 22.5 (for more on the zero point, see @ref{Brightness flux magnitude}).
 Let's assume the background level for the Poisson noise has a value of 19 
magnitudes.
 You can first use the @code{mag-to-counts} operator to convert this background 
magnitude into counts, then feed the background value in counts to 
@code{mknoise-poisson} operator:
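+A minimal sketch of those two steps in a single command (assuming the
+numbers above, and that @code{mag-to-counts} pops the zero point first):
+
+@example
+$ astarithmetic mock.fits 19 22.5 mag-to-counts \
+                mknoise-poisson --output=mock-noised.fits
+@end example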
 
@@ -17488,13 +17484,13 @@ This operator takes three operands:
 @item
 The first popped operand (nearest to the operator) is the histogram values.
 The histogram is a 1-dimensional dataset (a table column) and contains the 
probability of obtaining a certain interval of values.
-The histogram doesn't have to be normalized: the GNU Scientific Library (or 
GSL, which is used by Gnuastro for this operator), will normalize it internally.
+The histogram does not have to be normalized: the GNU Scientific Library (or GSL, which is used by Gnuastro for this operator) will normalize it internally.
 The value of each bin (whose probability is given in the histogram) is given 
in the second popped operand.
 Therefore these two operands have to have the same number of rows.
 @item
 The second popped operand is the bin value (mostly the bin center, but it can 
be anything).
 The probability of each bin is defined in the histogram operand (first popped 
operand).
-The bins can have any width (don't have to be evenly spaced), and any order.
+The bins can have any width (do not have to be evenly spaced), and any order.
 Just make sure that the same row in the bins column corresponds to the same 
row in the histogram: the number of rows in the bins and histogram must be 
equal.
 @item
 The third popped operand is the dataset that the random values should be 
written over.
@@ -17554,7 +17550,7 @@ $ echo "" | awk '@{for(i=1;i<5;++i) print 20+i, i*i@}' \
                 > histogram.txt
 @end example
 
-If you don't want the outputs to have exactly the value of the bin identifier, 
but be a randomly selected value from a uniform distribution within the bin, 
you should use @command{random-from-hist} (see below).
+If you do not want the outputs to have exactly the value of the bin 
identifier, but be a randomly selected value from a uniform distribution within 
the bin, you should use @command{random-from-hist} (see below).
 
 As mentioned above, the output will have a double-precision floating point 
type (see @ref{Numeric data types}).
 Therefore, by default each element of the output will consume 8 bytes 
(64-bits) of storage.
@@ -17585,7 +17581,7 @@ This difference can be felt much better for larger 
(real-world) datasets, so be
 
 
 @item random-from-hist
-Similar to @code{random-from-hist-raw}, but don't return the exact bin value, 
instead return a random value from a uniform distribution within each bin.
+Similar to @code{random-from-hist-raw}, but do not return the exact bin value, 
instead return a random value from a uniform distribution within each bin.
 Therefore the following limitations have to be taken into account (compared to 
@code{random-from-hist-raw}):
 @itemize
 @item
@@ -17597,7 +17593,7 @@ The bin widths (distance from one bin to another) have 
to be fixed.
 @end itemize
 
 For a demonstration, let's replace @code{random-from-hist-raw} with 
@code{random-from-hist} in the example of the description of 
@code{random-from-hist-raw}.
-Note how we are manually converting the output of this operator into 
single-precision floating point (32-bit, since the default 64-bit precision is 
statistically meaningless in this scenario and we don't want to waste storage, 
memory and running time):
+Note how we are manually converting the output of this operator into 
single-precision floating point (32-bit, since the default 64-bit precision is 
statistically meaningless in this scenario and we do not want to waste storage, 
memory and running time):
 @example
 $ echo "" | awk '@{for(i=1;i<5;++i) print i, i*i@}' \
                 > histogram.txt
@@ -17636,7 +17632,7 @@ Let's further assume that you want to distribute them 
uniformly over an image of
 So your desired output table should have three columns: the first two are the pixel positions of each star, and the third is the magnitude.
 
 First, we need to query the Gaia database and ask for all the magnitudes in 
this region of the sky.
-We know that Gaia is not complete for stars fainter than the 20th magnitude, 
so we'll use the @option{--range} option and only ask for those stars that are 
brighter than magnitude 20.
+We know that Gaia is not complete for stars fainter than the 20th magnitude, 
so we will use the @option{--range} option and only ask for those stars that 
are brighter than magnitude 20.
 
 @example
 $ astquery gaia --dataset=dr3 --center=1.23,3.45 --radius=2 \
@@ -17691,7 +17687,7 @@ $ echo "10 4 20" \
 Alternatively if your three values are in separate FITS arrays/images, you can 
use the command below to have the width and height in similarly sized fits 
arrays.
 In this example @file{a.fits} and @file{b.fits} are respectively the major and 
minor axis lengths and @file{pa.fits} is the position angle (in degrees).
 Also, in all three, we assume the first extension is used.
-After its done, the height of the box will be put in @file{h.fits} and the 
width will be in @file{w.fits}.
+After it is done, the height of the box will be put in @file{h.fits} and the 
width will be in @file{w.fits}.
 Just note that because this operator has two output datasets, you need to 
first write the height (top output operand) into a file and free it with the 
@code{tofilefree-} operator, then write the width in the file given to 
@option{--output}.
 
 @example
@@ -17784,7 +17780,7 @@ But as you get more proficient in the reverse polish 
notation, you will find you
 This greatly speeds up your operation, because instead of writing the dataset 
to a file in one command, and reading it in the next command, it will just keep 
the intermediate dataset in memory!
 
 But adding more complexity to your operations can make them much harder to debug, or extend even further.
-Therefore in this section we have some special operators that behave 
differently from the rest: they don't touch the contents of the data, only 
where/how they are stored.
+Therefore in this section we have some special operators that behave 
differently from the rest: they do not touch the contents of the data, only 
where/how they are stored.
 They are designed to do complex operations, without necessarily having a 
complex command.
 
 @table @command
@@ -17794,7 +17790,7 @@ The named dataset will be freed from memory as soon as 
it is no longer needed, o
 This operator thus enables re-usability of a dataset without having to re-read 
it from a file every time it is necessary during a process.
 When a dataset is necessary more than once, this operator can thus help 
simplify reading/writing on the command-line (thus avoiding potential bugs), 
while also speeding up the processing.
 
-Like all operators, this operator pops the top operand off of the main 
processing stack, but unlike other operands, it won't add anything back to the 
stack immediately.
+Like all operators, this operator pops the top operand off of the main processing stack, but unlike other operators, it will not add anything back to the stack immediately.
 It will keep the popped dataset in memory through a separate list of named 
datasets (not on the main stack).
 That list will be used to add/copy any requested dataset to the main 
processing stack when the name is called.
 
@@ -17807,7 +17803,7 @@ $ astarithmetic image.fits set-a a 2 x
 @end example
 
 The name can be any string, but avoid strings ending with standard filename 
suffixes (for example @file{.fits})@footnote{A dataset name like @file{a.fits} 
(which can be set with @command{set-a.fits}) will cause confusion in the 
initial parser of Arithmetic.
-It will assume this name is a FITS file, and if it is used multiple times, 
Arithmetic will abort, complaining that you haven't provided enough HDUs.}.
+It will assume this name is a FITS file, and if it is used multiple times, 
Arithmetic will abort, complaining that you have not provided enough HDUs.}.
 
 One example of the usefulness of this operator is in the @code{where} operator.
 For example, let's assume you want to mask all pixels larger than @code{5} in 
@file{image.fits} (extension number 1) with a NaN value.
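+A minimal sketch of this scenario (the name @code{i} is arbitrary; with
+@code{set-i}, @file{image.fits} is only read once):
+
+@example
+$ astarithmetic image.fits -h1 set-i i i 5 gt nan where
+@end example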
@@ -17835,14 +17831,14 @@ $ astarithmetic image.fits 20 repeat 20 
add-dimension-slow
 
 @item tofile-AAA
 Write the top operand on the operands stack into a file called @code{AAA} (can 
be any FITS file name) without changing the operands stack.
-If you don't need the dataset any more and would like to free it, see the 
@code{tofilefree} operator below.
+If you do not need the dataset any more and would like to free it, see the 
@code{tofilefree} operator below.
 
 By default, any file that is given to this operator is deleted before 
Arithmetic actually starts working on the input datasets.
 The deletion can be deactivated with the @option{--dontdelete} option (as in 
all Gnuastro programs, see @ref{Input output options}).
 If the same FITS file is given to this operator multiple times, it will contain multiple extensions (in the same order that it was called).
 
 For example the operator @command{tofile-check.fits} will write the top 
operand to @file{check.fits}.
-Since it doesn't modify the operands stack, this operator is very convenient 
when you want to debug, or understanding, a string of operators and operands 
given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to 
see what is happening behind the scenes without modifying the overall process.
+Since it does not modify the operands stack, this operator is very convenient when you want to debug, or understand, a string of operators and operands given to Arithmetic: simply put @command{tofile-AAA} anywhere in the process to see what is happening behind the scenes without modifying the overall process.
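+As a concrete sketch (with hypothetical file names), the command below
+saves the intermediate result of @command{2 x} into @file{check.fits},
+while the final result of the full command still goes to
+@file{out.fits}:
+
+@example
+$ astarithmetic image.fits 2 x tofile-check.fits 3 + -oout.fits
+@end example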
 
 @item tofilefree-AAA
 Similar to the @code{tofile} operator, with the only difference that the 
dataset that is written to a file is popped from the operand stack and freed 
from memory (cannot be used any more).
@@ -17893,17 +17889,17 @@ $ astarithmetic img1.fits img2.fits img3.fits median  
              \
 
 Arithmetic's notation for giving operands to operators is fully described in 
@ref{Reverse polish notation}.
 The output dataset is the last remaining operand on the stack.
-When the output dataset a single number, and @option{--output} isn't called, 
it will be printed on the standard output (command-line).
+When the output dataset is a single number, and @option{--output} is not called, it will be printed on the standard output (command-line).
 When the output is an array, it will be stored as a file.
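+For example, when only numbers are given, the single remaining operand
+is simply printed:
+
+@example
+$ astarithmetic 2 3 + 4 x
+20
+@end example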
 
-The name of the final file can be specified with the @option{--output} option, 
but if its not given (and the output dataset has more than one element), 
Arithmetic will use ``automatic output'' on the name of the first FITS image 
encountered to generate an output file name, see @ref{Automatic output}.
+The name of the final file can be specified with the @option{--output} option, 
but if it is not given (and the output dataset has more than one element), 
Arithmetic will use ``automatic output'' on the name of the first FITS image 
encountered to generate an output file name, see @ref{Automatic output}.
 By default, if the output file already exists, it will be deleted before 
Arithmetic starts operation.
 However, this can be disabled with the @option{--dontdelete} option (see 
below).
 At any point during Arithmetic's operation, you can also write the top operand 
on the stack to a file, using the @code{tofile} or @code{tofilefree} operators, 
see @ref{Arithmetic operators}.
 
 By default, the world coordinate system (WCS) information of the output 
dataset will be taken from the first input image (that contains a WCS) on the 
command-line.
 This can be modified with the @option{--wcsfile} and @option{--wcshdu} options 
described below.
-When the @option{--quiet} option isn't given, the name and extension of the 
dataset used for the output's WCS is printed on the command-line.
+When the @option{--quiet} option is not given, the name and extension of the 
dataset used for the output's WCS is printed on the command-line.
 
 Through operators like those starting with @code{collapse-}, the 
dimensionality of the inputs may not be the same as the outputs.
 By default, when the output is 1D, Arithmetic will write it as a table, not an 
image/array.
@@ -17941,7 +17937,7 @@ FITS Filename containing the WCS structure that must be 
written to the output.
 The HDU/extension should be specified with @option{--wcshdu}.
 
 When this option is used, the respective WCS will be read before any 
processing is done on the command-line and directly used in the final output.
-If the given file doesn't have any WCS, then the default WCS (first file on 
the command-line with WCS) will be used in the output.
+If the given file does not have any WCS, then the default WCS (first file on 
the command-line with WCS) will be used in the output.
 
 This option will mostly be used when the default file (first of the set of 
inputs) is not the one containing your desired WCS.
 But with this option, you can also use Arithmetic to rewrite/change the WCS of 
an existing FITS dataset from another file:
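+A minimal sketch (with hypothetical file names; the multiplication by 1
+leaves the pixel values untouched):
+
+@example
+$ astarithmetic data.fits 1 x --wcsfile=reference.fits \
+                --output=data-newwcs.fits
+@end example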
@@ -17965,7 +17961,7 @@ Metadata (name) of the output dataset.
 For a FITS image or table, the string given to this option is written in the 
@code{EXTNAME} or @code{TTYPE1} keyword (respectively).
 
 If this keyword is present in a FITS extension, it will be printed in the 
table output of a command like @command{astfits image.fits} (for images) or 
@command{asttable table.fits -i} (for tables).
-This metadata can be very helpful for yourself in the future (when you have 
forgotten the details), so its recommended to use this option for files that 
should be archived or shared with colleagues.
+This metadata can be very helpful for yourself in the future (when you have 
forgotten the details), so it is recommended to use this option for files that 
should be archived or shared with colleagues.
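+For example, a minimal sketch (assuming this is Arithmetic's
+@option{--metaname} option):
+
+@example
+$ astarithmetic image.fits 2 x --metaname=DOUBLED \
+                --output=doubled.fits
+@end example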
 
 @item -u STR
 @itemx --metaunit=STR
@@ -17995,9 +17991,9 @@ If the output has more than one dimension, this option 
is redundant.
 
 @item -D
 @itemx --dontdelete
-Don't delete the output file, or files given to the @code{tofile} or 
@code{tofilefree} operators, if they already exist.
+Do not delete the output file, or files given to the @code{tofile} or @code{tofilefree} operators, if they already exist.
 Instead append the desired datasets to the extensions that already exist in 
the respective file.
-Note it doesn't matter if the final output file name is given with the 
@option{--output} option, or determined automatically.
+Note it does not matter if the final output file name is given with the 
@option{--output} option, or determined automatically.
 
 Arithmetic treats this option differently from its default operation in other 
Gnuastro programs (see @ref{Input output options}).
 If the output file exists, when other Gnuastro programs are called with 
@option{--dontdelete}, they simply complain and abort.
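+As a minimal sketch, the second command below will append its result as
+a new extension of the @file{out.fits} created by the first:
+
+@example
+$ astarithmetic image.fits 2 x --output=out.fits
+$ astarithmetic image.fits 3 x --output=out.fits --dontdelete
+@end example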
@@ -18036,7 +18032,7 @@ The hyphen (@command{-}) can be used both to specify 
options (see @ref{Options})
 In order to enable you to do this, Arithmetic will first parse all the input 
strings and if the first character after a hyphen is a digit, then that hyphen 
is temporarily replaced by the vertical tab character which is not commonly 
used.
 The arguments are then parsed and these strings will not be specified as an 
option.
 Then the given arguments are parsed and any vertical tabs are replaced back 
with a hyphen so they can be read as negative numbers.
-Therefore, as long as the names of the files you want to work on, don't start 
with a vertical tab followed by a digit, there is no problem.
+Therefore, as long as the names of the files you want to work on do not start with a vertical tab followed by a digit, there is no problem.
 An important consequence of this implementation is that you should not write 
negative fractions like this: @command{-.3}, instead write them as 
@command{-0.3}.
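+For example, the minimal sketch below multiplies the image by negative
+0.3 without any option-parsing confusion:
+
+@example
+$ astarithmetic image.fits -0.3 x
+@end example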
 
 @cindex AWK
@@ -18114,12 +18110,9 @@ However this text is written for an understanding on 
the operations that are don
 
 The pixels in an input image represent different ``spatial'' positions, 
therefore when convolution is done only using the actual input pixel values, we 
name the process as being done in the ``Spatial domain''.
 In particular this is in contrast to the ``frequency domain'' that we will 
discuss later in @ref{Frequency domain and Fourier operations}.
-In the spatial domain (and in realistic situations where the image and the 
convolution kernel don't extend to infinity), convolution is the process of 
changing the value of one pixel to the @emph{weighted} average of all the 
pixels in its @emph{neighborhood}.
+In the spatial domain (and in realistic situations where the image and the 
convolution kernel do not extend to infinity), convolution is the process of 
changing the value of one pixel to the @emph{weighted} average of all the 
pixels in its @emph{neighborhood}.
 
-The `neighborhood' of each pixel (how many pixels in which direction) and
-the `weight' function (how much each neighboring pixel should contribute
-depending on its position) are given through a second image which is known
-as a ``kernel''@footnote{Also known as filter, here we will use `kernel'.}.
+The `neighborhood' of each pixel (how many pixels in which direction) and the 
`weight' function (how much each neighboring pixel should contribute depending 
on its position) are given through a second image which is known as a 
``kernel''@footnote{Also known as filter, here we will use `kernel'.}.
 
 @menu
 * Convolution process::         More basic explanations.
@@ -18148,7 +18141,7 @@ When the kernel is symmetric about its center the 
blurred image has the same ori
 However, if the kernel is not symmetric, the image will be affected in the opposite manner; this is a natural consequence of the definition of spatial filtering.
 In order to avoid this we can rotate the kernel about its center by 180 
degrees so the convolved output can have the same original orientation.
 Technically speaking, only if the kernel is flipped is the process known as @emph{Convolution}.
-If it isn't it is known as @emph{Correlation}.
+If it is not, it is known as @emph{Correlation}.
 
 To be a weighted average, the sum of the weights (the pixels in the kernel) has to be unity.
 This will have the consequence that the convolved image of an object and 
unconvolved object will have the same brightness (see @ref{Brightness flux 
magnitude}), which is natural, because convolution should not eat up the object 
photons, it only disperses them.
@@ -18187,7 +18180,7 @@ In making mock images we want to simulate a real 
observation.
 In a real observation the images of the galaxies on the sides of the CCD are 
first blurred by the atmosphere and instrument, then imaged.
 So light from the parts of a galaxy which are immediately outside the CCD will 
affect the parts of the galaxy which are covered by the CCD.
 Therefore in modeling the observation, we have to convolve an image that is 
larger than the input image by exactly half of the convolution kernel.
-We can hence conclude that this correction for the edges is only useful when 
working on actual observed images (where we don't have any more data on the 
edges) and not in modeling.
+We can hence conclude that this correction for the edges is only useful when 
working on actual observed images (where we do not have any more data on the 
edges) and not in modeling.
 
 
 
@@ -18235,7 +18228,7 @@ The main reason we know this process by his name today 
is that he came up with a
 
 @float Figure,epicycle
 
-@c Since these links are long, we had to write them like this so they don't
+@c Since these links are long, we had to write them like this so they do not
 @c jump out of the text width.
 @cindex Qutb al-Din al-Shirazi
 @cindex al-Shirazi, Qutb al-Din
@@ -18258,7 +18251,7 @@ Eventually, as observations became more and more 
precise, it was necessary to ad
 
 @cindex Aristarchus of Samos
 Of course we now know that if they had abdicated the Earth from its throne in 
the center of the heavens and allowed the Sun to take its place, everything 
would become much simpler and true.
-But there wasn't enough observational evidence for changing the ``professional 
consensus'' of the time to this radical view suggested by a small 
minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be one of 
the first people to suggest the Sun being in the center of the universe.
+But there was not enough observational evidence for changing the 
``professional consensus'' of the time to this radical view suggested by a 
small minority@footnote{Aristarchus of Samos (310 -- 230 B.C.) appears to be 
one of the first people to suggest the Sun being in the center of the universe.
 This approach to science (that the standard model is defined by consensus) and 
the fact that this consensus might be completely wrong still applies equally 
well to our models of particle physics and cosmology today.}.
 So the pre-Galilean astronomers chose to keep Earth in the center and find a 
correction to the models (while keeping the heavens a purely ``circular'' 
order).
 
@@ -18846,7 +18839,7 @@ Will be much faster when the image and kernel are both 
large.
 
 @noindent
 As a general rule of thumb, when working on an image of modeled profiles use 
the frequency domain and when working on an image of real (observed) objects 
use the spatial domain (corrected for the edges).
-The reason is that if you apply a frequency domain convolution to a real 
image, you are going to loose information on the edges and generally you don't 
want large kernels.
+The reason is that if you apply a frequency domain convolution to a real image, you are going to lose information on the edges and generally you do not want large kernels.
 But when you have made the profiles in the image yourself, you can just make a 
larger input image and crop the central parts to completely remove the edge 
effect, see @ref{If convolving afterwards}.
 Also due to oversampling, both the kernels and the images can become very 
large and the speed boost of frequency domain convolution will significantly 
improve the processing time, see @ref{Oversampling}.
 
@@ -18867,7 +18860,7 @@ By default MakeProfiles will make the Gaussian and 
Moffat profiles in a separate
 @item
 ConvertType: You can write your own desired kernel into a text file table and 
convert it to a FITS file with ConvertType, see @ref{ConvertType}.
 Just be careful that the kernel has to have an odd number of pixels along its 
two axes, see @ref{Convolution process}.
-All the programs that do convolution will normalize the kernel internally, so 
if you choose this option, you don't have to worry about normalizing the kernel.
+All the programs that do convolution will normalize the kernel internally, so 
if you choose this option, you do not have to worry about normalizing the 
kernel.
 Only within Convolve, there is an option to disable normalization, see 
@ref{Invoking astconvolve}.
 
 @end itemize
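+As a minimal sketch of the MakeProfiles route above, the command below
+should build a FITS kernel image from a Gaussian of FWHM 2 pixels,
+truncated at 5 times the FWHM (see @ref{MakeProfiles} for the details
+of its @option{--kernel} option):
+
+@example
+$ astmkprof --kernel=gaussian,2,5 --oversample=1
+@end example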
@@ -19379,7 +19372,7 @@ The full list of modular warpings and the other options 
particular to Warp are d
 The values to the warping options (modular warpings as well as 
@option{--matrix}), are a sequence of at least one number.
 Each number in this sequence is separated from the next by a comma (@key{,}).
 Each number can also be written as a single fraction (with a forward-slash 
@key{/} between the numerator and denominator).
-Space and Tab characters are permitted between any two numbers, just don't 
forget to quote the whole value.
+Space and Tab characters are permitted between any two numbers, just do not 
forget to quote the whole value.
 Otherwise, the value will not be fully passed onto the option.
 See the examples above as a demonstration.
 
@@ -19415,7 +19408,7 @@ If you want to rotate about other points, you have to 
translate the warping cent
 Scale the input image by the given factor(s): @mymath{M} and @mymath{N} in 
@ref{Warping basics}.
 If only one value is given, then both image axes will be scaled with the given 
value.
 When two values are given (separated by a comma), the first will be used to 
scale the first axis and the second will be used for the second axis.
-If you only need to scale one axis, use @option{1} for the axis you don't need 
to scale.
+If you only need to scale one axis, use @option{1} for the axis you do not 
need to scale.
 The value(s) can also be written (on the command-line or in configuration 
files) as a fraction.
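+For example, the minimal sketch below only scales the first axis (by a
+fraction), leaving the second axis untouched:
+
+@example
+$ astwarp image.fits --scale=1/3,1
+@end example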
 
 @item -f FLT[,FLT]
@@ -19671,8 +19664,8 @@ The second one (@code{FILE.txt}) should be replaced 
with the name of the file ge
 \begin@{document@}
 
   You can write a full paper here and include many figures!
-  Describe what the two axises are, and how you measured them.
-  Also, don't forget to explain what it shows and how to interpret it.
+  Describe what the two axes are, and how you measured them.
+  Also, do not forget to explain what it shows and how to interpret it.
   You also have separate PDFs for every figure in the `tikz' directory.
   Feel free to change this text.
 
@@ -19730,7 +19723,7 @@ When called with the @option{--histogram=image} option, 
Statistics will output a
 If you asked for @option{--numbins=N} and @option{--numbins2=M} the image will 
have a size of @mymath{N\times M} pixels (one pixel per 2D bin).
 Also, the FITS image will have a linear WCS that is scaled to the 2D bin size 
along each dimension.
 So when you hover your mouse over any part of the image with a FITS viewer 
(for example SAO DS9), besides the number of points in each pixel, you can 
directly also see ``coordinates'' of the pixels along the two axes.
-You can also use the optimized and fast FITS viewer features for many aspects 
of visually inspecting the distributions (which we won't go into further).
+You can also use the optimized and fast FITS viewer features for many aspects 
of visually inspecting the distributions (which we will not go into further).
 
 @cindex Color-magnitude diagram
 @cindex Diagram, Color-magnitude
@@ -19743,7 +19736,7 @@ $ asttable uvudf_rafelski_2015.fits.gz -i
 @end example
 
 Let's assume you want to find the color between the @code{F606W} and @code{F775W} filters (roughly corresponding to the g and r filters in ground-based imaging).
-However, the original table doesn't have color columns (there would be too 
many combinations!).
+However, the original table does not have color columns (there would be too 
many combinations!).
 Therefore you can use the @ref{Column arithmetic} feature of Gnuastro's Table 
program for deriving a new table with the @code{F775W} magnitude in one column 
and the difference between the @code{F606W} and @code{F775W} on the other 
column.
 With the second command, you can see the actual values if you like.
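+A minimal sketch of such a command (assuming the relevant columns of
+this catalog are called @code{MAG_F775W} and @code{MAG_F606W}):
+
+@example
+$ asttable uvudf_rafelski_2015.fits.gz -cMAG_F775W \
+           -c'arith MAG_F606W MAG_F775W -' --output=cmd.fits
+@end example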
 
@@ -19775,7 +19768,7 @@ $ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit
 
 @noindent
 With the first command below, you can activate the grid feature of DS9 to 
actually see the coordinate grid, as well as values on each line.
-With the second command, DS9 will even read the labels of the axises and use 
them to generate an almost publication-ready plot.
+With the second command, DS9 will even read the labels of the axes and use 
them to generate an almost publication-ready plot.
 
 @example
 $ ds9 cmd-2d-hist.fits -cmap sls -zoom to fit -grid yes
@@ -19793,9 +19786,9 @@ $ ds9 cmd-2d-hist.fits -cmap sls -zoom 4 -grid yes \
 @cindex PGFPlots (@LaTeX{} package)
 This is good for a fast progress update.
 But for your paper or more official report, you want to show something with 
higher quality.
-For that, you can use the PGFPlots package in @LaTeX{} to add axises in the 
same font as your text, sharp grids and many other elegant/powerful features 
(like over-plotting interesting points, lines and etc).
+For that, you can use the PGFPlots package in @LaTeX{} to add axes in the same font as your text, sharp grids and many other elegant/powerful features (like over-plotting interesting points, lines, etc.).
 But to load the 2D histogram into PGFPlots first you need to convert the FITS 
image into a more standard format, for example PDF.
-We'll use Gnuastro's @ref{ConvertType} for this, and use the 
@code{sls-inverse} color map (which will map the pixels with a value of zero to 
white):
+We will use Gnuastro's @ref{ConvertType} for this, and use the @code{sls-inverse} color map (which will map the pixels with a value of zero to white):
 
 @example
 $ astconvertt cmd-2d-hist.fits --colormap=sls-inverse \
@@ -20064,7 +20057,7 @@ In astronomical images, signal is mostly photons coming 
of a star or galaxy, and
 Noise is mainly due to counting errors in the detector electronics upon data 
collection.
 @emph{Data} is defined as the combination of signal and noise (so a noisy 
image of a galaxy is one @emph{data}set).
 
-When a dataset doesn't have any signal (for example you take an image with a 
closed shutter, producing an image that only contains noise), the mean, median 
and mode of the distribution are equal within statistical errors.
+When a dataset does not have any signal (for example you take an image with a 
closed shutter, producing an image that only contains noise), the mean, median 
and mode of the distribution are equal within statistical errors.
 Signal from emitting objects, like astronomical targets, always has a positive 
value and will never become negative, see Figure 1 in 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa (2015)}.
 Therefore, when signal is added to the data (you take an image with an open 
shutter pointing to a galaxy for example), the mean, median and mode of the 
dataset shift to the positive, creating a positively skewed distribution.
 The shift of the mean is the largest.
@@ -20079,7 +20072,7 @@ In other words: when the quantile of the mean 
(@mymath{q_{mean}}) is around 0.5.
 This definition of skewness through the quantile of the mean is further 
introduced with a real image the tutorials, see @ref{Skewness caused by signal 
and its measurement}.
 
 @cindex Signal-to-noise ratio
-However, in an astronomical image, some of the pixels will contain more signal 
than the rest, so we can't simply check @mymath{q_{mean}} on the whole dataset.
+However, in an astronomical image, some of the pixels will contain more signal 
than the rest, so we cannot simply check @mymath{q_{mean}} on the whole dataset.
 For example if we only look at the patch of pixels that are placed under the 
central parts of the brightest stars in the field of view, @mymath{q_{mean}} 
will be very high.
 The signal in other parts of the image will be weaker, and in some parts it 
will be much smaller than the noise (for example 1/100-th of the noise level).
 When the signal-to-noise ratio is very small, we can generally assume no signal (because it is effectively impossible to measure it) and @mymath{q_{mean}} will be approximately 0.5.
@@ -20099,8 +20092,8 @@ Therefore, to obtain an even better measure of the 
presence of signal in a tile,
 
 @cindex Cosmic rays
 There is one final hurdle: raw astronomical datasets are commonly peppered 
with Cosmic rays.
-Images of Cosmic rays aren't smoothed by the atmosphere or telescope aperture, 
so they have sharp boundaries.
-Also, since they don't occupy too many pixels, they don't affect the mode and 
median calculation.
+Images of Cosmic rays are not smoothed by the atmosphere or telescope 
aperture, so they have sharp boundaries.
+Also, since they do not occupy too many pixels, they do not affect the mode 
and median calculation.
 But their very high values can greatly bias the calculation of the mean 
(recall how the mean shifts the fastest in the presence of outliers), for 
example see Figure 15 in @url{https://arxiv.org/abs/1505.01664, Akhlaghi and 
Ichikawa (2015)}.
 The effect of outliers like cosmic rays on the mean and standard deviation can 
be removed through @mymath{\sigma}-clipping, see @ref{Sigma clipping} for a 
complete explanation.
 
@@ -20115,20 +20108,20 @@ In some cases, the gradient over these wings can be 
on scales that is larger tha
 
 In such cases, the @mymath{q_{mean}} test will be successful, even though 
there is signal.
 Recall that @mymath{q_{mean}} is a measure of skewness.
-If we don't identify (and thus set to NaN) such outlier tiles before the 
interpolation, the photons of the outskirts of the objects will leak into the 
detection thresholds or Sky and Sky STD measurements and bias our result, see 
@ref{Detecting large extended targets}.
+If we do not identify (and thus set to NaN) such outlier tiles before the 
interpolation, the photons of the outskirts of the objects will leak into the 
detection thresholds or Sky and Sky STD measurements and bias our result, see 
@ref{Detecting large extended targets}.
 Therefore, the final step of ``quantifying signal in a tile'' is to look at 
this distribution of successful tiles and remove the outliers.
 @mymath{\sigma}-clipping is a good solution for removing a few outliers, but 
the problem with outliers of this kind is that there may be many such tiles 
(depending on the large/bright stars/galaxies in the image).
 We therefore apply the following local outlier rejection strategy.
 
 For each tile, we find the nearest @mymath{N_{ngb}} tiles that had a usable 
value (@mymath{N_{ngb}} is the value given to  @option{--interpnumngb}).
-We then sort them and find the difference between the largest and 
second-to-smallest elements (The minimum isn't used because the scatter can be 
large).
+We then sort them and find the difference between the largest and second-to-smallest elements (the minimum is not used because the scatter can be large).
 Let's call this the tile's @emph{slope} (measured from its neighbors).
 All the tiles that are on a region of flat noise will have similar slope 
values, but if a few tiles fall on the wings of a bright star or large galaxy, 
their slope will be significantly larger than the tiles with no signal.
 We just have to find the smallest tile slope value that is an outlier compared 
to the rest, and reject all tiles with a slope larger than that.
 
 @cindex Outliers
 @cindex Identifying outliers
-To identify the smallest outlier, we'll use the distribution of distances 
between sorted elements.
+To identify the smallest outlier, we will use the distribution of distances 
between sorted elements.
 Let's assume the total number of tiles with a good mean-median quantile difference is @mymath{N}.
 We sort them in increasing order based on their @emph{slope} (defined above).
 So the array of tile slopes can be written as @mymath{@{s[0], s[1], ..., 
s[N]@}}, where @mymath{s[0]<s[1]} and so on.
@@ -20153,7 +20146,7 @@ Since @mymath{i} begins from the @mymath{N/3}-th 
element in the sorted array (a
 @cindex Nearest-neighbor interpolation
 @cindex Interpolation, nearest-neighbor
 Once the outlying tiles have been successfully identified and set to NaN, we 
use nearest-neighbor interpolation to give a value to all tiles in the image.
-We don't use parametric interpolation methods (like bicubic), because they 
will effectively extrapolate on the edges, creating strong artifacts.
+We do not use parametric interpolation methods (like bicubic), because they 
will effectively extrapolate on the edges, creating strong artifacts.
 Nearest-neighbor interpolation is very simple: for each tile, we find the @mymath{N_{ngb}} nearest tiles that had a good value; the tile's value is found by estimating the median.
 You can set @mymath{N_{ngb}} through the @option{--interpnumngb} option.
 Once all the tiles are given a value, a smoothing step is implemented to 
remove the sharp value contrast that can happen on the edges of tiles.
@@ -20308,9 +20301,9 @@ Histogram:
  |-----------------------------------------------------------------
 @end example
 
-Gnuastro's Statistics is a very general purpose program, so to be able to 
easily understand this diversity in its operations (and how to possibly run 
them together), we'll divided the operations into two types: those that don't 
respect the position of the elements and those that do (by tessellating the 
input on a tile grid, see @ref{Tessellation}).
+Gnuastro's Statistics is a very general purpose program, so to be able to easily understand this diversity in its operations (and how to possibly run them together), we will divide the operations into two types: those that do not respect the position of the elements and those that do (by tessellating the input on a tile grid, see @ref{Tessellation}).
 The former treat the whole dataset as one and can re-arrange all the elements (for example sort them), but the latter do their processing on each tile independently.
-First, we'll review the operations that work on the whole dataset.
+First, we will review the operations that work on the whole dataset.
 
 @cindex AWK
 @cindex GNU AWK
@@ -20387,7 +20380,7 @@ Print the mode of all used elements.
 The mode is found through the mirror distribution which is fully described in 
Appendix C of @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
2015}.
 See that section for a full description.
 
-This mode calculation algorithm is non-parametric, so when the dataset is not 
large enough (larger than about 1000 elements usually), or doesn't have a clear 
mode it can fail.
+This mode calculation algorithm is non-parametric, so when the dataset is not 
large enough (larger than about 1000 elements usually), or does not have a 
clear mode it can fail.
 In such cases, this option will return a value of @code{nan} (for the floating 
point NaN value).
 
 As described in that paper, the easiest way to assess the quality of this mode calculation method is to use its symmetricity (see @option{--modesym} below).
@@ -20434,10 +20427,10 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=3,0.1 \
 @cindex GNU AWK
 You can then use the @option{--range} option of Table (see @ref{Table}) to 
select the proper rows.
 But for that, you need the actual starting and ending values of the range 
(@mymath{m\pm s\sigma}; where @mymath{m} is the median and @mymath{s} is the 
multiple of sigma to define an outlier).
-Therefore, the raw outputs of Statistics in the command above aren't enough.
+Therefore, the raw outputs of Statistics in the command above are not enough.
 
 To get the starting and ending values of the non-outlier range (and put a 
`@key{,}' between them, ready to be used in @option{--range}), pipe the result 
into AWK.
-But in AWK, we'll also need the multiple of @mymath{\sigma}, so we'll define 
it as a shell variable (@code{s}) before calling Statistics (note how @code{$s} 
is used two times now):
+But in AWK, we will also need the multiple of @mymath{\sigma}, so we will 
define it as a shell variable (@code{s}) before calling Statistics (note how 
@code{$s} is used two times now):
 
 @example
 $ s=3
@@ -20446,7 +20439,7 @@ $ aststatistics table.fits -cCOLUMN 
--sclipparams=$s,0.1 \
      | awk '@{s='$s'; printf("%f,%f\n", $1-s*$2, $1+s*$2)@}'
 @end example
 
-To pass it onto Table, we'll need to keep the printed output from the command 
above in another shell variable (@code{r}), not print it.
+To pass it onto Table, we will need to keep the printed output from the 
command above in another shell variable (@code{r}), not print it.
 In Bash, you can do this by putting the whole statement within a @code{$()}:
 
 @example
@@ -20619,7 +20612,7 @@ When it is, this option is the last operation before 
the bins are finalized, the
 
 @item --manualbinrange
 Use the values given to the @option{--greaterequal} and @option{--lessthan} to 
define the range of all bin-based calculations like the histogram.
-This option itself doesn't take any value, but just tells the program to use 
the values of those two options instead of the minimum and maximum values of a 
plot.
+This option itself does not take any value, but just tells the program to use 
the values of those two options instead of the minimum and maximum values of a 
plot.
 If any of the two options are not given, then the minimum or maximum will be 
used respectively.
 Therefore, if none of them are called, calling this option is redundant.
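+For example, in the minimal sketch below, the histogram bins will only
+cover the range from 100 to 200, whatever the dataset's actual minimum
+and maximum may be:
+
+@example
+$ aststatistics image.fits --histogram --greaterequal=100 \
+                --lessthan=200 --manualbinrange
+@end example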
 
@@ -20644,9 +20637,9 @@ Similar to @option{--onebinstart}, but for the second 
column when a 2D histogram
 
 All the options described until now were from the first class of operations 
discussed above: those that treat the whole dataset as one.
 However, it often happens that the relative position of the dataset elements 
over the dataset is significant.
-For example you don't want one median value for the whole input image, you 
want to know how the median changes over the image.
+For example you do not want one median value for the whole input image, you 
want to know how the median changes over the image.
 For such operations, the input has to be tessellated (see @ref{Tessellation}).
-Thus this class of options can't currently be called along with the options 
above in one run of Statistics.
+Thus this class of options cannot currently be called along with the options 
above in one run of Statistics.
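+For example, a minimal sketch of a tile-based measurement (assuming
+Statistics' @option{--ontile} option, which repeats the requested
+measurement on each tile separately):
+
+@example
+$ aststatistics image.fits --median --ontile
+@end example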
 
 @table @option
 
@@ -20713,7 +20706,7 @@ Hence, in the latter case the value must be an integer.
 value (similar to @option{--sclipparams}).
 
 Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
-The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles for removing these false positive.
+The flatness of the profile will cause it to successfully pass the mean-median quantile difference test, so we will need to use the distribution of successful tiles for removing these false positives.
 For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
@@ -20730,8 +20723,8 @@ It is thus good to smooth the interpolated image so 
strong discontinuities do no
 The smoothing is done through convolution (see @ref{Convolution process}) with 
a flat kernel, so the value to this option must be an odd number.
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
-This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} isn't called.
+Do not set the input's blank pixels to blank in the tiled outputs (for example Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} is not called.
 
 By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
 But in other scenarios this default behavior is not desired: for example if 
you have masked something in the input, but want the tiled output under that 
also.
@@ -20764,7 +20757,7 @@ In particular, the most scientifically interesting 
astronomical targets are fain
 Therefore when noise is significant, proper detection of your targets is a 
uniquely decisive step in your final scientific analysis/result.
 
 @cindex Erosion
-NoiseChisel is Gnuastro's program for detection of targets that don't have a 
sharp border (almost all astronomical objects).
+NoiseChisel is Gnuastro's program for detection of targets that do not have a 
sharp border (almost all astronomical objects).
 When the targets have sharp edges/borders (for example cells in biological 
imaging), a simple threshold is enough to separate them from noise and each 
other (if they are not touching).
 To detect such sharp-edged targets, you can use Gnuastro's Arithmetic program 
in a command like below (assuming the threshold is @code{100}, see 
@ref{Arithmetic}):
 
@@ -20786,7 +20779,7 @@ Try running it with the @option{--checkdetection} 
option, and open the temporary
 
 @cindex Segmentation
 NoiseChisel's primary output is a binary detection map with the same size as 
the input but its pixels only have two values: 0 (background) and 1 
(foreground).
-Pixels that don't harbor any detected signal (noise) are given a label (or 
value) of zero and those with a value of 1 have been identified as hosting 
signal.
+Pixels that do not harbor any detected signal (noise) are given a label (or 
value) of zero and those with a value of 1 have been identified as hosting 
signal.
 
 Segmentation is the process of classifying the signal into higher-level 
constructs.
 For example if you have two separate galaxies in one image, NoiseChisel will 
give a value of 1 to the pixels of both (each forming an ``island'' of touching 
foreground pixels).
@@ -20798,9 +20791,9 @@ For more on NoiseChisel's output format and its 
benefits (especially in conjunct
 Just note that when that paper was published, Segment was not yet spun-off into a separate program, and NoiseChisel did both detection and segmentation.
 
 NoiseChisel's output is designed to be generic enough to be easily used in any 
higher-level analysis.
-If your targets are not touching after running NoiseChisel and you aren't 
interested in their sub-structure, you don't need the Segment program at all.
+If your targets are not touching after running NoiseChisel and you are not 
interested in their sub-structure, you do not need the Segment program at all.
 You can ask NoiseChisel to find the connected pixels in the output with the 
@option{--label} option.
-In this case, the output won't be a binary image any more, the signal will 
have counters/labels starting from 1 for each connected group of pixels.
+In this case, the output will not be a binary image any more; the signal will have counters/labels starting from 1 for each connected group of pixels.
 You can then directly feed NoiseChisel's output into MakeCatalog for 
measurements over the detections and the production of a catalog (see 
@ref{MakeCatalog}).
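+A minimal sketch of this workflow (assuming the labeled image is kept
+in the @code{DETECTIONS} extension of NoiseChisel's output):
+
+@example
+$ astnoisechisel image.fits --label --output=nc.fits
+$ astmkcatalog nc.fits -hDETECTIONS --ids --ra --dec
+@end example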
 
 Thanks to the published papers mentioned above, there is no need to provide a 
more complete introduction to NoiseChisel in this book.
@@ -20885,7 +20878,7 @@ $ astnoisechisel --numchannels=4,1 --tilesize=100,100 
input.fits
 
 @cindex Gaussian
 @noindent
-If NoiseChisel is to do processing (for example you don't want to get help, or 
see the values to each input parameter), an input image should be provided with 
the recognized extensions (see @ref{Arguments}).
+If NoiseChisel is to do processing (for example you do not want to get help, 
or see the values to each input parameter), an input image should be provided 
with the recognized extensions (see @ref{Arguments}).
 NoiseChisel shares a large set of common operations with other Gnuastro 
programs, mainly regarding input/output, general processing steps, and general 
operating modes.
 To help in a unified experience between all of Gnuastro's programs, these 
operations have the same command-line options, see @ref{Common options} for a 
full list/description (they are not repeated here).
 
@@ -20967,7 +20960,7 @@ Options to configure the detection are described in 
@ref{Detection options} and
 Finally, in @ref{NoiseChisel output} the format of NoiseChisel's output is 
discussed.
 The order of options here follow the same logical order that the respective 
action takes place within NoiseChisel (note that the output of @option{--help} 
is sorted alphabetically).
 
-Below, we'll discuss NoiseChisel's options, classified into two general 
classes, to help in easy navigation.
+Below, we will discuss NoiseChisel's options, classified into two general 
classes, to help in easy navigation.
 @ref{NoiseChisel input} mainly discusses the basic options relating to inputs, prior to the detection process.
 Afterwards, @ref{Detection options} fully describes every configuration 
parameter (option) related to detection and how they affect the final result.
 The order of options in this section follow the logical order within 
NoiseChisel.
@@ -21002,7 +20995,7 @@ If not called, for a 2D, image a 2D Gaussian profile 
with a FWHM of 2 pixels tru
 This choice of the default kernel is discussed in Section 3.1.1 of Akhlaghi 
and Ichikawa [2015].
 
 For a 3D cube, when no file name is given to @option{--kernel}, a Gaussian 
with FWHM of 1.5 pixels in the first two dimensions and 0.75 pixels in the 
third dimension will be used.
-The reason for this particular configuration is that commonly in astronomical 
applications, 3D datasets don't have the same nature in all three dimensions, 
commonly the first two dimensions are spatial (RA and Dec) while the third is 
spectral (for example wavelength).
+The reason for this particular configuration is that commonly in astronomical 
applications, 3D datasets do not have the same nature in all three dimensions, 
commonly the first two dimensions are spatial (RA and Dec) while the third is 
spectral (for example wavelength).
 The samplings are also different, in the default case, the spatial sampling is 
assumed to be larger than the spectral sampling, hence a wider FWHM in the 
spatial directions, see @ref{Sampling theorem}.
 
 You can use MakeProfiles to build a kernel with any of its recognized profile 
types and parameters.
@@ -21021,7 +21014,7 @@ HDU containing the kernel in the file given to the 
@option{--kernel}
 option.
 
 @item --convolved=FITS
-Use this file as the convolved image and don't do convolution (ignore 
@option{--kernel}).
+Use this file as the convolved image and do not do convolution (ignore 
@option{--kernel}).
 NoiseChisel will just check that the size of the given dataset is the same as the input's size.
 If a wrong image (with the same size) is given to this option, the results 
(errors, bugs, etc) are unpredictable.
 So please use this option with care and in a highly controlled environment, 
for example in the scenario discussed below.
@@ -21073,7 +21066,7 @@ The image that is convolved with this kernel is 
@emph{only} used for this purpos
 Once the mode is found to be sufficiently close to the median, the quantile 
threshold is found on the image convolved with the sharper kernel 
(@option{--kernel}, see @option{--qthresh}).
 
 Since convolution will significantly slow down the processing, this feature is 
optional.
-When it isn't given, the image that is convolved with @option{--kernel} will 
be used to identify good tiles @emph{and} apply the quantile threshold.
+When it is not given, the image that is convolved with @option{--kernel} will 
be used to identify good tiles @emph{and} apply the quantile threshold.
 This option is mainly useful in conditions where you have a very large, extended, diffuse signal that is still present in the usable tiles when using @option{--kernel}.
 See @ref{Detecting large extended targets} for a practical demonstration on 
how to inspect the tiles used in identifying the quantile threshold.
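+A minimal sketch of such a controlled scenario (convolve once with
+Convolve, then re-use the result in a NoiseChisel run; the HDU options
+are left at their defaults here):
+
+@example
+$ astconvolve image.fits --kernel=kernel.fits --output=conv.fits
+$ astnoisechisel image.fits --convolved=conv.fits
+@end example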
 
@@ -21108,7 +21101,7 @@ The format of the given values is similar to 
@option{--sigmaclip} below.
 In NoiseChisel, outlier rejection on tiles is used when identifying the 
quantile thresholds (@option{--qthresh}, @option{--noerodequant}, and 
@option{--detgrowquant}).
 
 Outlier rejection is useful when the dataset contains a large and diffuse 
(almost flat within each tile) signal.
-The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we'll need to use the distribution of successful 
tiles for removing these false positives.
+The flatness of the profile will cause it to successfully pass the mean-median 
quantile difference test, so we will need to use the distribution of successful 
tiles for removing these false positives.
 For more, see the latter half of @ref{Quantifying signal in a tile}.
 
 @item --outliersigma=FLT
@@ -21120,7 +21113,7 @@ For more see @option{--outliersclip} and the latter 
half of @ref{Quantifying sig
 @itemx --qthresh=FLT
 The quantile threshold to apply to the convolved image.
 The detection process begins with applying a quantile threshold to each of the 
tiles in the small tessellation.
-The quantile is only calculated for tiles that don't have any significant 
signal within them, see @ref{Quantifying signal in a tile}.
+The quantile is only calculated for tiles that do not have any significant 
signal within them, see @ref{Quantifying signal in a tile}.
 Interpolation is then used to give a value to the unsuccessful tiles and it is 
finally smoothed.
 
 @cindex Quantile
@@ -21437,12 +21430,12 @@ NoiseChisel's output is a multi-extension FITS file.
 The main extension/dataset is a (binary) detection map.
 It has the same size as the input but with only two possible values for all 
pixels: 0 (for pixels identified as noise) and 1 (for those identified as 
signal/detections).
 The detection map is followed by the Sky and Sky standard deviation datasets (which are calculated from the binary image).
-By default (when @option{--rawoutput} isn't called), NoiseChisel will also 
subtract the Sky value from the input and save the sky-subtracted input as the 
first extension in the output with data.
+By default (when @option{--rawoutput} is not called), NoiseChisel will also 
subtract the Sky value from the input and save the sky-subtracted input as the 
first extension in the output with data.
 The zero-th extension (that contains no data) contains NoiseChisel's configuration as FITS keywords, see @ref{Output FITS files}.
 
 The name of the output file can be set by giving a value to @option{--output} 
(this is a common option between all programs and is therefore discussed in 
@ref{Input output options}).
-If @option{--output} isn't used, the input name will be suffixed with 
@file{_detected.fits} and used as output, see @ref{Automatic output}.
-If any of the options starting with @option{--check*} are given, NoiseChisel 
won't complete and will abort as soon as the respective check images are 
created.
+If @option{--output} is not used, the input name will be suffixed with 
@file{_detected.fits} and used as output, see @ref{Automatic output}.
+If any of the options starting with @option{--check*} are given, NoiseChisel 
will not complete and will abort as soon as the respective check images are 
created.
 For more information on the different check images, see the description for 
the @option{--check*} options in @ref{Detection options} (this can be disabled 
with @option{--continueaftercheck}).
 
 The last two extensions of the output are the Sky and its Standard deviation, 
see @ref{Sky value} for a complete explanation.
@@ -21468,8 +21461,8 @@ To encourage easier experimentation with the option 
values, when you use any of
 With the @option{--continueaftercheck} option, you can disable this behavior and ask NoiseChisel to continue with the rest of the processing, even after the requested check files are complete.
 
 @item --ignoreblankintiles
-Don't set the input's blank pixels to blank in the tiled outputs (for example 
Sky and Sky standard deviation extensions of the output).
-This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} isn't called.
+Do not set the input's blank pixels to blank in the tiled outputs (for example Sky and Sky standard deviation extensions of the output).
+This is only applicable when the tiled output has the same size as the input, 
in other words, when @option{--oneelempertile} is not called.
 
 By default, blank values in the input (commonly on the edges which are outside 
the survey/field area) will be set to blank in the tiled outputs also.
 But in other scenarios this default behavior is not desired: for example, if you have masked something in the input, but still want the tiled output to have values under the masked region.
@@ -21497,18 +21490,18 @@ $ astarithmetic nc.fits 2 connected-components 
-hDETECTIONS
 @end example
 
 @item --rawoutput
-Don't include the Sky-subtracted input image as the first extension of the 
output.
+Do not include the Sky-subtracted input image as the first extension of the output.
 By default, the Sky-subtracted input is put in the first extension of the 
output.
 The next extensions are NoiseChisel's main outputs described above.
 
-The extra Sky-subtracted input can be convenient in checking NoiseChisel's 
output and comparing the detection map with the input: visually see if 
everything you expected is detected (reasonable completeness) and that you 
don't have too many false detections (reasonable purity).
+The extra Sky-subtracted input can be convenient in checking NoiseChisel's 
output and comparing the detection map with the input: visually see if 
everything you expected is detected (reasonable completeness) and that you do 
not have too many false detections (reasonable purity).
 This visual inspection is simplified if you use SAO DS9 to view NoiseChisel's 
output as a multi-extension data-cube, see @ref{Viewing FITS file contents with 
DS9 or TOPCAT}.
 
-When you are satisfied with your NoiseChisel configuration (therefore you 
don't need to check on every run), or you want to archive/transfer the outputs, 
or the datasets become large, or you are running NoiseChisel as part of a 
pipeline, this Sky-subtracted input image can be a significant burden (take up 
a large volume).
+When you are satisfied with your NoiseChisel configuration (therefore you do 
not need to check on every run), or you want to archive/transfer the outputs, 
or the datasets become large, or you are running NoiseChisel as part of a 
pipeline, this Sky-subtracted input image can be a significant burden (take up 
a large volume).
 The fact that the input is also noisy makes it hard to compress efficiently.
 
 In such cases, the @option{--rawoutput} option can be used to avoid the extra sky-subtracted input in the output.
-It is always possible to easily produce the Sky-subtracted dataset from the 
input (assuming it is in extension @code{1} of @file{in.fits}) and the 
@code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) 
with a command like below (assuming NoiseChisel wasn't run with 
@option{--oneelempertile}, see @ref{Tessellation}):
+It is always possible to easily produce the Sky-subtracted dataset from the 
input (assuming it is in extension @code{1} of @file{in.fits}) and the 
@code{SKY} extension of NoiseChisel's output (let's call it @file{nc.fits}) 
with a command like below (assuming NoiseChisel was not run with 
@option{--oneelempertile}, see @ref{Tessellation}):
 
 @example
 $ astarithmetic in.fits nc.fits - -h1 -hSKY
@@ -21547,12 +21540,12 @@ See @ref{NoiseChisel optimization for storage} for an 
example on a real dataset.
 
 Once signal is separated from noise (for example with @ref{NoiseChisel}), you 
have a binary dataset: each pixel is either signal (1) or noise (0).
 Signal (for example every galaxy in your image) has been ``detected'', but all 
detections have a label of 1.
-Therefore while we know which pixels contain signal, we still can't find out 
how many galaxies they contain or which detected pixels correspond to which 
galaxy.
+Therefore while we know which pixels contain signal, we still cannot find out 
how many galaxies they contain or which detected pixels correspond to which 
galaxy.
 At the lowest (most generic) level, detection is a kind of segmentation 
(segmenting the whole dataset into signal and noise, see @ref{NoiseChisel}).
-Here, we'll define segmentation only on signal: to separate sub-structure 
within the detections.
+Here, we will define segmentation only on signal: to separate sub-structure 
within the detections.
 
 @cindex Connected component labeling
-If the targets are clearly separated, or their detected regions aren't 
touching, a simple connected 
components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
 algorithm (very basic segmentation) is enough to separate the regions that are 
touching/connected.
+If the targets are clearly separated, or their detected regions are not 
touching, a simple connected 
components@footnote{@url{https://en.wikipedia.org/wiki/Connected-component_labeling}}
 algorithm (very basic segmentation) is enough to separate the regions that are 
touching/connected.
 This is such a basic and simple form of segmentation that Gnuastro's 
Arithmetic program has an operator for it: see @code{connected-components} in 
@ref{Arithmetic operators}.
 Assuming the binary dataset is called @file{binary.fits}, you can use it with 
a command like this:
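
 A minimal sketch of that command (the @code{2} operand is the
connectivity, as in the @code{connected-components} call shown
earlier):

 @example
 $ astarithmetic binary.fits 2 connected-components
 @end example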
 
@@ -21601,7 +21594,7 @@ Those papers cannot be updated any more but the 
software will evolve.
 For example Segment became a separate program (from NoiseChisel) in 2018 
(after those papers were published).
 Therefore this book is the definitive reference.
 @c To help in the transition from those papers to the software you are using, 
see @ref{Segment changes after publication}.
-Finally, in @ref{Invoking astsegment}, we'll discuss Segment's inputs, outputs 
and configuration options.
+Finally, in @ref{Invoking astsegment}, we will discuss Segment's inputs, 
outputs and configuration options.
 
 
 @menu
@@ -21654,7 +21647,7 @@ $ astsegment in.fits --std=0.01 --detection=all 
--clumpsnthresh=10
 
 @cindex Gaussian
 @noindent
-If Segment is to do processing (for example you don't want to get help, or see 
the values of each option), at least one input dataset is necessary along with 
detection and error information, either as separate datasets (per-pixel) or 
fixed values, see @ref{Segment input}.
+If Segment is to do processing (for example you do not want to get help, or 
see the values of each option), at least one input dataset is necessary along 
with detection and error information, either as separate datasets (per-pixel) 
or fixed values, see @ref{Segment input}.
 Segment shares a large set of common operations with other Gnuastro programs, 
mainly regarding input/output, general processing steps, and general operating 
modes.
 To help in a unified experience between all of Gnuastro's programs, these common operations have the same names and are defined in @ref{Common options}.
 
@@ -21665,7 +21658,7 @@ To inspect the option values without actually running 
Segment, append your comma
 
 To help in easy navigation between Segment's options, they are separately 
discussed in the three sub-sections below: @ref{Segment input} discusses how 
you can customize the inputs to Segment.
 @ref{Segmentation options} is devoted to options specific to the high-level 
segmentation process.
-Finally, in @ref{Segment output}, we'll discuss options that affect Segment's 
output.
+Finally, in @ref{Segment output}, we will discuss options that affect 
Segment's output.
 
 @menu
 * Segment input::               Input files and options.
@@ -21678,17 +21671,17 @@ Finally, in @ref{Segment output}, we'll discuss 
options that affect Segment's ou
 
 Besides the input dataset (for example astronomical image), Segment also needs 
to know the Sky standard deviation and the regions of the dataset that it 
should segment.
 The values dataset is assumed to be Sky subtracted by default.
-If it isn't, you can ask Segment to subtract the Sky internally by calling 
@option{--sky}.
-For the rest of this discussion, we'll assume it is already sky subtracted.
+If it is not, you can ask Segment to subtract the Sky internally by calling 
@option{--sky}.
+For the rest of this discussion, we will assume it is already sky subtracted.
 
 The Sky and its standard deviation can be a single value (to be used for the 
whole dataset) or a separate dataset (for a separate value per pixel).
 If a dataset is used for the Sky and its standard deviation, they must either 
be the size of the input image, or have a single value per tile (generated with 
@option{--oneelempertile}, see @ref{Processing options} and @ref{Tessellation}).
 
 The detected regions/pixels can be specified as a detection map (for example 
see @ref{NoiseChisel output}).
-If @option{--detection=all}, Segment won't read any detection map and assume 
the whole input is a single detection.
+If @option{--detection=all}, Segment will not read any detection map and 
assume the whole input is a single detection.
 For example when the dataset is fully covered by a large nearby 
galaxy/globular cluster.
 
-When dataset are to be used for any of the inputs, Segment will assume they 
are multiple extensions of a single file by default (when @option{--std} or 
@option{--detection} aren't called).
+When datasets are to be used for any of the inputs, Segment will assume they are multiple extensions of a single file by default (when @option{--std} or @option{--detection} are not called).
 For example NoiseChisel's default output (see @ref{NoiseChisel output}).
 When the Sky-subtracted values are in one file, and the detection and Sky 
standard deviation are in another, you just need to use @option{--detection}: 
in the absence of @option{--std}, Segment will look for both the detection 
labels and Sky standard deviation in the file given to @option{--detection}.
 Ultimately, if all three are in separate files, you need to call both 
@option{--detection} and @option{--std}.
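
 For example, a sketch where the values, detection labels and Sky
standard deviation are in three separate (hypothetical) files:

 @example
 $ astsegment in.fits --detection=det.fits --std=std.fits
 @end example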
@@ -21705,7 +21698,7 @@ $ astsegment -P | grep hdu
 
 By default Segment will convolve the input with a kernel to improve the 
signal-to-noise ratio of true peaks.
 If you already have the convolved input dataset, you can pass it directly to 
Segment for faster processing (using the @option{--convolved} and 
@option{--chdu} options).
-Just don't forget that the convolved image must also be Sky-subtracted before 
calling Segment.
+Just do not forget that the convolved image must also be Sky-subtracted before 
calling Segment.
 If a value/file is given to @option{--sky}, the convolved values will also be 
Sky subtracted internally.
 Alternatively, if you prefer to give a kernel (with @option{--kernel} and 
@option{--khdu}), Segment can do the convolution internally.
 To disable convolution, use @option{--kernel=none}.
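
 For example, a sketch (with hypothetical file names) of feeding an
already convolved, Sky-subtracted image to Segment:

 @example
 $ astsegment in.fits --convolved=conv.fits --chdu=1
 @end example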
@@ -21717,9 +21710,9 @@ The Sky value(s) to subtract from the input.
 This option can either be given a constant number or a file name containing a 
dataset (multiple values, per pixel or per tile).
 By default, Segment will assume the input dataset is Sky subtracted, so this 
option is not mandatory.
 
-If the value can't be read as a number, it is assumed to be a file name.
+If the value cannot be read as a number, it is assumed to be a file name.
 When the value is a file, the extension can be specified with 
@option{--skyhdu}.
-When its not a single number, the given dataset must either have the same size 
as the output or the same size as the tessellation (so there is one pixel per 
tile, see @ref{Tessellation}).
+When it is not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one pixel 
per tile, see @ref{Tessellation}).
 
 When this option is given, its value(s) will be subtracted from the input and 
the (optional) convolved dataset (given to @option{--convolved}) prior to 
starting the segmentation process.
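
 For example, two sketches of this option, one with a constant Sky
value and one with a (hypothetical) Sky dataset:

 @example
 $ astsegment in.fits --sky=0.01 --std=0.005
 $ astsegment in.fits --sky=sky.fits --std=std.fits
 @end example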
 
@@ -21733,9 +21726,9 @@ The Sky standard deviation value(s) corresponding to 
the input.
 The value can either be a constant number or a file name containing a dataset 
(multiple values, per pixel or per tile).
 The Sky standard deviation is mandatory for Segment to operate.
 
-If the value can't be read as a number, it is assumed to be a file name.
+If the value cannot be read as a number, it is assumed to be a file name.
 When the value is a file, the extension can be specified with @option{--stdhdu}.
-When its not a single number, the given dataset must either have the same size 
as the output or the same size as the tessellation (so there is one pixel per 
tile, see @ref{Tessellation}).
+When it is not a single number, the given dataset must either have the same 
size as the output or the same size as the tessellation (so there is one pixel 
per tile, see @ref{Tessellation}).
 
 When this option is not called, Segment will assume the standard deviation is a dataset, in a HDU/extension (@option{--stdhdu}) of one of the other input file(s).
 If a file is given to @option{--detection}, it will assume that file contains the standard deviation dataset; otherwise, it will look into the input filename (the main argument, without any option).
@@ -21793,7 +21786,7 @@ Please see @ref{NoiseChisel input} for a thorough 
discussion of the usefulness a
 If you want to use the same convolution kernel for detection (with 
@ref{NoiseChisel}) and segmentation, with this option, you can use the same 
convolved image (that is also available in NoiseChisel) and avoid two 
convolutions.
 However, just be careful to use the input to NoiseChisel as the input to 
Segment also, then use the @option{--sky} and @option{--std} to specify the Sky 
and its standard deviation (from NoiseChisel's output).
 Recall that when NoiseChisel is not called with @option{--rawoutput}, the 
first extension of NoiseChisel's output is the @emph{Sky-subtracted} input (see 
@ref{NoiseChisel output}).
-So if you use the same convolved image that you fed to NoiseChisel, but use 
NoiseChisel's output with Segment's @option{--convolved}, then the convolved 
image won't be Sky subtracted.
+So if you use the same convolved image that you fed to NoiseChisel, but use 
NoiseChisel's output with Segment's @option{--convolved}, then the convolved 
image will not be Sky subtracted.
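
 A sketch of that workflow (file names are hypothetical; @code{SKY}
and @code{SKY_STD} are NoiseChisel's output extensions):

 @example
 $ astnoisechisel image.fits --convolved=conv.fits --output=nc.fits
 $ astsegment image.fits --convolved=conv.fits \
              --detection=nc.fits --sky=nc.fits --skyhdu=SKY \
              --std=nc.fits --stdhdu=SKY_STD
 @end example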
 
 @item --chdu
 The HDU/extension containing the convolved image (given to 
@option{--convolved}).
@@ -21914,7 +21907,7 @@ Any method we devise to define `object's over a 
detected region is ultimately su
 Two very distant galaxies or satellites in one halo might lie in the same line 
of sight and be detected as clumps on one detection.
 On the other hand, the connection (through a spiral arm or tidal tail for 
example) between two parts of one galaxy might have such a low surface 
brightness that they are broken up into multiple detections or objects.
 In fact, if you have noticed, exactly for this purpose, this is the only Signal to noise ratio that the user gives to NoiseChisel.
-The `true' detections and clumps can be objectively identified from the noise 
characteristics of the image, so you don't have to give any hand input Signal 
to noise ratio.
+The `true' detections and clumps can be objectively identified from the noise characteristics of the image, so you do not have to specify any Signal to noise ratio by hand.
 
 @item --checksegmentation
 A file with the suffix @file{_seg.fits} will be created.
@@ -21947,7 +21940,7 @@ $ astfits image_segmented.fits -h0 | grep -i snquant
 
 @cindex DS9
 @cindex SAO DS9
-By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's 
output will also contain the (technically redundant) input dataset and the sky 
standard deviation dataset (if it wasn't a constant number).
+By default, besides the @code{CLUMPS} and @code{OBJECTS} extensions, Segment's 
output will also contain the (technically redundant) input dataset and the sky 
standard deviation dataset (if it was not a constant number).
 This can help in visually inspecting the result when viewing the images as a 
``Multi-extension data cube'' in SAO DS9 for example (see @ref{Viewing FITS 
file contents with DS9 or TOPCAT}).
 You can simply flip through the extensions and see the same region of the 
image and its corresponding clumps/object labels.
 It also makes it easy to feed the output (as one file) into MakeCatalog when you intend to make a catalog afterwards (see @ref{MakeCatalog}).
@@ -21958,7 +21951,7 @@ If you want to treat each clump separately, you can 
give a very large value (or
 
 For a complete definition of clumps and objects, please see Section 3.2 of 
@url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]} and 
@ref{Segmentation options}.
 The clumps are ``true'' local maxima (minima if @option{--minima} is called) 
and their surrounding pixels until a local minimum/maximum (caused by noise 
fluctuations, or another ``true'' clump).
-Therefore it may happen that some of the input detections aren't covered by 
clumps at all (very diffuse objects without any strong peak), while some 
objects may contain many clumps.
+Therefore it may happen that some of the input detections are not covered by 
clumps at all (very diffuse objects without any strong peak), while some 
objects may contain many clumps.
 Even in those that have clumps, there will be regions that are too diffuse.
 The diffuse regions (within the input detected regions) are given a negative 
label (-1) to help you separate them from the undetected regions (with a value 
of zero).
 
@@ -21976,12 +21969,12 @@ The options to configure the output of Segment are 
listed below:
 
 @table @option
 @item --continueaftercheck
-Don't abort Segment after producing the check image(s).
+Do not abort Segment after producing the check image(s).
 The usage of this option is identical to NoiseChisel's 
@option{--continueaftercheck} option (@ref{NoiseChisel input}).
 Please see the descriptions there for more.
 
 @item --noobjects
-Abort Segment after finding true clumps and don't continue with finding 
options.
+Abort Segment after finding true clumps and do not continue with finding objects.
 Therefore, no @code{OBJECTS} extension will be present in the output.
 Each true clump in @code{CLUMPS} will get a unique label, but diffuse regions 
will still have a negative value.
 
@@ -21994,7 +21987,7 @@ If a detected region contains no clumps or only one 
clump, then it will be fully
 
 @item --rawoutput
 Only write the @code{CLUMPS} and @code{OBJECTS} datasets in the output file.
-Without this option (by default), the first and last extensions of the output 
will the Sky-subtracted input dataset and the Sky standard deviation dataset 
(if it wasn't a number).
+Without this option (by default), the first and last extensions of the output will be the Sky-subtracted input dataset and the Sky standard deviation dataset (if it was not a number).
 When the datasets are small, these redundant extensions can make it convenient 
to inspect the results visually or feed the output to @ref{MakeCatalog} for 
measurements.
 Ultimately both the input and Sky standard deviation datasets are redundant 
(you had them before running Segment).
 When the inputs are large/numerous, these extra datasets can be a burden.
@@ -22073,11 +22066,11 @@ Similarly, the sum of all these pixels will be the 
42nd row in the brightness co
 Pixels with labels equal to, or smaller than, zero will be ignored by 
MakeCatalog.
 In other words, the number of rows in MakeCatalog's output is already known 
before running it (the maximum value of the labeled dataset).
 
-Before getting into the details of running MakeCatalog (in @ref{Invoking 
astmkcatalog}, we'll start with a discussion on the basics of its approach to 
separating detection from measurements in @ref{Detection and catalog 
production}.
+Before getting into the details of running MakeCatalog (in @ref{Invoking astmkcatalog}), we will start with a discussion on the basics of its approach to separating detection from measurements in @ref{Detection and catalog production}.
 A very important factor in any measurement is understanding its validity 
range, or limits.
-Therefore in @ref{Quantifying measurement limits}, we'll discuss how to 
estimate the reliability of the detection and basic measurements.
+Therefore in @ref{Quantifying measurement limits}, we will discuss how to 
estimate the reliability of the detection and basic measurements.
 This section will continue with a derivation of elliptical parameters from the 
labeled datasets in @ref{Measuring elliptical parameters}.
-For those who feel MakeCatalog's existing measurements/columns aren't enough 
and would like to add further measurements, in @ref{Adding new columns to 
MakeCatalog}, a checklist of steps is provided for readily adding your own new 
measurements/columns.
+For those who feel MakeCatalog's existing measurements/columns are not enough 
and would like to add further measurements, in @ref{Adding new columns to 
MakeCatalog}, a checklist of steps is provided for readily adding your own new 
measurements/columns.
 
 @menu
 * Detection and catalog production::  Discussing why/how to treat these 
separately
@@ -22215,7 +22208,7 @@ We can therefore estimate the brightness (@mymath{B_z}, 
in @mymath{\mu{Jy}}) cor
 @cindex SDSS
 Because the image zero point corresponds to a pixel value of @mymath{1}, the 
@mymath{B_z} value calculated above also corresponds to a pixel value of 
@mymath{1}.
 Therefore you simply have to multiply your image by @mymath{B_z} to convert it 
to @mymath{\mu{Jy}}.
-Don't forget that this only applies when your zero point was also estimated in 
the AB magnitude system.
+Do Not forget that this only applies when your zero point was also estimated 
in the AB magnitude system.
 On the command-line, you can estimate this value for a certain zero point with 
AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
 For example let's assume you are using an SDSS image with a zero point of 22.5:
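
 @example
 ## Sketch: in the AB system, a zero point z corresponds to
 ## 3631 x 10^(-z/2.5) Jy per count; 'sdss.fits' is hypothetical.
 $ bz=$(echo 22.5 | awk '{print 3631 * 10^(-$1/2.5) * 1e6}')
 $ astarithmetic sdss.fits $bz x --output=sdss-mujy.fits
 @end example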
 
@@ -22261,7 +22254,7 @@ Let's derive the equation below:
 @dispmath{m = -2.5\log_{10}(C/t) + Z = -2.5\log_{10}(C) + 2.5\log_{10}(t) + Z}
 Therefore, defining an exposure-time-dependent zero point as @mymath{Z(t)}, we 
can directly correlate a certain object's magnitude with counts after an 
exposure of @mymath{t} seconds:
 @dispmath{m = -2.5\log_{10}(C) + Z(t) \quad\rm{where}\quad Z(t)=Z + 
2.5\log_{10}(t)}
-This solution is useful in programs like @ref{MakeCatalog} or 
@ref{MakeProfiles}, when you can't (or don't want to: because of the extra 
storage/speed costs) manipulate the values image (for example divide it by the 
exposure time to use a counts/sec zero point).
+This solution is useful in programs like @ref{MakeCatalog} or 
@ref{MakeProfiles}, when you cannot (or do not want to: because of the extra 
storage/speed costs) manipulate the values image (for example divide it by the 
exposure time to use a counts/sec zero point).
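
 For example, a sketch of computing @mymath{Z(t)} for a counts/sec
zero point of 22.5 and a 600 second exposure (AWK's @code{log} is the
natural logarithm):

 @example
 $ echo 22.5 600 | awk '{print $1 + 2.5*log($2)/log(10)}'
 @end example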
 
 @item Surface brightness
 @cindex Steradian
@@ -22305,7 +22298,7 @@ And see @ref{FITS images in a publication} for a fully 
working tutorial on how t
 
 @cartouche
 @noindent
-@strong{Don't warp or convolve magnitude or surface brightness images:} 
Warping an image involves calculating new pixel values (of the new pixel grid) 
from the old pixel values.
+@strong{Do not warp or convolve magnitude or surface brightness images:} Warping an image involves calculating new pixel values (of the new pixel grid) from the old pixel values.
 Convolution is also a process of finding the weighted mean of pixel values.
 During these processes, many arithmetic operations are done on the original 
pixel values, for example addition or multiplication.
 However, @mymath{\log_{10}(a+b)\ne \log_{10}(a)+\log_{10}(b)}.
@@ -22333,7 +22326,7 @@ Hence, quantifying the detection and measurement 
limitations with a particular d
 In two separate tutorials, we have touched upon some of these points.
 So to see the discussions below in action (on real data), see @ref{Measuring 
the dataset limits} and @ref{Image surface brightness limit}.
 
-Here, we'll review some of the most commonly used methods to quantify the 
limits in astronomical data analysis and how MakeCatalog makes it easy to 
measure them.
+Here, we will review some of the most commonly used methods to quantify the 
limits in astronomical data analysis and how MakeCatalog makes it easy to 
measure them.
 Depending on the higher-level analysis, there are more tests that must be 
done, but these are relatively low-level and usually necessary in most cases.
 In astronomy, it is common to use the magnitude (a unit-less scale) and 
physical units, see @ref{Brightness flux magnitude}.
 Therefore the measurements discussed here are commonly used in units of 
magnitudes.
@@ -22385,7 +22378,7 @@ For more on @mymath{\Delta{A}}, see the description of 
@option{--spatialresoluti
 @dispmath{\Delta{(SB)} = \Delta{M} + \left|{-2.5\over 
ln(10)}\right|\times{\Delta{A}\over{A}}}
 
 In the surface brightness equation mentioned above, @mymath{A} is in units of 
arcsecond squared and the conversion between arcseconds to pixels is a 
multiplication factor.
-Therefore as long as @mymath{A} and @mymath{\Delta{A}} have the same units, it 
doesn't matter if they are in arcseconds or pixels.
+Therefore as long as @mymath{A} and @mymath{\Delta{A}} have the same units, it 
does not matter if they are in arcseconds or pixels.
 Since the measure of spatial resolution (or area error) is the FWHM of the PSF, which is usually defined in terms of pixels, it is more intuitive to use pixels for @mymath{A} and @mymath{\Delta{A}}.
 
 @node Completeness limit of each detection, Upper limit magnitude of each 
detection, Surface brightness error of each detection, Quantifying measurement 
limits
@@ -22436,7 +22429,7 @@ The standard deviation (@mymath{\sigma}) of that 
distribution can be used to qua
 @dispmath{M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad 
[mag/target]}
 
 @cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example with a diameter of @mymath{N} arc-seconds) and there 
wasn't much processing involved (to make a deep stack).
+Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example with a diameter of @mymath{N} arc-seconds) and there was 
not much processing involved (to make a deep stack).
 Hence, the upper limit was synonymous with the surface brightness limit 
discussed above: one value for the whole image.
 The problem with this simplified approach is that the number of pixels in the 
aperture directly affects the final distribution and thus magnitude.
 Also the image correlated noise might actually create certain patterns, so the 
shape of the object can also affect the final result.
@@ -22481,7 +22474,7 @@ The noise-level is known as the dataset's surface 
brightness limit.
 You can think of the noise as muddy water that is completely covering a flat 
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky 
value}.
 Note that this analogy only holds for a flat sky value across the surface of 
the image or ground.}.
 The signal (coming from astronomical objects in real data) will be 
summits/hills that start from the flat sky level (under the muddy water) and 
their summits can sometimes reach above the muddy water.
-Let's assume that in your first observation the muddy water has just been 
stirred and except a few small peaks, you can't see anything through the mud.
+Let's assume that in your first observation the muddy water has just been 
stirred and except a few small peaks, you cannot see anything through the mud.
 As you wait and make more observations/exposures, the mud settles down and the 
@emph{depth} of the transparent water increases.
 As a result, more and more summits become visible and the lower parts of the 
hills (parts with lower surface brightness) can be seen more clearly.
 In this analogy@footnote{Note that this muddy water analogy is not perfect, 
because while the water-level remains the same all over a peak, in data 
analysis, the Poisson noise increases with the level of data.}, height (from 
the ground) is the @emph{surface brightness} and the height of the muddy water 
at the moment you combine your data, is your @emph{surface brightness limit} 
for that moment.
@@ -22496,7 +22489,7 @@ It is recorded in the @code{MEDSTD} keyword of the 
@code{SKY_STD} extension of N
 @cindex Limit, surface brightness
 On different instruments, pixels cover different spatial angles over the sky.
 For example, the width of each pixel on the ACS camera on the Hubble Space 
Telescope (HST) is roughly 0.05 seconds of arc, while the pixels of SDSS are 
each 0.396 seconds of arc (almost eight times wider@footnote{Ground-based 
instruments like the SDSS suffer from strong smoothing due to the atmosphere.
-Therefore, increasing the pixel resolution (or decreasing the width of a 
pixel) won't increase the received information).}).
+Therefore, increasing the pixel resolution (or decreasing the width of a 
pixel) will not increase the received information).}).
 Nevertheless, irrespective of its sky coverage, a pixel is our unit of data 
collection.
 
 To start with, we define the low-level Surface brightness limit or 
@emph{depth}, in units of magnitude/pixel with the equation below (assuming the 
image has zero point magnitude @mymath{z} and we want the @mymath{n}th multiple 
of @mymath{\sigma_m}).
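
 As a numeric sketch of this definition (taking the same form as the
upper-limit magnitude above, @mymath{-2.5\log_{10}(n\sigma_m)+z}, with
hypothetical values @mymath{n=1}, @mymath{\sigma_m=0.0075} and
@mymath{z=22.5}):

 @example
 $ echo 0.0075 22.5 | awk '{print -2.5*log($1)/log(10) + $2}'
 @end example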
@@ -22598,7 +22591,7 @@ In other words, it quantifies how ``sharp'' the 
object's image is.
 
 @cindex Floating point error
 Before continuing to two dimensions and the derivation of the elliptical 
parameters, let's pause for an important implementation technicality.
-You can ignore this paragraph and the next two if you don't want to implement 
these concepts.
+You can ignore this paragraph and the next two if you do not want to implement 
these concepts.
 The basic definition (first definition of @mymath{\overline{x^2}} above) can 
be used without any major problem.
 However, using this fraction requires two runs over the data: one run to find @mymath{\overline{x}} and another run to find @mymath{\overline{x^2}} from @mymath{\overline{x}}; this can be slow.
 The advantage of the last fraction above is that we can estimate both the first and second moments in one run (since the @mymath{-\overline{x}^2} term can easily be added later).
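
 As a one-dimensional sketch of this single-run approach (accumulating
the sums of @mymath{x} and @mymath{x^2} together, then adding the
@mymath{-\overline{x}^2} term at the end; @file{values.txt} is a
hypothetical single-column file):

 @example
 $ awk '{s+=$1; ss+=$1*$1; n++}
        END{m=s/n; print m, ss/n - m*m}' values.txt
 @end example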
@@ -22695,7 +22688,7 @@ So if your new calculation, needs new raw information 
from the pixels, then you
 
 In all these different places, the final columns are sorted in the same order 
(same order as @ref{Invoking astmkcatalog}).
 This allows a particular column/option to be easily found in all steps.
-Therefore in adding your new option, be sure to keep it in the same relative 
place in the list in all the separate places (it doesn't necessarily have to be 
in the end), and near conceptually similar options.
+Therefore in adding your new option, be sure to keep it in the same relative 
place in the list in all the separate places (it does not necessarily have to 
be in the end), and near conceptually similar options.
 
 @table @file
 
@@ -22711,7 +22704,7 @@ Afterwards, if your column includes any particular 
settings (you needed to add a
 @item ui.h
 The @code{option_keys_enum} associates a unique value with each option in MakeCatalog.
 For the options that have a short option version, the single character short option is used for the value.
-Those that don't have a short option version, get a large integer 
automatically.
+Those that do not have a short option version, get a large integer 
automatically.
 You should add a variable here to identify your desired column.
 
 
@@ -22805,7 +22798,7 @@ Finally, in @ref{MakeCatalog output} the output file(s) 
created by MakeCatalog a
 MakeCatalog works by using a localized/labeled dataset (see @ref{MakeCatalog}).
 This dataset maps/labels pixels to a specific target (row number in the final 
catalog) and is thus the only necessary input dataset to produce a minimal 
catalog in any situation.
 Because it only has labels/counters, it must have an integer type (see 
@ref{Numeric data types}), see below if your labels are in a floating point 
container.
-When the requested measurements only need this dataset (for example 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog won't 
read any more datasets.
+When the requested measurements only need this dataset (for example 
@option{--geox}, @option{--geoy}, or @option{--geoarea}), MakeCatalog will not 
read any more datasets.
 
 Low-level measurements that only use the labeled image are rarely sufficient 
for any high-level science case.
 Therefore necessary input datasets depend on the requested columns in each run.
@@ -22817,7 +22810,7 @@ For the full list of available measurements, see 
@ref{MakeCatalog measurements}.
 
 The ``values'' dataset is used for measurements like brightness/magnitude, or 
flux-weighted positions.
 If it is a real image, by default it is assumed to be already Sky-subtracted 
prior to running MakeCatalog.
-If it isn't, you use the @option{--subtractsky} option to, so MakeCatalog 
reads and subtracts the Sky dataset before any processing.
+If it is not, you can use the @option{--subtractsky} option, so MakeCatalog reads and subtracts the Sky dataset before any processing.
 To obtain the Sky value, you can use the @option{--sky} option of 
@ref{Statistics}, but the best recommended method is @ref{NoiseChisel}, see 
@ref{Sky value}.
 
 MakeCatalog can also do measurements on sub-structures of detections.
@@ -22835,14 +22828,14 @@ When none of the @option{--*file} options are given, 
MakeCatalog will assume the
 When the Sky or Sky standard deviation datasets are necessary and the only 
@option{--*file} option called is @option{--valuesfile}, MakeCatalog will 
search for these datasets (with the default/given HDUs) in the file given to 
@option{--valuesfile} (before looking into the main argument file).
 
 When the clumps image (necessary with the @option{--clumpscat} option) is 
used, MakeCatalog looks into the (possibly existing) @code{NUMLABS} keyword for 
the total number of clumps in the image (irrespective of how many objects there 
are).
-If its not present, it will count them and possibly re-label the clumps so the 
clump labels always start with 1 and finish with the total number of clumps in 
each object.
+If it is not present, it will count them and possibly re-label the clumps so 
the clump labels always start with 1 and finish with the total number of clumps 
in each object.
 The re-labeled clumps image will be stored with the @file{-clumps-relab.fits} 
suffix.
 This can slightly slow down the run.
 
 Note that @code{NUMLABS} is automatically written by Segment in its outputs, 
so if you are feeding Segment's clump labels, you can benefit from the improved 
speed.
 Otherwise, if you are creating the clumps label dataset manually, it may be 
good to include the @code{NUMLABS} keyword in its header and also be sure that 
there is no gap in the clump labels.
 For example if an object has three clumps, they are labeled as 1, 2, 3.
-If they are labeled as 1, 3, 4, or any other combination of three positive 
integers that aren't an increment of the previous, you might get unknown 
behavior.
+If they are labeled as 1, 3, 4, or any other combination of three positive integers that are not consecutive, you might get unknown behavior.
 
 It may happen that your labeled objects image was created with a program that 
only outputs floating point files.
 However, you know it only has integer valued pixels that are stored in a 
floating point container.
@@ -22852,7 +22845,7 @@ In such cases, you can use Gnuastro's Arithmetic 
program (see @ref{Arithmetic})
 @command{$ astarithmetic float.fits int32 --output=int.fits}
 @end example
 
-To summarize: if the input file to MakeCatalog is the default/full output of 
Segment (see @ref{Segment output}) you don't have to worry about any of the 
@option{--*file} options below.
+To summarize: if the input file to MakeCatalog is the default/full output of 
Segment (see @ref{Segment output}) you do not have to worry about any of the 
@option{--*file} options below.
 You can just give Segment's output file to MakeCatalog as described in 
@ref{Invoking astmkcatalog}.
 To feed NoiseChisel's output into MakeCatalog, just change the labeled 
dataset's header (with @option{--hdu=DETECTIONS}).
 The full list of input dataset options and general setting options are 
described below.
@@ -22862,7 +22855,7 @@ The full list of input dataset options and general 
setting options are described
 @item -l FITS
 @itemx --clumpsfile=FITS
 The FITS file containing the labeled clumps dataset when @option{--clumpscat} 
is called (see @ref{MakeCatalog output}).
-When @option{--clumpscat} is called, but this option isn't, MakeCatalog will 
look into the main input file (given as an argument) for the required 
extension/HDU (value to @option{--clumpshdu}).
+When @option{--clumpscat} is called, but this option is not, MakeCatalog will 
look into the main input file (given as an argument) for the required 
extension/HDU (value to @option{--clumpshdu}).
 
 @item --clumpshdu=STR
 The HDU/extension of the clump labels dataset.
@@ -22920,7 +22913,7 @@ The dataset given to @option{--instd} (and 
@option{--stdhdu} has the Sky varianc
 Read the input STD image even if it is not required by any of the requested 
columns.
 This is because some of the output catalog's metadata may need it, for example 
to calculate the dataset's surface brightness limit (see @ref{Quantifying 
measurement limits}, configured with @option{--sfmagarea} and 
@option{--sfmagnsigma} in @ref{MakeCatalog output}).
 
-Furthermore, if the input STD image doesn't have the @code{MEDSTD} keyword 
(that is meant to contain the representative standard deviation of the full 
image), with this option, the median will be calculated and used for the 
surface brightness limit.
+Furthermore, if the input STD image does not have the @code{MEDSTD} keyword 
(that is meant to contain the representative standard deviation of the full 
image), with this option, the median will be calculated and used for the 
surface brightness limit.
 
 @item -z FLT
 @itemx --zeropoint=FLT
@@ -22968,7 +22961,7 @@ So with identical inputs, an identical upper-limit 
magnitude will be found.
 However, even if the seed is identical, when the ordering of the object/clump 
labels differs between different runs, the result of upper-limit measurements 
will not be identical.
 
 MakeCatalog will randomly place the object/clump footprint over the dataset.
-When the randomly placed footprint doesn't fall on any object or masked region 
(see @option{--upmaskfile}) it will be used in the final distribution.
+When the randomly placed footprint does not fall on any object or masked 
region (see @option{--upmaskfile}) it will be used in the final distribution.
 Otherwise that particular random position will be ignored and another random 
position will be generated.
 Finally, when the distribution has the desired number of successfully measured 
random samples (@option{--upnum}) the distribution's properties will be 
measured and placed in the catalog.
 
@@ -23512,7 +23505,7 @@ The area (number of pixels) used in the flux weighted 
position calculations.
 @item --geoarea
 The area of all the pixels labeled with an object or clump.
 Note that unlike @option{--area}, pixel values are completely ignored in this 
column.
-For example, if a pixel value is blank, it won't be counted in 
@option{--area}, but will be counted here.
+For example, if a pixel value is blank, it will not be counted in 
@option{--area}, but will be counted here.
 
 @item --geoareaxy
 Similar to @option{--geoarea}, when the clump or object is projected onto the 
first two dimensions.
@@ -23595,7 +23588,7 @@ Output will contain one row for all integers between 1 
and the largest label in
 By default, MakeCatalog's output will only contain rows with integers that 
actually corresponded to at least one pixel in the input dataset.
 
 For example if the input's only labeled pixel values are 11 and 13, 
MakeCatalog's default output will only have two rows.
-If you use this option, it will have 13 rows and all the columns corresponding 
to integer identifiers that didn't correspond to any pixel will be 0 or NaN 
(depending on context).
+If you use this option, it will have 13 rows and all the columns corresponding 
to integer identifiers that did not correspond to any pixel will be 0 or NaN 
(depending on context).
 @end table
 
 
@@ -23604,15 +23597,15 @@ If you use this option, it will have 13 rows and all 
the columns corresponding t
 @subsubsection MakeCatalog output
 After it has completed all the requested measurements (see @ref{MakeCatalog 
measurements}), MakeCatalog will store its measurements in table(s).
 If an output filename is given (see @option{--output} in @ref{Input output 
options}), the format of the table will be deduced from the name.
-When it isn't given, the input name will be appended with a @file{_cat} suffix 
(see @ref{Automatic output}) and its format will be determined from the 
@option{--tableformat} option, which is also discussed in @ref{Input output 
options}.
+When it is not given, the input name will be appended with a @file{_cat} 
suffix (see @ref{Automatic output}) and its format will be determined from the 
@option{--tableformat} option, which is also discussed in @ref{Input output 
options}.
 @option{--tableformat} is also necessary when the requested output name is a 
FITS table (recall that FITS can accept ASCII and binary tables, see 
@ref{Table}).
 
-By default (when @option{--spectrum} or @option{--clumpscat} aren't called) 
only a single catalog/table will be created for the labeled ``objects''.
+By default (when @option{--spectrum} or @option{--clumpscat} are not called) 
only a single catalog/table will be created for the labeled ``objects''.
 
 @itemize
 @item
 if @option{--clumpscat} is called, a secondary catalog/table will also be 
created for ``clumps'' (one of the outputs of the Segment program, for more on 
``objects'' and ``clumps'', see @ref{Segment}).
-In short, if you only have one labeled image, you don't have to worry about 
clumps and just ignore this.
+In short, if you only have one labeled image, you do not have to worry about 
clumps and just ignore this.
 @item
 When @option{--spectrum} is called, it is not mandatory to specify any 
single-valued measurement columns. In this case, the output will only be the 
spectra of each labeled region within a 3D datacube.
 For more, see the description of @option{--spectrum} in @ref{MakeCatalog 
measurements}.
@@ -23663,16 +23656,16 @@ However, in plain text format (see @ref{Gnuastro text 
table format}), it is only
 Therefore, if the output is a text file, two output files will be created, 
ending in @file{_o.txt} (for objects) and @file{_c.txt} (for clumps).
 
 @item --noclumpsort
-Don't sort the clumps catalog based on object ID (only relevant with 
@option{--clumpscat}).
+Do not sort the clumps catalog based on object ID (only relevant with @option{--clumpscat}).
 This option will benefit the performance@footnote{The performance boost due to 
@option{--noclumpsort} can only be felt when there are a huge number of objects.
 Therefore, by default the output is sorted to avoid misunderstandings or bugs in the user's scripts when the user forgets to sort the outputs.} of
MakeCatalog when it is run on multiple threads @emph{and} the position of the 
rows in the clumps catalog is irrelevant (for example you just want the 
number-counts).
 
 MakeCatalog does all its measurements on each @emph{object} independently and 
in parallel.
-As a result, while it is writing the measurements on each object's clumps, it 
doesn't know how many clumps there were in previous objects.
+As a result, while it is writing the measurements on each object's clumps, it 
does not know how many clumps there were in previous objects.
 Each thread will just fetch the first available row and write the information 
of clumps (in order) starting from that row.
-After all the measurements are done, by default (when this option isn't 
called), MakeCatalog will reorder/permute the clumps catalog to have both the 
object and clump ID in an ascending order.
+After all the measurements are done, by default (when this option is not 
called), MakeCatalog will reorder/permute the clumps catalog to have both the 
object and clump ID in an ascending order.
 
-If you would like to order the catalog later (when its a plain text file), you 
can run the following command to sort the rows by object ID (and clump ID 
within each object), assuming they are respectively the first and second 
columns:
+If you would like to order the catalog later (when it is a plain text file), 
you can run the following command to sort the rows by object ID (and clump ID 
within each object), assuming they are respectively the first and second 
columns:
 
 @example
 $ awk '!/^#/' out_c.txt | sort -g -k1,1 -k2,2
@@ -23760,7 +23753,7 @@ Therefore, with a single parsing of both 
simultaneously, for each A-point, we ca
 
 This method has some caveats:
 1) It requires sorting, which can again be slow on large numbers.
-2) It can only be done on a single CPU thread! So it can't benefit from the 
modern CPUs with many threads.
+2) It can only be done on a single CPU thread! So it cannot benefit from the 
modern CPUs with many threads.
 3) There is no way to preserve intermediate information for future matches, 
for example this can greatly help when one of the matched datasets is always 
the same.
 To use this sorting method in Match, use @option{--kdtree=disable}.
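
 For example, a sketch of matching two (hypothetical) catalogs with
this method:

 @example
 $ astmatch a.fits b.fits --ccol1=RA,DEC --ccol2=RA,DEC \
            --aperture=1/3600 --kdtree=disable
 @end example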
 
@@ -23802,7 +23795,7 @@ Therefore, Match first divides the range of the first 
input in all its dimension
 @end table
 
 Above, we described different ways of finding the @mymath{A_i} that is nearest 
to each @mymath{B_j}.
-But this isn't the whole matching process!
+But this is not the whole matching process!
 Let's go ahead with a ``basic'' description of what happens next...
 You may be tempted to remove @mymath{A_i} from the search of matches for 
@mymath{B_k} (where @mymath{k>j}).
 Therefore, as you go down B (and more matches are found), you have to calculate fewer distances (there are fewer elements in A that remain to be checked).
@@ -23815,7 +23808,7 @@ But it will get more complex as you add dimensionality.
 For example catalogs derived from 2D images or 3D cubes, where you have 2 and 
3 different coordinates for each point.
 
 To address this problem, in Gnuastro (the Match program, or the matching 
functions of the library) similar to above, we first parse over the elements of 
B.
-But we won't associate the first nearest-neighbor with a match!
+But we will not associate the first nearest-neighbor with a match!
 Instead, we will use an array (with the number of rows in A, let's call it 
``B-in-A'') to keep the list of all nearest element(s) in B that match each 
A-point.
 Once all the points in B are parsed, each A-point in B-in-A will (possibly) 
have a sorted list of B-points (there may be multiple B-points that fall within 
the acceptable aperture of each A-point).
 In the previous example, the @mymath{i} element (corresponding to 
@mymath{A_i}) of B-in-A will contain the following list of B-points: 
@mymath{B_j} and @mymath{B_k}.
@@ -23882,7 +23875,7 @@ $ astmatch input1.txt input2.fits --aperture=1/3600 \
 ## and numeric type), the output will contain all the rows of the
 ## first input, appended with the non-matching rows of the second
 ## input (good when you need to merge multiple catalogs that
-## may have matching items, which you don't want to repeat).
+## may have matching items, which you do not want to repeat).
 $ astmatch input1.fits input2.fits --ccol1=RA,DEC --ccol2=RA,DEC \
            --aperture=1/3600 --notmatched --outcols=_all
 
@@ -23929,7 +23922,7 @@ The output format can be changed with the following 
options:
 @item
 @option{--outcols}: The output will be a single table with rows chosen from 
either of the two inputs in any order.
 @item
-@option{--notmatched}: The output tables will contain the rows that didn't 
match between the two tables.
+@option{--notmatched}: The output tables will contain the rows that did not 
match between the two tables.
 If called with @option{--outcols}, the output will be a single table with all 
non-matched rows of both tables.
 @item
 @option{--logasoutput}: The output will be a single table with the contents of 
the log file, see below.
@@ -23949,7 +23942,7 @@ In this case, the output file (possibly given by the 
@option{--output} option) w
 
 @cartouche
 @noindent
-@strong{@option{--log} isn't thread-safe}: As described above, when 
@option{--logasoutput} is not called, the Log file has a fixed name for all 
calls to Match.
+@strong{@option{--log} is not thread-safe}: As described above, when 
@option{--logasoutput} is not called, the Log file has a fixed name for all 
calls to Match.
 Therefore if a separate log is requested in two simultaneous calls to Match in 
the same directory, Match will try to write to the same file.
 This will cause problems like an unreasonable log file, undefined behavior, or a crash.
 Remember that @option{--log} is mainly intended for debugging purposes, if you 
want the log file with a specific name, simply use @option{--logasoutput} 
(which will also be faster, since no arranging of the input columns is 
necessary).
@@ -23959,7 +23952,7 @@ Remember that @option{--log} is mainly intended for 
debugging purposes, if you w
 @item -H STR
 @itemx --hdu2=STR
 The extension/HDU of the second input if it is a FITS file.
-When it isn't a FITS file, this option's value is ignored.
+When it is not a FITS file, this option's value is ignored.
 For the first input, the common option @option{--hdu} must be used.
 
 @item -k STR
@@ -23970,16 +23963,16 @@ However, for a much more detailed discussion on 
Match's algorithms with examples
 @table @code
 @item internal
 Construct a k-d tree for the first input internally (within the same run of 
Match), and parallelize over the rows of the second to find the nearest points.
-This is the default algorithm/method used by Match (when this option isn't 
called).
+This is the default algorithm/method used by Match (when this option is not 
called).
 @item build
 Only construct a k-d tree of a single input and abort.
 The name of the k-d tree is the value given to @option{--output}.
 @item CUSTOM-FITS-FILE
-Use the given FITS file as a k-d tree (that was previously constructed with 
Match itself) of the first input, and don't construct any k-d tree internally.
+Use the given FITS file as a k-d tree (that was previously constructed with 
Match itself) of the first input, and do not construct any k-d tree internally.
 The FITS file should have two columns with an unsigned 32-bit integer data 
type and a @code{KDTROOT} keyword that contains the index of the root of the 
k-d tree.
 For more on Gnuastro's k-d tree format, see @ref{K-d tree}.
 @item disable
-Don't use the k-d tree algorithm for finding the nearest neighbor, instead, 
use the sort-based method.
+Do not use the k-d tree algorithm for finding the nearest neighbor; instead, use the sort-based method.
 @end table
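
 For example, a sketch of the @code{build} and custom-file modes
together (file names are hypothetical): construct the k-d tree of the
catalog that is always the same only once, then reuse it in later
matches:

 @example
 $ astmatch a.fits --ccol1=RA,DEC --kdtree=build \
            --output=a-kdtree.fits
 $ astmatch a.fits b.fits --ccol1=RA,DEC --ccol2=RA,DEC \
            --kdtree=a-kdtree.fits --aperture=1/3600
 @end example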
 
 @item --kdtreehdu=STR
@@ -24008,7 +24001,7 @@ $ astmatch a.fits b.fits --outcols=a_all,b5
 @end example
 
 @code{_all} can be used multiple times, possibly on both inputs.
-Tip: if an input's column is called @code{_all} (an unlikely name!) and you 
don't want all the columns from that table the output, use its column number to 
avoid confusion.
+Tip: if an input's column is called @code{_all} (an unlikely name!) and you do not want all the columns from that table in the output, use its column number to avoid confusion.
 
 Another example is given in the one-line examples above.
 Compared to the default case (where two tables, with all their columns, are saved separately), using this option is much faster: it will only read and re-arrange the necessary columns and it will write a single output table.
@@ -24016,9 +24009,9 @@ Combined with regular expressions in large tables, this 
can be a very powerful a
 
 When @option{--coord} is given, no second catalog will be read.
 The second catalog will be created internally based on the values given to 
@option{--coord}.
-So column names aren't defined and you can only request integer column numbers 
that are less than the number of coordinates given to @option{--coord}.
+So column names are not defined and you can only request integer column 
numbers that are less than the number of coordinates given to @option{--coord}.
 For example if you want to find the row matching RA of 1.2345 and Dec of 
6.7890, then you should use @option{--coord=1.2345,6.7890}.
-But when using @option{--outcols}, you can't give @code{bRA}, or @code{b25}.
+But when using @option{--outcols}, you cannot give @code{bRA}, or @code{b25}.
 
 @item With @option{--notmatched}
 Only the column names/numbers should be given (for example 
@option{--outcols=RA,DEC,MAGNITUDE}).
@@ -24048,7 +24041,7 @@ When this option is called, a separate log file will 
not be created and the outp
 @item --notmatched
 Write the non-matching rows into the outputs, not the matched ones.
 By default, this will produce two output tables, that will not necessarily 
have the same number of rows.
-However, when called with @option{--outcols}, its possible to import 
non-matching rows of the second into the first.
+However, when called with @option{--outcols}, it is possible to import 
non-matching rows of the second into the first.
 See the description of @option{--outcols} for more.
 
 @item -c INT/STR[,INT/STR]
@@ -24072,7 +24065,7 @@ When the coordinates are RA and Dec, the 
comma-separated values can either be in
 
 When this option is called, the output changes in the following ways:
 1) when @option{--outcols} is specified, for the second input, it can only 
accept integer numbers that are less than the number of values given to this 
option, see description of that option for more.
-2) By default (when @option{--outcols} isn't used), only the matching row of 
the first table will be output (a single file), not two separate files (one for 
each table).
+2) By default (when @option{--outcols} is not used), only the matching row of 
the first table will be output (a single file), not two separate files (one for 
each table).
 
 This option is good when you have a (large) catalog and only want to match a 
single coordinate to it (for example to find the nearest catalog entry to your 
desired point).
 With this option, you can write the coordinates on the command-line and thus 
avoid the need to make a single-row file.
@@ -24120,7 +24113,7 @@ If multiple matches are found within the ellipse, the 
distance (to find the near
 @end table
 
 @item 3D match
-The aperture (matching volume) can be a sphere, an ellipsoid aligned on the 
three axises or a genenral ellipsoid rotated in any direction.
+The aperture (matching volume) can be a sphere, an ellipsoid aligned on the
three axes, or a general ellipsoid rotated in any direction.
 To simplify the usage, the shape can be determined based on the number of
values given to this option.
 
 @table @asis
@@ -24254,7 +24247,7 @@ You can skip this subsection if you are already 
sufficiently familiar with these
 @cindex Axis ratio
 @cindex Position angle
 The PSF, see @ref{PSF}, and galaxy radial profiles are generally defined on an 
ellipse.
-Therefore, in this section we'll start defining an ellipse on a pixelated 2D 
surface.
+Therefore, in this section we will start by defining an ellipse on a
pixelated 2D surface.
 Labeling the major axis of an ellipse with @mymath{a}, and its minor axis
with @mymath{b}, the @emph{axis ratio} is defined as: @mymath{q\equiv b/a}.
 The major axis of an ellipse can be aligned in any direction; therefore, the
angle of the major axis with respect to the horizontal axis of the image is
defined to be the @emph{position angle} of the ellipse and in this book, we
show it with @mymath{\theta}.
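
For reference, once a point's coordinates are rotated by @mymath{\theta} into
the ellipse's own frame (giving @mymath{(i_r,j_r)}), its elliptical radius
takes a standard form (this is the @mymath{r_{el}} that later sections refer
to):

@dispmath{r_{el}=\sqrt{i_r^2+\left({j_r\over q}\right)^2}}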
 
@@ -24287,7 +24280,7 @@ In short, when a point is rotated in this order, we 
first rotate it around the Z
 Following the discussion in @ref{Merging multiple warpings}, we can define the 
full rotation with the following matrix multiplication.
 However, here we are rotating the coordinates, not the point.
 Therefore, both the rotation angles and rotation order are reversed.
-We are also not using homogeneous coordinates (see @ref{Warping basics}) since 
we aren't concerned with translation in this context:
+We are also not using homogeneous coordinates (see @ref{Warping basics}) since 
we are not concerned with translation in this context:
 
 @dispmath{\left[\matrix{i_r\cr j_r\cr k_r}\right] =
           \left[\matrix{cos\gamma&sin\gamma&0\cr -sin\gamma&cos\gamma&0\cr     
            0&0&1}\right]
@@ -24406,7 +24399,7 @@ An input image file can also be specified to be used as 
a PSF.
 If the sum of its pixels is not equal to 1, the pixels will be multiplied by
a fraction so that the sum becomes 1.
 
 Gnuastro has tools to extract the non-parametric (extended) PSF of any image 
as a FITS file (assuming there are a sufficient number of stars in it), see 
@ref{Building the extended PSF}.
-This method is not perfect (will have noise if you don't have many stars), but 
it is the actual PSF of the data that is not forced into any parametric form.
+This method is not perfect (it will have noise if you do not have many
stars), but it is the actual PSF of the data that is not forced into any
parametric form.
 @end table
 
 While the Gaussian is only dependent on the FWHM, the Moffat function is also 
dependent on @mymath{\beta}.
@@ -24414,7 +24407,7 @@ Comparing these two functions with a fixed FWHM gives 
the following results:
 
 @itemize
 @item
-Within the FWHM, the functions don't have significant differences.
+Within the FWHM, the functions do not have significant differences.
 @item
 For a fixed FWHM, as @mymath{\beta} increases, the Moffat function becomes 
sharper.
 @item
@@ -24473,7 +24466,7 @@ MacArthur et al.@footnote{MacArthur, L. A., S. 
Courteau, and J. A. Holtzman (200
 @subsubsection Sampling from a function
 
 @cindex Sampling
-A pixel is the ultimate level of accuracy to gather data, we can't get any 
more accurate in one image, this is known as sampling in signal processing.
+A pixel is the ultimate level of accuracy to gather data; we cannot get any
more accurate in one image. This is known as sampling in signal processing.
 However, the mathematical profiles which describe our models have infinite 
accuracy.
 Over a large fraction of the area of astrophysically interesting profiles (for 
example galaxies or PSFs), the variation of the profile over the area of one 
pixel is not too significant.
 In such cases, the elliptical radius (@mymath{r_{el}}) of the center of the
pixel can be assigned as the final value of the pixel (see @ref{Defining an
ellipse and ellipsoid}).
@@ -24692,7 +24685,7 @@ The value for the profile center in the catalog (the 
@option{--ccol} option) can
 Note that pixel positions in the FITS standard start from 1 and an integer is 
the pixel center.
 So a 2D image actually starts from the position (0.5, 0.5), which is the 
bottom-left corner of the first pixel.
 When a @option{--background} image with WCS information is provided, or you
specify the WCS parameters with the respective options@footnote{The options to
set the WCS are the following: @option{--crpix}, @option{--crval},
@option{--cdelt}, @option{--pc}, @option{--cunit} and @option{--ctype}.
-Just recall that these options are only used if @option{--background} is not 
given: if the image you give to @option{--background} doesn't have WCS, these 
options will not be used and you can't use WCS-mode coordinates like RA or 
Dec.}, you may also use RA and Dec to identify the center of each profile (see 
the @option{--mode} option below).
+Just recall that these options are only used if @option{--background} is not 
given: if the image you give to @option{--background} does not have WCS, these 
options will not be used and you cannot use WCS-mode coordinates like RA or 
Dec.}, you may also use RA and Dec to identify the center of each profile (see 
the @option{--mode} option below).
 
 In MakeProfiles, profile centers do not have to be in (overlap with) the final 
image.
 Even if only one pixel of the profile within the truncation radius overlaps 
with the final image size, the profile is built and included in the final image.
@@ -24704,7 +24697,7 @@ So by default, they will not be built in the output 
image but as separate files.
 The sum of pixels of these separate files will also be set to unity (1) so you 
are ready to convolve, see @ref{Convolution process}.
 As a summary, the position and magnitude of the PSF profile will be ignored.
 This behavior can be disabled with the @option{--psfinimg} option.
-If you want to create all the profiles separately (with @option{--individual}) 
and you want the sum of the PSF profile pixels to be unity, you have to set 
their magnitudes in the catalog to the zero point magnitude and be sure that 
the central positions of the profiles don't have any fractional part (the PSF 
center has to be in the center of the pixel).
+If you want to create all the profiles separately (with @option{--individual}) 
and you want the sum of the PSF profile pixels to be unity, you have to set 
their magnitudes in the catalog to the zero point magnitude and be sure that 
the central positions of the profiles do not have any fractional part (the PSF 
center has to be in the center of the pixel).
 
 The list of options directly related to the input catalog columns is shown 
below.
 
@@ -24823,7 +24816,7 @@ If @option{--tunitinp} is given, this value is 
interpreted in units of pixels (p
 @subsubsection MakeProfiles profile settings
 
 The profile parameters that differ between each created profile are specified 
through the columns in the input catalog and described in @ref{MakeProfiles 
catalog}.
-Besides those there are general settings for some profiles that don't differ 
between one profile and another, they are a property of the general process.
+Besides those, there are general settings for some profiles that do not
differ from one profile to another; they are a property of the general process.
 For example, the number of random points to use in the Monte Carlo
integration is fixed for all the profiles.
 The options described in this section are for configuring such properties.
 
@@ -24856,7 +24849,7 @@ Read it as `t-unit-in-p' for `truncation unit in 
pixels'.
 
 @item -f
 @itemx --mforflatpix
-When making fixed value profiles (flat and circumference, see 
`@option{--fcol}'), don't use the value in the column specified by 
`@option{--mcol}' as the magnitude.
+When making fixed value profiles (flat and circumference, see 
`@option{--fcol}'), do not use the value in the column specified by 
`@option{--mcol}' as the magnitude.
 Instead use it as the exact value that all the pixels of these profiles should 
have.
 This option is irrelevant for other types of profiles.
 This option is very useful for creating masks or labeled regions in an image.
@@ -24867,7 +24860,7 @@ Another useful application of this option is to create 
labeled elliptical or cir
 To do this, set the value in the magnitude column to the label you want for 
this profile.
 This labeled image can then be used in combination with NoiseChisel's output 
(see @ref{NoiseChisel output}) to do aperture photometry with MakeCatalog (see 
@ref{MakeCatalog}).
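
A minimal sketch of such a call (the catalog, sizes and output name are only
assumptions; the catalog's magnitude column holds the label values):

@example
## Each profile's pixels get the value in its magnitude
## column, because of --mforflatpix.
$ astmkprof cat.txt --mforflatpix --mergedsize=200,200 \
            --oversample=1 --output=labels.fits
@end example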
 
-Alternatively, if you want to mark regions of the image (for example with an 
elliptical circumference) and you don't want to use NaN values (as explained 
above) for some technical reason, you can get the minimum or maximum value in 
the image @footnote{
+Alternatively, if you want to mark regions of the image (for example with an
elliptical circumference) and you do not want to use NaN values (as explained
above) for some technical reason, you can get the minimum or maximum value in
the image@footnote{
 The minimum will give a better result, because the maximum can be too high 
compared to most pixels in the image, making it harder to display.}
 using Arithmetic (see @ref{Arithmetic}), then use that value in the magnitude 
column along with this option for all the profiles.
 
@@ -24882,11 +24875,11 @@ Recall that the total profile magnitude or brightness 
that is specified with in
 See @ref{Profile magnitude} for more on this point.
 
 @item --mcolnocustprof
-Don't touch (re-scale) the custom profile that should be inserted in 
@code{custom-prof} profile (see the description of @option{--fcol} in 
@ref{MakeProfiles catalog} or the description of @option{--customtable} below).
+Do not touch (re-scale) the custom profile that should be inserted in the
@code{custom-prof} profile (see the description of @option{--fcol} in
@ref{MakeProfiles catalog} or the description of @option{--customtable} below).
 By default, MakeProfiles will scale (multiply) the custom profile's pixels to
have the desired magnitude (or brightness if @option{--mcolisbrightness} is
called) in that row.
 
 @item --mcolnocustimg
-Don't touch (re-scale) the custom image that should be inserted in 
@code{custom-img} profile (see the description of @option{--fcol} in 
@ref{MakeProfiles catalog}).
+Do not touch (re-scale) the custom image that should be inserted in the
@code{custom-img} profile (see the description of @option{--fcol} in
@ref{MakeProfiles catalog}).
 By default, MakeProfiles will scale (multiply) the custom image's pixels to 
have the desired magnitude (or brightness if @option{--mcolisbrightness} is 
called) in that row.
 
 @item --magatpeak
@@ -25067,7 +25060,7 @@ The different values must be separated by a comma 
(@key{,}).
 The first value identifies the radial function of the profile, either through 
a string or through a number (see description of @option{--fcol} in 
@ref{MakeProfiles catalog}).
 Each radial profile needs a different total number of parameters: S@'ersic and 
Moffat functions need 3 parameters: radial, S@'ersic index or Moffat 
@mymath{\beta}, and truncation radius.
 The Gaussian function needs two parameters: radial and truncation radius.
-The point function doesn't need any parameters and flat and circumference 
profiles just need one parameter (truncation radius).
+The point function does not need any parameters, and the flat and
circumference profiles need just one parameter (truncation radius).
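
For example, following the parameter counts above, a Moffat kernel takes
three values (the numbers here are only an illustration):

@example
## Moffat kernel: FWHM of 3 pixels, beta of 2.8, truncated
## at 5 times the FWHM.
$ astmkprof --kernel=moffat,3,2.8,5 --oversample=1
@end example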
 
 The PSF or kernel is a unique (and highly constrained) type of profile: the 
sum of its pixels must be one, its center must be the center of the central 
pixel (in an image with an odd number of pixels on each side), and commonly it 
is circular, so its axis ratio and position angle are one and zero respectively.
 Kernels are commonly necessary for various data analysis and data
manipulation steps (for example see @ref{Convolve} and @ref{NoiseChisel}).
@@ -25086,13 +25079,13 @@ the FWHM.
 
 This option may also be used to create a 3D kernel.
 To do that, two small modifications are necessary: add a @code{-3d} (or 
@code{-3D}) to the profile name (for example @code{moffat-3d}) and add a number 
(axis-ratio along the third dimension) to the end of the parameters for all 
profiles except @code{point}.
-The main reason behind providing an axis ratio in the third dimension is that 
in 3D astronomical datasets, commonly the third dimension doesn't have the same 
nature (units/sampling) as the first and second.
+The main reason behind providing an axis ratio in the third dimension is that 
in 3D astronomical datasets, commonly the third dimension does not have the 
same nature (units/sampling) as the first and second.
 
 For example in IFU (optical) or radio datacubes, the first and second
dimensions are commonly spatial/angular positions (like RA and Dec) but the
third dimension is wavelength or frequency (in units of Angstroms or Hertz).
 Because of this different nature (which also affects the processing), it may 
be necessary for the kernel to have a different extent in that direction.
 
 If the 3rd dimension axis ratio is equal to @mymath{1.0}, then the kernel will 
be a spheroid.
-If its smaller than @mymath{1.0}, the kernel will be button-shaped: extended 
less in the third dimension.
+If it is smaller than @mymath{1.0}, the kernel will be button-shaped: extended 
less in the third dimension.
 However, when it is larger than @mymath{1.0}, the kernel will be bullet-shaped:
extended more in the third dimension.
 In the latter case, the radial parameter will correspond to the length along 
the 3rd dimension.
 For example, let's have a look at the two examples above but in 3D:
@@ -25107,7 +25100,7 @@ A spherical Gaussian kernel with FWHM of 2 pixels and 
truncated at 3 times
 the FWHM.
 @end table
 
-Ofcourse, if a specific kernel is needed that doesn't fit the constraints 
imposed by this option, you can always use a catalog to define any arbitrary 
kernel.
+Of course, if a specific kernel is needed that does not fit the constraints
imposed by this option, you can always use a catalog to define any arbitrary
kernel.
 Just call the @option{--individual} and @option{--nomerged} options to make 
sure that it is built as a separate file (individually) and no ``merged'' image 
of the input profiles is created.
 
 @item -x INT,INT
@@ -25153,7 +25146,7 @@ This is due to the FITS definition of a real number 
position: The center of a pi
 
 @item -m
 @itemx --nomerged
-Don't make a merged image.
+Do not make a merged image.
 By default after making the profiles, they are added to a final image with 
side lengths specified by @option{--mergedsize} if they overlap with it.
 
 @end table
@@ -25171,7 +25164,7 @@ You can see the FITS headers with Gnuastro's @ref{Fits} 
program using a command
 
 If the values given to any of these options do not correspond to the number
of dimensions in the output dataset, then no WCS information will be added.
 Also recall that if you use the @option{--background} option, all of these 
options are ignored.
-Such that if the image given to @option{--background} doesn't have any WCS, 
the output of MakeProfiles will also not have any WCS, even if these options 
are given@footnote{If you want to add profiles @emph{and} WCS over the 
background image (to produce your output), you need more than one command:
+So if the image given to @option{--background} does not have any WCS, the
output of MakeProfiles will also not have any WCS, even if these options are
given@footnote{If you want to add profiles @emph{and} WCS over the
background image (to produce your output), you need more than one command:
 1. You should use @option{--mergedsize} in MakeProfiles to manually set the 
output number of pixels equal to your desired background image (so the 
background is zero).
 In this mode, you can use these WCS-related options to define the WCS.
 2. Then use Arithmetic to add the pixels of your mock image to the background
(see @ref{Arithmetic}).}.
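
A rough sketch of that two-command approach (all file names and WCS values
here are hypothetical):

@example
## 1. Build the mock image on a zero background, with WCS.
$ astmkprof cat.fits --mergedsize=1000,1000 \
            --crval=53.16,-27.78 --output=mock.fits
## 2. Add it to the real background image.
$ astarithmetic back.fits mock.fits + --output=out.fits
@end example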
@@ -25198,12 +25191,12 @@ The PC matrix of the WCS rotation, see the FITS 
standard (link above) to better
 @item --cunit=STR,STR
 The units of each WCS axis, for example @code{deg}.
 Note that these values are part of the FITS standard (link above).
-MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
+MakeProfiles will not complain if you use non-standard values, but later usage 
of them might cause trouble.
 
 @item --ctype=STR,STR
 The type of each WCS axis, for example @code{RA---TAN} and @code{DEC--TAN}.
 Note that these values are part of the FITS standard (link above).
-MakeProfiles won't complain if you use non-standard values, but later usage of 
them might cause trouble.
+MakeProfiles will not complain if you use non-standard values, but later usage 
of them might cause trouble.
 
 @end table
 
@@ -25268,7 +25261,7 @@ Deep astronomical images, like those used in 
extragalactic studies, seriously su
 Generally speaking, the sources of noise in an astronomical image are photon
counting noise and instrumental noise, which are discussed in @ref{Photon
counting noise} and @ref{Instrumental noise}.
 This review finishes with @ref{Generating random numbers} which is a short 
introduction on how random numbers are generated.
 We will see that while software random number generators are not perfect, they 
allow us to obtain a reproducible series of random numbers through setting the 
random number generator function and seed value.
-Therefore in this section, we'll also discuss how you can set these two 
parameters in Gnuastro's programs (including MakeNoise).
+Therefore in this section, we will also discuss how you can set these two 
parameters in Gnuastro's programs (including MakeNoise).
 
 @menu
 * Photon counting noise::       Poisson noise
@@ -25289,7 +25282,7 @@ With the very accurate electronics used in today's 
detectors, photon counting no
 To understand this noise (error in counting) and its effect on the images of 
astronomical targets, let's start by reviewing how a distribution produced by 
counting can be modeled as a parametric function.
 
 Counting is an inherently discrete operation, which can only produce positive 
integer outputs (including zero).
-For example we can't count @mymath{3.2} or @mymath{-2} of anything.
+For example, we cannot count @mymath{3.2} or @mymath{-2} of anything.
 We only count @mymath{0}, @mymath{1}, @mymath{2}, @mymath{3} and so on.
 The distribution of values, as a result of counting efforts is formally known 
as the @url{https://en.wikipedia.org/wiki/Poisson_distribution, Poisson 
distribution}.
 It is associated with Sim@'eon Denis Poisson, because he discussed it while
working on the number of wrongful convictions in court cases in his 1837
book@footnote{[From Wikipedia] Poisson's result was also derived in a previous
study by Abraham de Moivre in 1711.
@@ -25305,7 +25298,7 @@ The probability density function of getting @mymath{k} 
counts (in each attempt,
 
 @cindex Skewed Poisson distribution
 Because the Poisson distribution is only applicable to positive integer values 
(note the factorial operator, which only applies to non-negative integers), 
naturally it is very skewed when @mymath{\lambda} is near zero.
-One qualitative way to understand this behavior is that for smaller values 
near zero, there simply aren't enough integers smaller than the mean, than 
integers that are larger.
+One qualitative way to understand this behavior is that for smaller values
near zero, there simply are not enough integers smaller than the mean,
compared to integers that are larger.
 Therefore to accommodate all possibilities/counts, it has to be strongly 
skewed to the positive when the mean is small.
 For more on Skewness, see @ref{Skewness caused by signal and its measurement}.
 
@@ -25406,7 +25399,7 @@ As discussed above, to generate noise we need to make 
random samples of a partic
 So it is important to understand some general concepts regarding the 
generation of random numbers.
 For a very complete and nice introduction we strongly advise reading Donald 
Knuth's ``The art of computer programming'', volume 2, chapter 
3@footnote{Knuth, Donald. 1998.
 The art of computer programming. Addison--Wesley. ISBN 0-201-89684-2 }.
-Quoting from the GNU Scientific Library manual, ``If you don't own it, you 
should stop reading right now, run to the nearest bookstore, and buy 
it''@footnote{For students, running to the library might be more affordable!}!
+Quoting from the GNU Scientific Library manual, ``If you do not own it, you 
should stop reading right now, run to the nearest bookstore, and buy 
it''@footnote{For students, running to the library might be more affordable!}!
 
 @cindex Pseudo-random numbers
 @cindex Numbers, pseudo-random
@@ -25425,10 +25418,10 @@ Through the two environment variables 
@code{GSL_RNG_TYPE} and @code{GSL_RNG_SEED
 
 @cindex Seed, Random number generator
 @cindex Random number generator, Seed
-If you don't specify a value for @code{GSL_RNG_TYPE}, GSL will use its default 
random number generator type.
+If you do not specify a value for @code{GSL_RNG_TYPE}, GSL will use its 
default random number generator type.
 The default type is sufficient for most general applications.
 If no value is given for the @code{GSL_RNG_SEED} environment variable and you 
have asked Gnuastro to read the seed from the environment (through the 
@option{--envseed} option), then GSL will use the default value of each 
generator to give identical outputs.
-If you don't explicitly tell Gnuastro programs to read the seed value from the 
environment variable, then they will use the system time (accurate to within a 
microsecond) to generate (apparently random) seeds.
+If you do not explicitly tell Gnuastro programs to read the seed value from 
the environment variable, then they will use the system time (accurate to 
within a microsecond) to generate (apparently random) seeds.
 In this manner, every time you run the program, you will get a different 
random number distribution.
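
For example, a fixed-seed run might look like this sketch (the program and
file names are just for illustration):

@example
## With --envseed, the seed comes from the environment, so
## re-running this command gives an identical noise realization.
$ export GSL_RNG_SEED=1593
$ astmknoise --envseed input.fits --output=noised.fits
@end example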
 
 There are two ways you can specify values for these environment variables.
@@ -25451,10 +25444,10 @@ $ export GSL_RNG_SEED=345
 @cindex @file{.bashrc}
 @noindent
 The subsequent programs which use GSL's random number generators will
henceforth use these values in this session of the terminal you are running or
while executing this script.
-In case you want to set fixed values for these parameters every time you use 
the GSL random number generator, you can add these two lines to your 
@file{.bashrc} startup script@footnote{Don't forget that if you are going to 
give your scripts (that use the GSL random number generator) to others you have 
to make sure you also tell them to set these environment variable separately.
+In case you want to set fixed values for these parameters every time you use
the GSL random number generator, you can add these two lines to your
@file{.bashrc} startup script@footnote{Do not forget that if you are going to
give your scripts (that use the GSL random number generator) to others you
have to make sure you also tell them to set these environment variables
separately.
 So for scripts, it is best to keep all such variable definitions within the 
script, even if they are within your @file{.bashrc}.}, see @ref{Installation 
directory}.
 
-@strong{IMPORTANT NOTE:} If the two environment variables @code{GSL_RNG_TYPE} 
and @code{GSL_RNG_SEED} are defined, GSL will report them by default, even if 
you don't use the @option{--envseed} option.
+@strong{IMPORTANT NOTE:} If the two environment variables @code{GSL_RNG_TYPE} 
and @code{GSL_RNG_SEED} are defined, GSL will report them by default, even if 
you do not use the @option{--envseed} option.
 For example, see this call to MakeProfiles:
 
 @example
@@ -25727,7 +25720,7 @@ more simpler and abstract form of
 
 @cindex Comoving distance
 Until now, we have assumed a static universe (not changing with time).
-But our observations so far appear to indicate that the universe is expanding 
(it isn't static).
+But our observations so far appear to indicate that the universe is expanding 
(it is not static).
 Since there is no reason to expect the observed expansion is unique to our 
particular position of the universe, we expect the universe to be expanding at 
all points with the same rate at the same time.
 Therefore, to add a time dependence to our distance measurements, we can 
include a multiplicative scaling factor, which is a function of time: 
@mymath{a(t)}.
 The functional form of @mymath{a(t)} comes from the cosmology, the physics we 
assume for it: general relativity, and the choice of whether the universe is 
uniform (`homogeneous') in density and curvature or inhomogeneous.
@@ -25735,7 +25728,7 @@ In this section, the functional form of @mymath{a(t)} 
is irrelevant, so we can a
 
 With this scaling factor, the proper distance will also depend on time.
 As the universe expands, the distance between two given points will shift to 
larger values.
-We thus define a distance measure, or coordinate, that is independent of time 
and thus doesn't `move'.
+We thus define a distance measure, or coordinate, that is independent of time 
and thus does not `move'.
 We call it the @emph{comoving distance} and display it with @mymath{\chi}
such that: @mymath{l(r,t)=\chi(r)a(t)}.
 We have therefore shifted the @mymath{r} dependence of the proper distance we
derived above for a static universe to the comoving distance:
 
@@ -26084,8 +26077,8 @@ You can get this list on the command-line with the 
@option{--listlines}.
 @node CosmicCalculator basic cosmology calculations, CosmicCalculator spectral 
line calculations, CosmicCalculator input options, Invoking astcosmiccal
 @subsubsection CosmicCalculator basic cosmology calculations
 By default, when no specific calculations are requested, CosmicCalculator
will print a complete set of all its calculations (one line for each
calculation, see @ref{Invoking astcosmiccal}).
-The full list of calculations can be useful when you don't want any specific 
value, but just a general view.
-In other contexts (for example in a batch script or during a discussion), you 
know exactly what you want and don't want to be distracted by all the extra 
information.
+The full list of calculations can be useful when you do not want any specific 
value, but just a general view.
+In other contexts (for example in a batch script or during a discussion), you 
know exactly what you want and do not want to be distracted by all the extra 
information.
 
 You can use any number of the options described below in any order.
 When any of these options are requested, CosmicCalculator's output will just 
be a single line with a single space between the (possibly) multiple values.
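
For example, this hypothetical call (assuming the @option{--age} and
@option{--volume} calculations) would print just two values on one line for
the given redshift:

@example
$ astcosmiccal -z2.5 --age --volume
@end example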
@@ -26215,7 +26208,7 @@ $ astcosmiccal --obsline=lyalpha,8000 --listlinesatz
 @end example
 
 Below you can see the printed/output calculations of CosmicCalculator that
are related to spectral lines.
-Note that @option{--obsline} is an input parameter, so its discussed (with the 
full list of known lines) in @ref{CosmicCalculator input options}.
+Note that @option{--obsline} is an input parameter, so it is discussed (with 
the full list of known lines) in @ref{CosmicCalculator input options}.
 
 @table @option
 
@@ -26248,7 +26241,7 @@ The wavelength of the specified line at the redshift 
given to CosmicCalculator.
 The line can be specified either by its name or directly as a number (its 
wavelength).
 To get the list of pre-defined names for the lines and their wavelength, you 
can use the @option{--listlines} option, see @ref{CosmicCalculator input 
options}.
 In the former case (when a name is given), the returned number is in units of 
Angstroms.
-In the latter (when a number is given), the returned value is the same units 
of the input number (assuming its a wavelength).
+In the latter (when a number is given), the returned value is in the same
units as the input number (assuming it is a wavelength).
 
 @end table
 
@@ -26271,11 +26264,11 @@ To facilitate such higher-level data analysis, 
Gnuastro also installs some scrip
 @cindex Portable shell
 @cindex Shell, portable
 Like all of Gnuastro's source code, these scripts are also heavily commented.
-They are written in portable shell scripts (command-line environments), which 
doesn't need compilation.
-Therefore, if you open the installed scripts in a text editor, you can 
actually read them@footnote{Gnuastro's installed programs (those only starting 
with @code{ast}) aren't human-readable.
+They are written as portable shell scripts (command-line environments),
which do not need compilation.
+Therefore, if you open the installed scripts in a text editor, you can 
actually read them@footnote{Gnuastro's installed programs (those only starting 
with @code{ast}) are not human-readable.
 They are written in C and need to be compiled before execution.
 Compilation optimizes the steps into the low-level hardware CPU 
instructions/language to improve efficiency.
-Because compiled programs don't need an interpreter like Bash on every run, 
they are much faster and more independent than scripts.
+Because compiled programs do not need an interpreter like Bash on every run, 
they are much faster and more independent than scripts.
 To read the source code of the programs, look into the @file{bin/progname} 
directory of Gnuastro's source (@ref{Downloading the source}).
 If you would like to read more about why C was chosen for the programs, please 
see @ref{Why C}.}.
 For example with this command (just replace @code{nano} with your favorite 
text editor, like @command{emacs} or @command{vim}):
@@ -26292,16 +26285,16 @@ These scripts also accept options and are in many 
ways similar to the programs (
 
 @itemize
 @item
-Currently they don't accept configuration files themselves.
+Currently they do not accept configuration files themselves.
 However, the configuration files of the Gnuastro programs they call are indeed 
parsed and used by those programs.
 
-As a result, they don't have the following options: @option{--checkconfig}, 
@option{--config}, @option{--lastconfig}, @option{--onlyversion}, 
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
+As a result, they do not have the following options: @option{--checkconfig}, 
@option{--config}, @option{--lastconfig}, @option{--onlyversion}, 
@option{--printparams}, @option{--setdirconf} and @option{--setusrconf}.
 
 @item
-They don't directly allocate any memory, so there is no @option{--minmapsize}.
+They do not directly allocate any memory, so there is no @option{--minmapsize}.
 
 @item
-They don't have an independent @option{--usage} option: when called with 
@option{--usage}, they just recommend running @option{--help}.
+They do not have an independent @option{--usage} option: when called with 
@option{--usage}, they just recommend running @option{--help}.
 
 @item
 The output of @option{--help} is not configurable like the programs (see 
@ref{--help}).
@@ -26384,7 +26377,7 @@ The output of AWK is then stored in the @code{modepref} 
shell variable which you
 
 With the solution above, the increment of the file counter for each night will 
be independent of the mode.
 If you want the counter to be mode-dependent, you can add a different counter 
for each mode and use that counter instead of the generic counter for each 
night (based on the value of @code{modepref}).
-But we'll leave the implementation of this step to you as an exercise.
+But we will leave the implementation of this step to you as an exercise.
 
 @end itemize
 
@@ -26499,7 +26492,7 @@ This option cannot be used with @option{--link}.
 @itemx --prefix=STR
 Prefix to append before the night-identifier of each newly created link or 
copy.
 This option is thus only relevant with the @option{--copy} or @option{--link} 
options.
-See the description of @option{--link} for how its used.
+See the description of @option{--link} for how it is used.
 For example, with @option{--prefix=img-}, all the created file names in the 
current directory will start with @code{img-}, making outputs like 
@file{img-n1-1.fits} or @file{img-n3-42.fits}.
 
 @option{--prefix} can also be used to store the links/copies in another
directory relative to the directory this script is being run in (it must
already exist).
@@ -26618,7 +26611,7 @@ The output radial profile is a table (FITS or 
plain-text) containing the radial
 To measure the radial profile, this script needs to generate temporary files.
 All these temporary files will be created within the directory given to the 
@option{--tmpdir} option.
 When @option{--tmpdir} is not called, a temporary directory (with a name based 
on the inputs) will be created in the running directory.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 After the output is created, this script will delete the directory by default, 
unless you call the @option{--keeptmp} option.
 
 With the default options, the script will generate a circular radial profile 
using the mean value and centered at the center of the image.
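
So the most minimal call can be as simple as this sketch (the image and
output names are assumed):

@example
## Circular, mean-based radial profile around the image center.
$ astscript-radial-profile image.fits --output=profile.fits
@end example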
@@ -26633,7 +26626,7 @@ To do this, use @option{--keeptmp} to keep the 
temporary files, and compare @fil
 
 @cartouche
 @noindent
-@strong{Finding properties of your elliptical target: } you want to measure 
the radial profile of a galaxy, but don't know its exact location, position 
angle or axis ratio.
+@strong{Finding properties of your elliptical target:} You want to measure
the radial profile of a galaxy, but do not know its exact location, position
angle or axis ratio.
 To obtain these values, you can use @ref{NoiseChisel} to detect signal in the 
image, feed it to @ref{Segment} to do basic segmentation, then use 
@ref{MakeCatalog} to measure the center (@option{--x} and @option{--y} in 
MakeCatalog), axis ratio (@option{--axisratio}) and position angle 
(@option{--positionangle}).
 @end cartouche
 
@@ -26643,7 +26636,7 @@ To obtain these values, you can use @ref{NoiseChisel} 
to detect signal in the im
 A crude solution is to use sigma-clipped measurements for the profile.
 However, sigma-clipped measurements can easily be biased when the number of 
sources at each radial distance increases at larger distances.
 Therefore a robust solution is to mask all other detections within the image.
-You can use @ref{NoiseChisel} and @ref{Segment} to detect and segment the 
sources, then set all pixels that don't belong to your target to blank using 
@ref{Arithmetic} (in particular, its @code{where} operator).
+You can use @ref{NoiseChisel} and @ref{Segment} to detect and segment the 
sources, then set all pixels that do not belong to your target to blank using 
@ref{Arithmetic} (in particular, its @code{where} operator).
 @end cartouche
 
 @table @option
@@ -26679,7 +26672,7 @@ By default, the radial profile will be computed up to a 
radial distance equal to
 @item -Q FLT
 @itemx --axisratio=FLT
 The axis ratio of the apertures (minor axis divided by the major axis in a 2D 
ellipse).
-By default (when this option isn't given), the radial profile will be circular 
(axis ratio of 1).
+By default (when this option is not given), the radial profile will be 
circular (axis ratio of 1).
 This parameter is used as the option @option{--qcol} in the generation of the 
apertures with @command{astmkprof}.
 
 @item -p FLT
@@ -26730,14 +26723,14 @@ For other measurements like the magnitude or surface 
brightness, MakeCatalog wil
 
 For example, by setting @option{--measure=mean,sigclip-mean --measure=median}, 
the mean, sigma-clipped mean and median values will be computed.
 The output radial profile will have 4 columns in this order: radial distance, 
mean, sigma-clipped and median.
-By default (when this option isn't given), the mean of all pixels at each 
radial position will be computed.
+By default (when this option is not given), the mean of all pixels at each 
radial position will be computed.
 
 @item -s FLT,FLT
 @itemx --sigmaclip=FLT,FLT
 Sigma clipping parameters: only relevant if sigma-clipping operators are 
requested by @option{--measure}.
 For more on sigma-clipping, see @ref{Sigma clipping}.
 If given, the value to this option is directly passed to the 
@option{--sigmaclip} option of @ref{MakeCatalog}, see @ref{MakeCatalog inputs 
and basic settings}.
-By default (when this option isn't given), the default values within 
MakeCatalog will be used.
+By default (when this option is not given), the default values within 
MakeCatalog will be used.
 To see the default value of this option in MakeCatalog, you can run this 
command:
 
 @example
@@ -26773,7 +26766,7 @@ This option is good to measure over a larger number of 
pixels to improve the mea
 @itemx --instd=FLT/STR
 Sky standard deviation as a single number (FLT) or as the filename (STR) 
containing the image with the std value for each pixel (the HDU within the file 
should be given to the @option{--stdhdu} option mentioned below).
 This is only necessary when the requested measurement (value given to 
@option{--measure}) by MakeCatalog needs the Standard deviation (for example 
the signal-to-noise ratio or magnitude error).
-If your measurements don't require a standard deviation, it is best to ignore 
this option (because it will slow down the script).
+If your measurements do not require a standard deviation, it is best to ignore 
this option (because it will slow down the script).
 
 @item -d INT/STR
 @itemx --stdhdu=INT/STR
@@ -26784,14 +26777,14 @@ HDU/extension of the sky standard deviation image 
specified with @option{--instd
 Several intermediate files are necessary to obtain the radial profile.
 All of these temporary files are saved into a temporary directory.
 With this option, you can directly specify this directory.
-By default (when this option isn't called), it will be built in the running 
directory and given an input-based name.
-If the directory doesn't exist at run-time, this script will create it.
+By default (when this option is not called), it will be built in the running 
directory and given an input-based name.
+If the directory does not exist at run-time, this script will create it.
 Once the radial profile has been obtained, this directory is removed.
 You can disable the deletion of the temporary directory with the 
@option{--keeptmp} option.
 
 @item -k
 @itemx --keeptmp
-Don't delete the temporary directory (see description of @option{--tmpdir} 
above).
+Do not delete the temporary directory (see the description of
@option{--tmpdir} above).
 This option is useful for debugging.
 For example, to check that the profiles generated for obtaining the radial 
profile have the desired center, shape and orientation.
 @end table
@@ -26862,7 +26855,7 @@ Only the @option{--column} option is mandatory (to 
specify the input table colum
 You can optionally also specify the radius, width and color of the regions
with the @option{--radius}, @option{--width} and @option{--color} options;
otherwise, default values will be used for these (described under each
option).
 
 The created region file will be written into the file name given to 
@option{--output}.
-When @option{--output} isn't called, the default name of @file{ds9.reg} will 
be used (in the running directory).
+When @option{--output} is not called, the default name of @file{ds9.reg} will 
be used (in the running directory).
 If the file exists before calling this script, it will be overwritten, unless 
you pass the @option{--dontdelete} option.
 Optionally you can also use the @option{--command} option to give the full 
command that should be run to execute SAO DS9 (see example above and 
description below).
 In this mode, the created region file will be deleted once DS9 is closed 
(unless you pass the @option{--dontdelete} option).
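
As a hypothetical example (the table, column and image names are
assumptions), the following call builds the regions and immediately opens
them over an image:

@example
$ astscript-ds9-region cat.fits --column=RA,DEC \
           --command="ds9 image.fits"
@end example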
@@ -26894,7 +26887,7 @@ The coordinate system of the positional columns (can be 
either @option{--mode=wc
 In the WCS mode, the values within the columns are interpreted to be RA and 
Dec.
 In the image mode, they are interpreted to be pixel X and Y positions.
 This option also affects the interpretation of the value given to 
@option{--radius}.
-When this option isn't explicitly given, the columns are assumed to be in WCS 
mode.
+When this option is not explicitly given, the columns are assumed to be in WCS 
mode.
 
 @item -C STR
 @itemx --color=STR
@@ -26916,9 +26909,9 @@ In WCS mode, the radius is assumed to be in 
arc-seconds, in image mode, it is in
 If this option is not explicitly given, in WCS mode the default radius is 1
arc-second and in image mode it is 3 pixels.
 
 @item --dontdelete
-If the output file name exists, abort the program and don't over-write the 
contents of the file.
+If the output file name exists, abort the program and do not over-write the 
contents of the file.
 This option is thus good if you want to avoid accidentally writing over an 
important file.
-Also, don't delete the created region file when @option{--command} is given 
(by default, when @option{--command} is given, the created region file will be 
deleted after SAO DS9 closes).
+Also, do not delete the created region file when @option{--command} is given 
(by default, when @option{--command} is given, the created region file will be 
deleted after SAO DS9 closes).
 
 @item -o STR
 @itemx --output=STR
@@ -27041,7 +27034,7 @@ To activate this feature take the following steps:
 @enumerate
 @item
 Run the following command, while replacing @code{PREFIX}.
-If you don't know what to put in @code{PREFIX}, run @command{which astfits} on 
the command-line, and extract @code{PREFIX} from the output (the string before 
@file{/bin/astfits}).
+If you do not know what to put in @code{PREFIX}, run @command{which astfits} 
on the command-line, and extract @code{PREFIX} from the output (the string 
before @file{/bin/astfits}).
 For more, see @ref{Installation directory}.
 @example
 ln -sf PREFIX/share/gnuastro/astscript-fits-view.desktop \
@@ -27067,14 +27060,14 @@ The value can be the HDU name or number (the first 
HDU is counted from 0).
 @itemx --prefix=STR
 Directory to search for SAO DS9 or TOPCAT's executables (assumed to be 
@command{ds9} and @command{topcat}).
 If not called they will be assumed to be present in your @file{PATH} (see 
@ref{Installation directory}).
-If you don't have them already installed, their installation directories are 
available in @ref{SAO DS9} and @ref{TOPCAT} (they can be installed in 
non-system-wide locations that don't require administrator/root permissions).
+If you do not have them already installed, their installation directories are 
available in @ref{SAO DS9} and @ref{TOPCAT} (they can be installed in 
non-system-wide locations that do not require administrator/root permissions).
 
 @item -s STR
 @itemx --ds9scale=STR
 The string to give to DS9's @option{-scale} option.
 You can use this option to use a different scaling.
 The Fits-view script will place @option{-scale} before your given string when 
calling DS9.
-If you don't call this option, the default behavior is to cal DS9 with: 
@option{-scale mode zscale} or @option{--ds9scale="mode zscale"} when using 
this script.
+If you do not call this option, the default behavior is to call DS9 with
@option{-scale mode zscale}, or @option{--ds9scale="mode zscale"} when using
this script.
 
 The Fits-view script has the following aliases to simplify the calling of this 
option (and avoid the double-quotations and @code{mode} in the example above):
 
@@ -27102,7 +27095,7 @@ The initial DS9 window geometry (value to DS9's 
@option{-geometry} option).
 
 @item -m
 @itemx --ds9colorbarmulti
-Don't show a single color bar for all the loaded images.
+Do not show a single color bar for all the loaded images.
 By default this script will call DS9 in a way that a single color bar is shown 
for any number of images.
 A single color bar is preferred for two reasons: 1) when there are a lot of
images, they consume a large fraction of the display area. 2) the color-bars
are locked by this script, so there is no difference between them!
 With this option, you can have separate color bars under each image.
@@ -27158,9 +27151,9 @@ Predicting all the possible optimizations for all the 
possible usage scenarios w
 Therefore, following the modularity principle of software engineering, after 
several years of working on this, we have broken the full job into the smallest 
number of independent steps as separate scripts.
 All scripts are independent of each other, meaning that you are free to use
all of them as you wish (for example only some of them, using another program
for a certain step, using them for other purposes, or running independent
parts in parallel).
 
-For constructing the PSF from your dataset, the first step is to obtain a 
catalog of stars within it (you can't use galaxies to build the PSF!).
-But you can't blindly use all the stars either!
-For example we don't want contamination from other bright, and nearby objects.
+For constructing the PSF from your dataset, the first step is to obtain a 
catalog of stars within it (you cannot use galaxies to build the PSF!).
+But you cannot blindly use all the stars either!
+For example, we do not want contamination from other bright and nearby
objects.
 The first script below is therefore designed for selecting only good star 
candidates in your image.
 It will use different criteria, for example good parallax (where available, to 
avoid confusion with galaxies), not being near to bright stars, axis ratio, etc.
 For more on this script, see @ref{Invoking astscript-psf-select-stars}.
@@ -27196,7 +27189,7 @@ For more on the scaling and positioning script, see 
@ref{Invoking astscript-psf-
 
 As mentioned above, in the following sections, each script has its own 
documentation and list of options for very detailed customization (if 
necessary).
 But if you are new to these scripts, before continuing, we recommend that you 
do the tutorial @ref{Building the extended PSF}.
-Just don't forget to run every command, and try to tweak its steps based on 
the logic to nicely understand it.
+Just do not forget to run every command, and try tweaking its steps, to
properly understand the logic.
 
 
 
@@ -27307,7 +27300,7 @@ For fainter stars (when constructing the center of the 
PSF), you should decrease
 @itemx --brightmag=INT
 The brightest star magnitude to avoid (should be brighter than the brightest 
of @option{--magnituderange}).
 The basic idea is this: if a user asks for stars with magnitude 6 to 10 and 
one of those stars is near a magnitude 3 star, that star (with a magnitude of 6 
to 10) should be rejected because it is contaminated.
-But since the catalog is constrained to stars of magnitudes 6-10, the star 
with magnitude 3 is not present and can not be compared with!
+But since the catalog is constrained to stars of magnitudes 6-10, the star 
with magnitude 3 is not present and cannot be compared with!
 Therefore, when considering proximity to nearby stars, it is important to use 
a larger magnitude range than the user's requested magnitude range for good 
stars.
 The acceptable proximity is defined by @option{--mindistdeg}.
 
@@ -27335,7 +27328,7 @@ Recall that the axis ratio is only used when you also 
give a segmented image wit
 @item -t
 @itemx --tmpdir
 Directory to keep temporary files during the execution of the script.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 By default, upon completion of the script, this directory will be deleted.
 However, if you would like to keep the intermediate files, you can use the 
@option{--keeptmp} option.
 
@@ -27409,7 +27402,7 @@ If no normalization ring is considered, the output will 
not be normalized.
 
 In the following cases, this script will produce a fully NaN-valued stamp (of 
the size given to @option{--widthinpix}).
 A fully NaN image can safely be used with the stacking operators of
Arithmetic (see @ref{Stacking operators}) because such images will be ignored.
-In case you don't want to waste storage with fully NaN images, you can 
compress them with @code{gzip --best output.fits}, and give the resulting 
@file{.fits.gz} file to Arithmetic.
+In case you do not want to waste storage with fully NaN images, you can 
compress them with @code{gzip --best output.fits}, and give the resulting 
@file{.fits.gz} file to Arithmetic.
 @itemize
 @item
 The requested box (center coordinate with desired width) is not within the 
input image at all.
@@ -27450,7 +27443,7 @@ But for the outer parts it is not too effective, so to 
avoid wasting time for th
 
 @item -d
 @itemx --nocentering
-Don't do the sub-pixel centering to a new pixel grid.
+Do Not do the sub-pixel centering to a new pixel grid.
 See the description of the @option{--center} option for more.
 
 @item -W INT
@@ -27507,7 +27500,7 @@ If no normalization is done, then the value is set to 
@code{1.0}.
 @item -Q FLT
 @itemx --axisratio=FLT
 The axis ratio of the radial profiles for computing the normalization value.
-By default (when this option isn't given), the radial profile will be circular 
(axis ratio of 1).
+By default (when this option is not given), the radial profile will be 
circular (axis ratio of 1).
 This parameter is used directly in the @file{astscript-radial-profile} script.
 
 @item -p FLT
@@ -27524,7 +27517,7 @@ For more on sigma-clipping, see @ref{Sigma clipping}.
 @item -t
 @itemx --tmpdir
 Directory to keep temporary files during the execution of the script.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 By default, upon completion of the script, this directory will be deleted.
 However, if you would like to keep the intermediate files, you can use the 
@option{--keeptmp} option.
 
@@ -27631,7 +27624,7 @@ Similar to @option{--axisratio} (see above).
 @item -t
 @itemx --tmpdir
 Directory to keep temporary files during the execution of the script.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 By default, upon completion of the script, this directory will be deleted.
 However, if you would like to keep the intermediate files, you can use the 
@option{--keeptmp} option.
 
@@ -27767,7 +27760,7 @@ For more on sigma-clipping, see @ref{Sigma clipping}.
 @item -t
 @itemx --tmpdir
 Directory to keep temporary files during the execution of the script.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 By default, upon completion of the script, this directory will be deleted.
 However, if you would like to keep the intermediate files, you can use the 
@option{--keeptmp} option.
 
@@ -27858,7 +27851,7 @@ The positions along each dimension must be separated by 
a comma (@key{,}).
 The number of values given to this option must be the same as the dimensions 
of the input dataset.
 The units of the coordinates are read based on the value to the 
@option{--mode} option, see above.
 
-If the central position doesn't fall in the center of a pixel in the input 
image, the PSF is re-sampled with sub-pixel change in the pixel grid before 
subtraction.
+If the central position does not fall in the center of a pixel in the input
image, the PSF is re-sampled with a sub-pixel shift of the pixel grid before
subtraction.
 
 @item -s FLT
 @itemx --scale=FLT
@@ -27870,7 +27863,7 @@ For a full tutorial on using the 
@command{astscript-psf-*} scripts together, see
 @item -t
 @itemx --tmpdir
 Directory to keep temporary files during the execution of the script.
-If the directory doesn't exist at run-time, this script will create it.
+If the directory does not exist at run-time, this script will create it.
 By default, upon completion of the script, this directory will be deleted.
 However, if you would like to keep the intermediate files, you can use the 
@option{--keeptmp} option.
 
@@ -27881,7 +27874,7 @@ This option is useful for debugging and checking the 
outputs of internal steps.
 
 @item -m
 @itemx --modelonly
-Don't make the subtraction of the modeled star by the PSF.
+Do not subtract the star that is modeled with the PSF.
 This option is useful when the user wants to obtain the scattered light field 
given by the PSF modeled star.
 @end table
 
@@ -27905,7 +27898,7 @@ For the details of this feature from GNU Make's own 
manual, see its @url{https:/
 Through this feature, Gnuastro provides additional Make functions that are 
useful in the context of data analysis.
 
 To use this feature, Gnuastro has to be built in shared library mode.
-Gnuastro's Make extensions won't work if you build Gnuastro without shared 
libraries (for example when you configure Gnuastro with 
@option{--disable-shared} or @option{--debug}).
+Gnuastro's Make extensions will not work if you build Gnuastro without shared 
libraries (for example when you configure Gnuastro with 
@option{--disable-shared} or @option{--debug}).
 
 @menu
 * Loading the Gnuastro Make functions::  How to find and load Gnuastro's Make 
library.
@@ -27927,14 +27920,14 @@ load /PATH/TO/lib/libgnuastro_make.so
 Here are the possible replacements of the @file{/PATH/TO} component:
 @table @file
 @item /usr/local
-If you installed Gnuastro from source and didn't use the @option{--prefix} 
option at configuration time, you should use this base directory.
+If you installed Gnuastro from source and did not use the @option{--prefix} 
option at configuration time, you should use this base directory.
 @item /usr/
 If you installed Gnuastro through your operating system's package manager, it 
is highly likely that Gnuastro's library is here.
 @item ~/.local
 If you installed Gnuastro from source, but used @option{--prefix} to install 
Gnuastro in your home directory (as described in @ref{Installation directory}).
 @end table
 
-If you can't find @file{libgnuastro_make.so} in the locations above, the 
command below should give you its location.
+If you cannot find @file{libgnuastro_make.so} in the locations above, the 
command below should give you its location.
 It assumes that the libraries are in the same base directory as the programs 
(which is usually the case).
 
 @example
@@ -27954,7 +27947,7 @@ The Make functions in Gnuastro have been recently added 
(in August 2022), and wi
 @noindent
 @strong{Difference between `@code{=}' and `@code{:=}' for variable
definition:} When you define a variable with @code{=}, its value is expanded
only when used, not when defined.
 However, when you use @code{:=}, it is immediately expanded.
-If your value doesn't contain functions, there is no difference between the 
two.
+If your value does not contain functions, there is no difference between the 
two.
 However, if you have functions in the value of a variable, it is better to 
expand the value immediately with @code{:=}.
 Otherwise, each time the value is used, the function is expanded (possibly
many times) and this will reduce the speed of your pipeline.
 For more, see the 
@url{https://www.gnu.org/software/make/manual/html_node/Flavors.html, The two 
flavors of variables} in the GNU Make manual.
@@ -27989,7 +27982,7 @@ files := $(wildcard /datasets/images/*.fits)
 keyvalues := $(ast-fits-unique-keyvalues $(files), 1, FOO)
 @end example
 
-This is useful when you don't know the full range of values a-priori.
+This is useful when you do not know the full range of values a priori.
 For example, let's assume that you are looking at a night's observations with 
a telescope and the purpose of the FITS image is written in the @code{OBJECT} 
keyword of the image (which we can assume is in HDU number 1).
 This keyword can have the name of the various science targets (for example 
@code{NGC123} and @code{M31}) and calibration targets (for example @code{BIAS} 
and @code{FLAT}).
 The list of science targets is different from project to project, such that in 
one night, you can observe multiple projects.
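+
+For example, the sketch below (the file names and stacking rule are only
+illustrative) would create one stacking target for every unique @code{OBJECT}
+value of that night:
+
+@example
+objects := $(ast-fits-unique-keyvalues $(wildcard night/*.fits), 1, OBJECT)
+all: $(foreach obj,$(objects),stack-$(obj).fits)
+@end example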
@@ -28029,8 +28022,8 @@ By communication, we mean that control is directly 
passed to a program from the
 For programs written in C and C++, the unique @code{main} function is in 
charge of this communication.
 
 Similar to a program, a library is also a collection of functions that is 
compiled into one executable file.
-However, unlike programs, libraries don't have a @code{main} function.
-Therefore they can't communicate directly with the outside world.
+However, unlike programs, libraries do not have a @code{main} function.
+Therefore they cannot communicate directly with the outside world.
 This gives you the chance to write your own @code{main} function and call 
library functions from within it.
 After compiling your program into a binary executable, you just have to 
@emph{link} it to the library and you are ready to run (execute) your program.
 In this way, you can use Gnuastro at a much lower-level, and in combination 
with other libraries on your system, you can significantly boost your 
creativity.
@@ -28104,21 +28097,21 @@ sum(double a, double b);
 
 @noindent
 You can think of a function's declaration as a building's address in the city, 
and the definition as the building's complete blueprints.
-When the compiler confronts a call to a function during its processing, it 
doesn't need to know anything about how the inputs are processed to generate 
the output.
-Just as the postman doesn't need to know the inner structure of a building 
when delivering the mail.
+When the compiler confronts a call to a function during its processing, it 
does not need to know anything about how the inputs are processed to generate 
the output.
+Just as the postman does not need to know the inner structure of a building 
when delivering the mail.
 The declaration (address) is enough.
-Therefore by @emph{declaring} the functions once at the start of the source 
files, we don't have to worry about @emph{defining} them after they are used.
+Therefore by @emph{declaring} the functions once at the start of the source 
files, we do not have to worry about @emph{defining} them after they are used.
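+
+As a minimal sketch of this convention (using the @code{sum} function of this
+section; in a real project, the definition would usually be in another source
+file):
+
+@example
+#include <stdio.h>
+
+double sum(double a, double b);     /* Declaration: the ``address''.    */
+
+int
+main(void)
+@{
+  printf("%g\n", sum(2.0, 3.0));    /* Call before the definition.      */
+  return 0;
+@}
+
+double                              /* Definition: the ``blueprints''.  */
+sum(double a, double b)
+@{ return a+b; @}
+@end example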
 
 Even for a simple real-world operation (not a simple summation like above!), 
you will soon need many functions (for example, some for reading/preparing the 
inputs, some for the processing, and some for preparing the output).
 Although it is technically possible, managing all the necessary functions in 
one file is not easy and is contrary to the modularity principle (see 
@ref{Review of library fundamentals}), for example the functions for preparing 
the input can be usable in your other projects with a different processing.
-Therefore, as we will see later (in @ref{Linking}), the functions don't 
necessarily need to be defined in the source file where they are used.
+Therefore, as we will see later (in @ref{Linking}), the functions do not 
necessarily need to be defined in the source file where they are used.
 As long as their definitions are ultimately linked to the final executable, 
everything will be fine.
 For now, it is just important to remember that the functions that are called 
within one source file must be declared within the source file (declarations 
are mandatory), but not necessarily defined there.
 
 In the spirit of modularity, it is common to define contextually similar 
functions in one source file.
 For example, in Gnuastro, functions that calculate the median, mean and other 
statistical functions are defined in @file{lib/statistics.c}, while functions 
that deal directly with FITS files are defined in @file{lib/fits.c}.
 
-Keeping the definition of similar functions in a separate file greatly helps 
their management and modularity, but this fact alone doesn't make things much 
easier for the caller's source code: recall that while definitions are 
optional, declarations are mandatory.
+Keeping the definition of similar functions in a separate file greatly helps 
their management and modularity, but this fact alone does not make things much 
easier for the caller's source code: recall that while definitions are 
optional, declarations are mandatory.
 So if this was all, the caller would have to manually copy and paste 
(@emph{include}) all the declarations from the various source files into the 
file they are working on now.
 To address this problem, programmers have adopted the header file convention: 
the header file of a source code contains all the declarations that a caller 
would need to be able to use any of its functions.
 For example, in Gnuastro, @file{lib/statistics.c} (file containing function 
definitions) comes with @file{lib/gnuastro/statistics.h} (only containing 
function declarations).
@@ -28133,7 +28126,7 @@ So the header file also contains their definitions or 
declarations when they are
 @cindex Pre-processor macros
 Pre-processor macros (or macros for short) are replaced with their defined 
value by the pre-processor before compilation.
 Conventionally they are written only in capital letters to be easily 
recognized.
-It is just important to understand that the compiler doesn't see the macros, 
it sees their fixed values.
+It is just important to understand that the compiler does not see the macros; 
it sees their fixed values.
 So when a header specifies macros you can do your programming without worrying 
about the actual values.
 The standard C types (for example @code{int}, or @code{float}) are very 
low-level and basic.
 We can collect multiple C types into a @emph{structure} for a higher-level way 
to keep and pass-along data.
@@ -28144,7 +28137,7 @@ As the name suggests, the @emph{pre-processor} goes 
through the source code prio
 One of its jobs is to include, or merge, the contents of files that are 
mentioned with this directive in the source code.
 Therefore the compiler sees a single entity containing the contents of the 
main file and all the included files.
 This allows you to include many (sometimes thousands of) declarations into 
your code with only one line.
-Since the headers are also installed with the library into your system, you 
don't even need to keep a copy of them for each separate program, making things 
even more convenient.
+Since the headers are also installed with the library into your system, you do 
not even need to keep a copy of them for each separate program, making things 
even more convenient.
 
 Try opening some of the @file{.c} files in Gnuastro's @file{lib/} directory 
with a text editor to check out the include directives at the start of the file 
(after the copyright notice).
 Let's take @file{lib/fits.c} as an example.
@@ -28152,7 +28145,7 @@ You will notice that Gnuastro's header files (like 
@file{gnuastro/fits.h}) are i
 You will notice that files like @file{stdio.h}, or @file{string.h} are not in 
this directory (or anywhere within Gnuastro).
 
 On most systems the basic C header files (like @file{stdio.h} and 
@file{string.h} mentioned above) are located in 
@file{/usr/include/}@footnote{The @file{include/} directory name is taken from 
the pre-processor's @code{#include} directive, which is also the motivation 
behind the `I' in the @option{-I} option to the pre-processor.}.
-Your compiler is configured to automatically search that directory (and 
possibly others), so you don't have to explicitly mention these directories.
+Your compiler is configured to automatically search that directory (and 
possibly others), so you do not have to explicitly mention these directories.
 Go ahead, look into the @file{/usr/include} directory and find @file{stdio.h} 
for example.
 When the necessary header files are not in those specific directories, the 
pre-processor can also search in places other than the current directory.
 You can specify those directories with this pre-processor option@footnote{Try 
running Gnuastro's @command{make} and find the directories given to the 
compiler with the @option{-I} option.}:
@@ -28165,8 +28158,8 @@ If the directory @file{DIR} is a standard system 
include directory, the option i
 Note that the space between @key{I} and the directory is optional and commonly 
not used.
 @end table
 
-If the pre-processor can't find the included files, it will abort with an 
error.
-In fact a common error when building programs that depend on a library is that 
the compiler doesn't not know where a library's header is (see @ref{Known 
issues}).
+If the pre-processor cannot find the included files, it will abort with an 
error.
+In fact, a common error when building programs that depend on a library is 
that the compiler does not know where a library's header is (see @ref{Known 
issues}).
 So you have to manually tell the compiler where to look for the library's 
headers with the @option{-I} option.
 For a small program with one or two source files, this can be done manually 
(see @ref{Summary and example on libraries}).
 However, to enhance modularity, Gnuastro (and most other programs/libraries) 
contains many source files, so the compiler is invoked many 
times@footnote{Nearly every command you see being executed after running 
@command{make} is one call to the compiler.}.
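+
+For example, a hypothetical @file{myprogram.c} that includes headers from a
+manually installed library could be compiled like this (the directory is only
+an illustration):
+
+@example
+$ gcc -I/opt/mylib/include -c myprogram.c
+@end example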
@@ -28197,7 +28190,7 @@ Not all libraries need to follow this convention, for 
example CFITSIO only has o
 @cindex GNU Libtool
 To enhance modularity, similar functions are defined in one source file (with 
a @file{.c} suffix, see @ref{Headers} for more).
 After running @command{make}, each human-readable, @file{.c} file is 
translated (or compiled) into a computer-readable ``object'' file (ending with 
@file{.o}).
-Note that object files are also created when building programs, they aren't 
particular to libraries.
+Note that object files are also created when building programs; they are not 
particular to libraries.
 Try opening Gnuastro's @file{lib/} and @file{bin/progname/} directories after 
running @command{make} to see these object files@footnote{Gnuastro uses GNU 
Libtool for portable library creation.
 Libtool will also make a @file{.lo} file for each @file{.c} file when building 
libraries (@file{.lo} files are human-readable).}.
 Afterwards, the object files are @emph{linked} together to create an 
executable program or a library.
@@ -28213,7 +28206,7 @@ $ nm bin/arithmetic/arithmetic.o
 
 @noindent
 This will print a list of all the functions (more generally, `symbols') that 
were called within @file{bin/arithmetic/arithmetic.c} along with some further 
information (for example a @code{T} in the second column shows that this 
function is actually defined here, while @code{U} says that it is undefined 
here).
-Try opening the @file{.c} file to check some of these functions for your self. 
Run @command{info nm} for more information.
+Try opening the @file{.c} file to check some of these functions for yourself. 
Run @command{info nm} for more information.
 
 @cindex Linking
 To recap, the @emph{compiler} created the separate object files mentioned 
above for each @file{.c} file.
@@ -28228,7 +28221,7 @@ There are two ways to @emph{link} all the necessary 
symbols: static and dynamic/
 When the symbols (computer-readable function definitions in most cases) are 
copied into the output, it is called @emph{static} linking.
 When the symbols are kept in their original file and only a reference to them 
is kept in the executable, it is called @emph{dynamic}, or @emph{shared} 
linking.
 
-Let's have a closer look at the executable to understand this better: we'll 
assume you have built Gnuastro without any customization and installed Gnuastro 
into the default @file{/usr/local/} directory (see @ref{Installation 
directory}).
+Let's have a closer look at the executable to understand this better: we will 
assume you have built Gnuastro without any customization and installed Gnuastro 
into the default @file{/usr/local/} directory (see @ref{Installation 
directory}).
 If you tried the @command{nm} command on one of Arithmetic's object files 
above, then with the command below you can confirm that all the functions that 
were defined in the object file above (had a @code{T} in the second column) are 
also defined in the @file{astarithmetic} executable:
 
 @example
@@ -28269,7 +28262,7 @@ $ configure --disable-shared
 Try configuring Gnuastro with the command above, then build and install it (as 
described in @ref{Quick start}).
 Afterwards, check the @code{gal_} symbols in the installed Arithmetic 
executable like before.
 You will see that they are actually copied this time (have a @code{T} in the 
second column).
-If the second column doesn't convince you, look at the executable file size 
with the following command:
+If the second column does not convince you, look at the executable file size 
with the following command:
 
 @example
 $ ls -lh /usr/local/bin/astarithmetic
@@ -28280,7 +28273,7 @@ It should be around 4.2 Megabytes with this static 
linking.
 If you configure and build Gnuastro again with shared libraries enabled (which 
is the default), you will notice that it is roughly 100 Kilobytes!
 
 This huge difference would have been very significant in the old days, but 
with the roughly Terabyte storage drives commonly in use today, it is 
negligible.
-Fortunately, output file size is not the only benefit of dynamic linking: 
since it links to the libraries at run-time (rather than build-time), you don't 
have to re-build a higher-level program or library when an update comes for one 
of the lower-level libraries it depends on.
+Fortunately, output file size is not the only benefit of dynamic linking: 
since it links to the libraries at run-time (rather than build-time), you do 
not have to re-build a higher-level program or library when an update comes for 
one of the lower-level libraries it depends on.
 You just install the new low-level library and it will automatically be 
used/linked next time in the programs that use it.
 To be fair, this also creates a few complications@footnote{Both of these can 
be avoided by joining the mailing lists of the lower-level libraries and 
checking the changes in newer versions before installing them.
 Updates that result in such behaviors are generally heavily emphasized in the 
release notes.}:
@@ -28367,8 +28360,8 @@ To avoid such linking complexities when using 
Gnuastro's library, we recommend u
 @end table
 
 If you have compiled and linked your program with a dynamic library, then the 
dynamic linker also needs to know the location of the libraries after building 
the program: @emph{every time} the program is run afterwards.
-Therefore, it may happen that you don't get any errors when compiling/linking 
a program, but are unable to run your program because of a failure to find a 
library.
-This happens because the dynamic linker hasn't found the dynamic library 
@emph{at run time}.
+Therefore, it may happen that you do not get any errors when compiling/linking 
a program, but are unable to run your program because of a failure to find a 
library.
+This happens because the dynamic linker has not found the dynamic library 
@emph{at run time}.
 
 To find the dynamic libraries at run-time, the linker looks into the paths, or 
directories, in the @code{LD_LIBRARY_PATH} environment variable.
 For a discussion on environment variables, especially search paths like 
@code{LD_LIBRARY_PATH}, and how you can add new directories to them, see 
@ref{Installation directory}.
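+
+For example, if the necessary library is under a hypothetical
+@file{/opt/mylib/lib} directory, the command below will let the dynamic
+linker find it when the program is run in the current shell:
+
+@example
+$ export LD_LIBRARY_PATH=/opt/mylib/lib:$LD_LIBRARY_PATH
+@end example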
@@ -28378,7 +28371,7 @@ For a discussion on environment variables, especially 
search paths like @code{LD
 @node Summary and example on libraries,  , Linking, Review of library 
fundamentals
 @subsection Summary and example on libraries
 
-After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, 
we'll give a small tutorial here.
+After the mostly abstract discussions of @ref{Headers} and @ref{Linking}, we 
will give a small tutorial here.
 But before that, let's recall the general steps of how your source code is 
prepared, compiled and linked to the libraries it depends on so you can run it:
 
 @enumerate
@@ -28392,7 +28385,7 @@ The @strong{compiler} will translate (compile) the 
human-readable contents of ea
 @item
 The @strong{linker} will link the called function definitions from various 
compiled files to create one unified object.
 When the unified product has a @code{main} function, this function is the 
product's only entry point, enabling the operating system or user to directly 
interact with it, so the product is a program.
-When the product doesn't have a @code{main} function, the linker's product is 
a library and its exported functions can be linked to other executables (it has 
many entry points).
+When the product does not have a @code{main} function, the linker's product is 
a library and its exported functions can be linked to other executables (it 
has many entry points).
 @end enumerate
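+
+As a minimal sketch of these steps on the command-line (assuming GCC and a
+single, hypothetical @file{myprogram.c} that only needs the C math library):
+
+@example
+## Pre-process and compile the source into an object file.
+$ gcc -c myprogram.c
+
+## Link the object file with the math library into a program.
+$ gcc myprogram.o -lm -o myprogram
+@end example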
 
 @cindex GCC: GNU Compiler Collection
@@ -28444,12 +28437,12 @@ This will allow you to easily compile, link and run 
programs that use Gnuastro's
 
 @cindex GNU Libtool
 BuildProgram uses GNU Libtool to find the necessary libraries to link against 
(GNU Libtool is the same program that builds all of Gnuastro's libraries and 
programs when you run @code{make}).
-So in the future, if Gnuastro's prerequisite libraries change or other 
libraries are added, you don't have to worry, you can just run BuildProgram and 
internal linking will be done correctly.
+So in the future, if Gnuastro's prerequisite libraries change or other 
libraries are added, you do not have to worry; you can just run BuildProgram 
and internal linking will be done correctly.
 
 @cartouche
 @noindent
-@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU 
Libtool, other implementations don't have some necessary features.
-If GNU Libtool isn't available at Gnuastro's configure time, you will get a 
notice at the end of the configuration step and BuildProgram will not be built 
or installed.
+@strong{BuildProgram requires GNU Libtool:} BuildProgram depends on GNU 
Libtool, other implementations do not have some necessary features.
+If GNU Libtool is not available at Gnuastro's configure time, you will get a 
notice at the end of the configuration step and BuildProgram will not be built 
or installed.
 Please see @ref{Optional dependencies} for more information.
 @end cartouche
 
@@ -28485,7 +28478,7 @@ $ astbuildprog myprogram.c image.fits
 ## Also look in other directories for headers and linking:
 $ astbuildprog -Lother -Iother/dir myprogram.c
 
-## Just build (compile and link) `myprogram.c', don't run it:
+## Just build (compile and link) `myprogram.c', do not run it:
 $ astbuildprog --onlybuild myprogram.c
 @end example
 
@@ -28503,7 +28496,7 @@ Note that they are placed after the values to the 
corresponding options @option{
 Therefore BuildProgram's own options take precedence.
 Using environment variables can be disabled with the @option{--noenv} option.
 Just note that BuildProgram also keeps the important flags from these 
environment variables in its configuration file.
-Therefore, in many cases, even though you may needed them to build Gnuastro, 
you won't need them in BuildProgram.
+Therefore, in many cases, even though you may have needed them to build 
Gnuastro, you will not need them in BuildProgram.
 
 The first argument is considered to be the C source file that must be compiled 
and linked.
 Any other arguments (non-option tokens on the command-line) will be passed 
onto the program when BuildProgram wants to run it.
@@ -28523,7 +28516,7 @@ Once your program grows and you break it up into 
multiple files (which are much
 @cindex GNU Compiler Collection (GCC)
 C compiler to use for the compilation; if not given, environment variables 
will be used as described in the next paragraph.
 If the compiler is in your system's search path, you can simply give its 
name, for example @option{--cc=gcc}.
-If its not in your system's search path, you can give its full path, for 
example @option{--cc=/path/to/your/custom/cc}.
+If it is not in your system's search path, you can give its full path, for 
example @option{--cc=/path/to/your/custom/cc}.
 
 If this option has no value after parsing the command-line and all 
configuration files (see @ref{Configuration file precedence}), then 
BuildProgram will look into the following environment variables in the given 
order: @code{CC} and @code{GCC}.
 If they are also not defined, BuildProgram will ultimately default to the 
@command{gcc} command which is present in many systems (sometimes as a link to 
other compilers).
@@ -28533,7 +28526,7 @@ If they are also not defined, BuildProgram will 
ultimately default to the @comma
 @cindex GNU CPP
 @cindex C Pre-Processor
 Directory to search for files that you @code{#include} in your C program.
-Note that headers relating to Gnuastro and its dependencies don't need this 
option.
+Note that headers relating to Gnuastro and its dependencies do not need this 
option.
 This is only necessary if you want to use other headers.
 It may be called multiple times and order matters.
 This directory will be searched before those of Gnuastro's build and also the 
system search directories.
@@ -28592,13 +28585,13 @@ Some of the most common values to this option are: 
@option{pedantic} (Warnings r
 @itemx --tag=STR
 The language configuration information.
 Libtool can build objects and libraries in many languages.
-In many cases, it can identify the language automatically, but when it doesn't 
you can use this option to explicitly notify Libtool of the language.
+In many cases, it can identify the language automatically, but when it does 
not, you can use this option to explicitly notify Libtool of the language.
 The acceptable values are: @code{CC} for C, @code{CXX} for C++, @code{GCJ} for 
Java, @code{F77} for Fortran 77, @code{FC} for Fortran, @code{GO} for Go and 
@code{RC} for Windows Resource.
 Note that the Gnuastro library is not yet fully compatible with all these 
languages.
 
 @item -b
 @itemx --onlybuild
-Only build the program, don't run it.
+Only build the program; do not run it.
 By default, the built program is immediately run afterwards.
 
 @item -d
@@ -28621,7 +28614,7 @@ This option is useful when you prefer to use another 
Libtool control file.
 @cindex @code{GCC}
 @cindex @code{LDFLAGS}
 @cindex @code{CPPFLAGS}
-Don't use environment variables in the build, just use the values given to the 
options.
+Do not use environment variables in the build; just use the values given to 
the options.
 As described above, environment variables like @code{CC}, @code{GCC}, 
@code{LDFLAGS} and @code{CPPFLAGS} will be read by default and used in the 
build if they have been defined.
 @end table
 
@@ -28663,7 +28656,7 @@ Since the 0.2 release, the libraries are install-able.
 Hence the libraries are currently under heavy development and will 
significantly evolve between releases and will become more mature and stable in 
due time.
 It will stabilize with the removal of this notice.
 Check the @file{NEWS} file for interface changes.
-If you use the Info version of this manual (see @ref{Info}), you don't have to 
worry: the documentation will correspond to your installed version.
+If you use the Info version of this manual (see @ref{Info}), you do not have 
to worry: the documentation will correspond to your installed version.
 @end cartouche
 
 @menu
@@ -28793,7 +28786,7 @@ Gnuastro also comes with some wrappers to make it 
easier to use libgit2 (see @re
 @subsection Multithreaded programming (@file{threads.h})
 
 @cindex Multithreaded programming
-In recent years, newer CPUs don't have significantly higher frequencies any
+In recent years, newer CPUs do not have significantly higher frequencies any
 more. However, CPUs are being manufactured with more cores, enabling more
 than one operation (thread) at each instant. This can be very useful to
 speed up many aspects of processing and in particular image processing.
@@ -28801,7 +28794,7 @@ speed up many aspects of processing and in particular 
image processing.
 Most of the programs in Gnuastro utilize multi-threaded programming for the
 CPU intensive processing steps. This can potentially lead to a significant
 decrease in the running time of a program, see @ref{A note on threads}. In
-terms of reading the code, you don't need to know anything about
+terms of reading the code, you do not need to know anything about
 multi-threaded programming. You can simply follow the case where only one
 thread is to be used. In these cases, threads are not used and can be
 completely ignored.
@@ -28809,16 +28802,16 @@ completely ignored.
 @cindex POSIX threads library
 @cindex Lawrence Livermore National Laboratory
 When the C language was defined (the K&R's book was written), using threads
-was not common, so C's threading capabilities aren't introduced
+was not common, so C's threading capabilities are not introduced
 there. Gnuastro uses POSIX threads for multi-threaded programming, defined
 in the @file{pthread.h} system wide header. There are various resources for
 learning to use POSIX threads. An excellent
 @url{https://computing.llnl.gov/tutorials/pthreads/, tutorial} is provided
 by the Lawrence Livermore National Laboratory, with abundant figures to
 better understand the concepts, it is a very good start.  The book
-`Advanced programming in the Unix environment'@footnote{Don't let the title
+`Advanced programming in the Unix environment'@footnote{Do not let the title
 scare you! The two chapters on Multi-threaded programming are very
-self-sufficient and don't need any more knowledge than K&R.}, by Richard
+self-sufficient and do not need any more knowledge than K&R.}, by Richard
 Stevens and Stephen Rago, Addison-Wesley, 2013 (Third edition) also has two
 chapters explaining the POSIX thread constructs which can be very helpful.
 
@@ -28834,7 +28827,7 @@ 
introduction@footnote{@url{https://computing.llnl.gov/tutorials/parallel_comp/}}
 
 One very useful POSIX thread concept is
 @code{pthread_barrier}. Unfortunately, it is only an optional feature in
-the POSIX standard, so some operating systems don't include it. Therefore
+the POSIX standard, so some operating systems do not include it. Therefore
 in @ref{Implementation of pthread_barrier}, we introduce our own
 implementation. This is a rather technical section only necessary for more
 technical readers and you can safely ignore it. Following that, we describe
@@ -28843,7 +28836,7 @@ multi-threaded program, see @ref{Gnuastro's thread 
related functions} for
 more.
 
 @menu
-* Implementation of pthread_barrier::  Some systems don't have pthread_barrier
+* Implementation of pthread_barrier::  Some systems do not have pthread_barrier
 * Gnuastro's thread related functions::  Functions for managing threads.
 @end menu
 
@@ -28856,8 +28849,8 @@ One optional feature of the POSIX Threads standard is 
the
 that allows for independent threads to ``wait'' behind a ``barrier'' for
 the rest after they finish. Barriers can thus greatly simplify the code in
 a multi-threaded program, so they are heavily used in Gnuastro. However,
-since its an optional feature in the POSIX standard, some operating systems
-don't include it. So to make Gnuastro portable, we have written our own
+since it is an optional feature in the POSIX standard, some operating systems
+do not include it. So to make Gnuastro portable, we have written our own
 implementation of those @code{pthread_barrier} functions.
 
 At @command{./configure} time, Gnuastro will check if
@@ -28897,7 +28890,7 @@ finished.
 @strong{Destroy a barrier before re-using it:} It is very important to
 destroy the barrier before (possibly) reusing it. This destroy function not
 only destroys the internal structures, it also waits (in 1 microsecond
-intervals, so you will not notice!) until all the threads don't need the
+intervals, so you will not notice!) until none of the threads need the
 barrier structure any more. If you immediately start spinning off new
 threads with a not-destroyed barrier, then the internal structure of the
 remaining threads will get mixed with the new ones and you will get very
@@ -28913,7 +28906,7 @@ The POSIX Threads functions offered in the C library 
are very low-level and offe
 So if you are interested in customizing your tools for complicated thread 
applications, it is strongly encouraged to get a nice familiarity with them.
 Some resources were introduced in @ref{Multithreaded programming}.
 
-However, in many cases used in astronomical data analysis, you don't need 
communication between threads and each target operation can be done 
independently.
+However, in many cases in astronomical data analysis, you do not need 
communication between threads and each target operation can be done 
independently.
 Since such operations are very common, Gnuastro provides the tools below to 
facilitate the creation and management of jobs without any particular knowledge 
of POSIX Threads for such operations.
 The most interesting high-level functions of this section are the 
@code{gal_threads_number} and @code{gal_threads_spin_off} that identify the 
number of threads on the system and spin-off threads.
 You can see a demonstration of using these functions in @ref{Library demo - 
multi-threaded operation}.
@@ -28936,7 +28929,7 @@ struct gal_threads_params
 
 @deftypefun size_t gal_threads_number ()
 Return the number of threads that the operating system has available for your 
program.
-This number is usually fixed for a single machine and doesn't change.
+This number is usually fixed for a single machine and does not change.
 So this function is useful when you want to run your program on different 
machines (with different CPUs).
 @end deftypefun
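+
+For example, the minimal program below (which can be compiled with
+BuildProgram, see @ref{BuildProgram}) just reports this number:
+
+@example
+#include <stdio.h>
+#include <gnuastro/threads.h>
+
+int
+main(void)
+@{
+  printf("Available threads: %zu\n", gal_threads_number());
+  return 0;
+@}
+@end example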
 
@@ -28953,19 +28946,19 @@ For more on Gnuastro's memory management, see 
@ref{Memory management}.
 
 @deftypefun void gal_threads_attr_barrier_init (pthread_attr_t @code{*attr}, 
pthread_barrier_t @code{*b}, size_t @code{limit})
 @cindex Detached threads
-This is a low-level function in case you don't want to use 
@code{gal_threads_spin_off}.
+This is a low-level function in case you do not want to use 
@code{gal_threads_spin_off}.
 It will initialize the general thread attribute @code{attr} and the barrier 
@code{b} with @code{limit} threads to wait behind the barrier.
 For maximum efficiency, the threads initialized with this function will be 
detached.
-Therefore no communication is possible between these threads and in particular 
@code{pthread_join} won't work on these threads.
+Therefore no communication is possible between these threads and in particular 
@code{pthread_join} will not work on these threads.
 You have to use the barrier constructs to wait for all threads to finish.
 @end deftypefun
 
 @deftypefun {char *} gal_threads_dist_in_threads (size_t @code{numactions}, 
size_t @code{numthreads}, size_t @code{minmapsize}, int @code{quietmmap}, 
size_t @code{**indexs}, size_t @code{*icols})
-This is a low-level function in case you don't want to use 
@code{gal_threads_spin_off}.
+This is a low-level function in case you do not want to use 
@code{gal_threads_spin_off}.
 The job of this function is to distribute @code{numactions} jobs/actions 
among @code{numthreads} threads.
 To do this, it will assign each job an ID, ranging from 0 to 
@code{numactions}-1.
 The output is the allocated @code{*indexs} array and the @code{*icols} number.
-In memory, its just a simple 1D array that has @code{numthreads} 
@mymath{\times} @code{*icols} elements.
+In memory, it is just a simple 1D array that has @code{numthreads} 
@mymath{\times} @code{*icols} elements.
 But you can visualize it as a 2D array with @code{numthreads} rows and 
@code{*icols} columns.
 For more on the logic of the distribution, see below.
 
@@ -29007,7 +29000,7 @@ The most efficient way to manage the actions is such 
that only @mymath{T} thread
 This way your CPU will get all the actions done with minimal overhead.
 
 The purpose of this function is to do what we explained above: each row in the 
@code{indexs} array contains the indices of actions which must be done by one 
thread (so it has @code{numthreads} rows with @code{*icols} columns).
-However, when using @code{indexs}, you don't have to know the number of 
columns.
+However, when using @code{indexs}, you do not have to know the number of 
columns.
 It is guaranteed that all the rows finish with @code{GAL_BLANK_SIZE_T} (see 
@ref{Library blank values}).
 The @code{GAL_BLANK_SIZE_T} macro plays a role very similar to a string's 
@code{\0}: every row finishes with this macro, so you can easily stop parsing 
the indexes in the row as soon as you confront @code{GAL_BLANK_SIZE_T}.
 For a real demonstration, see the example program in 
@file{tests/lib/multithread.c}.
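+
+As a simpler sketch of this parsing (where @code{do_action} is a hypothetical
+placeholder for doing one action/job):
+
+@example
+#include <stdio.h>
+#include <gnuastro/blank.h>    /* For GAL_BLANK_SIZE_T. */
+
+/* Hypothetical placeholder for one action/job. */
+static void
+do_action(size_t id) @{ printf("doing action %zu\n", id); @}
+
+/* Do all the actions in one thread's row of `indexs'. */
+static void
+do_my_actions(size_t *indexs)
+@{
+  size_t i;
+  for(i=0; indexs[i]!=GAL_BLANK_SIZE_T; ++i)
+    do_action(indexs[i]);
+@}
+@end example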
@@ -29036,7 +29029,7 @@ you can use C's @code{switch} statement.
 
 Since Gnuastro heavily deals with file input-output, the types it defines
 are fixed width types; these types are portable to all systems and are
-defined in the standard C header @file{stdint.h}. You don't need to include
+defined in the standard C header @file{stdint.h}. You do not need to include
 this header; it is included by any Gnuastro header that deals with the
 different types. However, the most commonly used types in a C (or C++)
 program (for example @code{int} or @code{long}) are not defined by their
@@ -29188,7 +29181,7 @@ data types}.
 
 @deftypefun void gal_type_min (uint8_t @code{type}, void @code{*in})
 Put the minimum possible value of @code{type} in the space pointed to by
-@code{in}. Since the value can have any type, this function doesn't return
+@code{in}. Since the value can have any type, this function does not return
 anything; it assumes the space for the given type is available to @code{in}
 and writes the value there. Here is one example:
 @example
@@ -29204,7 +29197,7 @@ Note: Do not use the minimum value for a blank value of 
a general
 
 @deftypefun void gal_type_max (uint8_t @code{type}, void @code{*in})
 Put the maximum possible value of @code{type} in the space pointed to by
-@code{in}. Since the value can have any type, this function doesn't return
+@code{in}. Since the value can have any type, this function does not return
 anything; it assumes the space for the given type is available to @code{in}
 and writes the value there. Here is one example:
 @example
@@ -29292,7 +29285,7 @@ float out;
 void *outptr=&out;
 if( gal_type_from_string(&outptr, string, GAL_TYPE_FLOAT32) )
   @{
-    fprintf(stderr, "%s couldn't be read as float32\n", string);
+    fprintf(stderr, "%s could not be read as float32\n", string);
     exit(EXIT_FAILURE);
   @}
 @end example
@@ -29303,12 +29296,12 @@ When you need to read many numbers into an array, 
@code{out} would be an array,
 
 @deftypefun {void *} gal_type_string_to_number (char @code{*string}, uint8_t 
@code{*type})
 Read @code{string} into the smallest type that can host the number; the 
allocated space for the number will be returned and the type of the number 
will be put into the memory that @code{type} points to.
-If @code{string} couldn't be read as a number, this function will return 
@code{NULL}.
+If @code{string} could not be read as a number, this function will return 
@code{NULL}.
 
 This function first calls the C library's @code{strtod} function to read 
@code{string} as a double-precision floating point number.
 When successful, it will check the value to put it in the smallest numerical 
data type that can handle it: for example @code{120} and @code{50000} will be 
read as a signed 8-bit integer and unsigned 16-bit integer types.
 When reading as an integer, the C library's @code{strtol} function is used (in 
base-10) to parse the string again.
-This re-parsing as an integer is necessary because integers with many digits 
(for example the Unix epoch seconds) will not be accurately stored as a 
floating point and we can't use the result of @code{strtod}.
+This re-parsing as an integer is necessary because integers with many digits 
(for example the Unix epoch seconds) will not be accurately stored as a 
floating point and we cannot use the result of @code{strtod}.
 
 When @code{string} is successfully parsed as a number @emph{and} there is a 
@code{.} in @code{string}, it will force the number into floating point types.
 For example @code{"5"} is read as an integer, while @code{"5."} or 
@code{"5.0"}, or @code{"5.00"} will be read as a floating point 
(single-precision).
@@ -29340,7 +29333,7 @@ main(void)
 @}
 @end example
 
-However, if we read 100000 as a signed 32-bit integer, there won't be any 
problem and the printed sentence will be logically correct (for someone who 
doesn't know anything about numeric data types: users of your programs).
+However, if we read 100000 as a signed 32-bit integer, there will not be any 
problem and the printed sentence will be logically correct (for someone who 
does not know anything about numeric data types: users of your programs).
 For the advantages of integers, see @ref{Integer benefits and pitfalls}.
 @end deftypefun
 
@@ -29365,7 +29358,7 @@ codes, see @ref{Library data types}.
 
 When working with the @code{array} elements of @code{gal_data_t}, we are
 actually dealing with @code{void *} pointers. However, pointer arithmetic
-doesn't apply to @code{void *}, because the system doesn't know how many
+does not apply to @code{void *}, because the system does not know how many
 bytes there are in each element to increment the pointer accordingly. This
 function will use the given @code{type} to calculate where the incremented
 element is located in memory.
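+
+As a minimal sketch of the underlying idea (in plain C, not Gnuastro's actual
+implementation):
+
+@example
+#include <stdint.h>
+#include <stddef.h>
+
+/* Move a `void *' forward by `num' elements of `elsize' bytes each:
+   casting to `uint8_t *' (one byte per element) makes the pointer
+   arithmetic well defined. */
+static void *
+increment(void *pointer, size_t num, size_t elsize)
+@{ return (uint8_t *)pointer + num*elsize; @}
+@end example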
@@ -29390,7 +29383,7 @@ Otherwise, if you just want space for one string (for 
example 6 bytes for @code{
 When space cannot be allocated, this function will abort the program with a 
message containing the reason for the failure.
 @code{funcname} (name of the function calling this function) and 
@code{varname} (name of variable that needs this space) will be used in this 
error message if they are not @code{NULL}.
 In most modern compilers, you can use the generic @code{__func__} variable for 
@code{funcname}.
-In this way, you don't have to manually copy and paste the function name or 
worry about it changing later (@code{__func__} was standardized in C99).
+In this way, you do not have to manually copy and paste the function name or 
worry about it changing later (@code{__func__} was standardized in C99).
 @end deftypefun
 
 @deftypefun {void *} gal_pointer_allocate_ram_or_mmap (uint8_t @code{type}, 
size_t @code{size}, int @code{clear}, size_t @code{minmapsize}, char 
@code{**mmapname}, int @code{quietmmap}, const char @code{*funcname}, const 
char @code{*varname})
@@ -29407,12 +29400,12 @@ If @code{clear!=0}, then the allocated space will 
also be cleared.
 The allocation is done using C's @code{mmap} function.
 The name of the file containing the allocated space is an allocated string 
that will be put in @code{*mmapname}.
 
-Note that the kernel doesn't allow an infinite number of memory mappings to 
files.
+Note that the kernel does not allow an infinite number of memory mappings to 
files.
 So it is not recommended to use this function with every allocation.
 The best case scenario to use this function is for arrays that are very 
large and can fill up the RAM.
 Keep the smaller arrays in RAM, which is faster and can have a (theoretically) 
unlimited number of allocations.
 
-When you are done with the dataset and don't need it anymore, don't use 
@code{free} (the dataset isn't in RAM).
+When you are done with the dataset and do not need it anymore, do not use 
@code{free} (the dataset is not in RAM).
 Just delete the file (and the allocated space for the filename) with the 
commands below, or simply use @code{gal_pointer_mmap_free}.
 
 @example
@@ -29430,7 +29423,7 @@ If @code{quietmmap} is non-zero, then a warning will be 
printed for the user to
 @node Library blank values, Library data container, Pointers, Gnuastro library
 @subsection Library blank values (@file{blank.h})
 When the position of an element in a dataset is important (for example a
-pixel in an image), a place-holder is necessary for the element if we don't
+pixel in an image), a place-holder is necessary for the element if we do not
 have a value to fill it with (for example the CCD cannot read those
 pixels). We cannot simply shift all the other pixels to fill in the one we
 have no value for. In other cases, it often occurs that the field of sky
@@ -29443,7 +29436,7 @@ holders in a dataset. They have no usable value but 
they have a position.
 Every type needs a corresponding blank value (see @ref{Numeric data types}
 and @ref{Library data types}). Floating point types have a unique value
 identified by IEEE, known as Not-a-Number (or NaN),
-that is recognized by the compiler. However, integer and string types don't
+that is recognized by the compiler. However, integer and string types do not
 have any standard value. For integers, in Gnuastro we take an extremum of
 the given type: for signed types (that allow negatives), the minimum
 possible value is used as blank and for unsigned types (that only accept
@@ -29528,7 +29521,7 @@ value).
 @end deffn
 
 @deffn {Global integer}  GAL_BLANK_STRING
-Blank value for string types (this is itself a string, it isn't the
+Blank value for string types (this is itself a string, it is not the
 @code{NULL} pointer).
 @end deffn
 
@@ -29546,7 +29539,7 @@ Allocate the space required to keep the blank for the 
given data type @code{type
 
 @deftypefun void gal_blank_initialize (gal_data_t @code{*input})
 Initialize all the elements in the @code{input} dataset to the blank value 
that corresponds to its type.
-If @code{input} isn't a string, and is a tile over a larger dataset, only the 
region that the tile covers will be set to blank.
+If @code{input} is not a string, and is a tile over a larger dataset, only the 
region that the tile covers will be set to blank.
 For strings, the full dataset will be initialized.
 @end deftypefun
 
@@ -29573,7 +29566,7 @@ below).
 
 
 @deftypefun int gal_blank_present (gal_data_t @code{*input}, int 
@code{updateflag})
-Return 1 if the dataset has a blank value and zero if it doesn't. Before
+Return 1 if the dataset has a blank value and zero if it does not. Before
 checking the dataset, this function will look at @code{input}'s flags. If
 the @code{GAL_DATA_FLAG_BLANK_CH} bit of @code{input->flag} is on, this
 function will not do any check and will just use the information in the
@@ -29608,7 +29601,7 @@ on these flags. If @code{input==NULL}, then this 
function will return
 @deftypefun {gal_data_t *} gal_blank_flag (gal_data_t @code{*input})
 Create a dataset of the same size as the input, but with an
 @code{uint8_t} type that has a value of 1 for data that are blank and 0 for
-those that aren't.
+those that are not.
 @end deftypefun
 
 @deftypefun void gal_blank_flag_apply (gal_data_t @code{*input}, gal_data_t 
@code{*flag})
@@ -29618,7 +29611,7 @@ Set all non-zero and non-blank elements of @code{flag} 
to blank in @code{input}.
 
 @deftypefun void gal_blank_flag_remove (gal_data_t @code{*input}, gal_data_t 
@code{*flag})
 Remove all elements within @code{input} that are flagged, convert it to a 1D 
dataset and adjust the size properly (the number of non-flagged elements).
-In practice this function doesn't@code{realloc} the input array (see 
@code{gal_blank_remove_realloc} for shrinking/re-allocating also), it just 
shifts the blank elements to the end and adjusts the size elements of the 
@code{gal_data_t}, see @ref{Generic data container}.
+In practice this function does not @code{realloc} the input array (see 
@code{gal_blank_remove_realloc} for shrinking/re-allocating also); it just 
shifts the blank elements to the end and adjusts the size elements of the 
@code{gal_data_t} (see @ref{Generic data container}).
 
 Note that elements that are blank, but not flagged will not be removed.
 This function will only remove flagged elements.
@@ -29630,7 +29623,7 @@ This check is highly recommended because it will avoid 
strange bugs in later ste
 
 @deftypefun void gal_blank_remove (gal_data_t @code{*input})
 Remove blank elements from a dataset, convert it to a 1D dataset, adjust the 
size properly (the number of non-blank elements), and toggle the 
blank-value-related bit-flags.
-In practice this function doesn't@code{realloc} the input array (see 
@code{gal_blank_remove_realloc} for shrinking/re-allocating also), it just 
shifts the blank elements to the end and adjusts the size elements of the 
@code{gal_data_t}, see @ref{Generic data container}.
+In practice this function does not @code{realloc} the input array (see 
@code{gal_blank_remove_realloc} for shrinking/re-allocating also); it just 
shifts the blank elements to the end and adjusts the size elements of the 
@code{gal_data_t} (see @ref{Generic data container}).
 
 If all the elements were blank, then @code{input->size} will be zero.
 This is thus a good parameter to check after calling this function to see if 
there actually were any non-blank elements in the input or not and take the 
appropriate measure.
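+
+For example, a minimal check after calling this function (@code{input} is a
+@code{gal_data_t *} as in the description above):
+
+@example
+gal_blank_remove(input);
+if(input->size==0)
+  @{
+    fprintf(stderr, "input only contained blank elements!\n");
+    exit(EXIT_FAILURE);
+  @}
+@end example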
@@ -29682,7 +29675,7 @@ types, units and higher-level structures), Gnuastro 
defines the
 @code{gal_data_t} type which is the input/output container of choice for
 many of Gnuastro library's functions. It is defined in
 @file{gnuastro/data.h}. If you will be using (`@code{# include}'ing) those
-libraries, you don't need to include this header explicitly, it is already
+libraries, you do not need to include this header explicitly; it is already
 included by any library header that uses @code{gal_data_t}.
 
 @deftp {Type (C @code{struct})} gal_data_t
@@ -29746,7 +29739,7 @@ points to (a pixel in an image for 
example)@footnote{Also see
 information can greatly help in compiler optimizations and thus the running
 time of the program. But older compilers might not have this capability, so
 at @command{./configure} time, Gnuastro checks this feature and if the
-user's compiler doesn't support @code{restrict}, it will be removed from
+user's compiler does not support @code{restrict}, it will be removed from
 this definition.
 
 @cindex Data type
@@ -29831,7 +29824,7 @@ The currently recognized bits are stored in these 
macros:
 @cindex Blank data
 @item GAL_DATA_FLAG_BLANK_CH
 Marking that the dataset has been checked for blank values or not.
-When a dataset doesn't have any blank values, the 
@code{GAL_DATA_FLAG_HASBLANK} bit will be zero.
+When a dataset does not have any blank values, the 
@code{GAL_DATA_FLAG_HASBLANK} bit will be zero.
 But upon initialization, all bits also get a value of zero.
 Therefore, a checker needs this flag to see if the value in 
@code{GAL_DATA_FLAG_HASBLANK} is reliable (dataset has actually been parsed for 
a blank value) or not.
 
@@ -29844,7 +29837,7 @@ actually parse the dataset, it will just use 
@code{GAL_DATA_FLAG_HASBLANK}.
 @item GAL_DATA_FLAG_HASBLANK
 This bit has a value of @code{1} when the given dataset has blank
 values. If this bit is @code{0} and @code{GAL_DATA_FLAG_BLANK_CH} is
-@code{1}, then the dataset has been checked and it didn't have any blank
+@code{1}, then the dataset has been checked and it did not have any blank
 values, so there is no more need for further checks.
 
 @item GAL_DATA_FLAG_SORT_CH
@@ -29856,13 +29849,13 @@ similar to that in @code{GAL_DATA_FLAG_BLANK_CH}.
 @item GAL_DATA_FLAG_SORTED_I
 This bit has a value of @code{1} when the given dataset is sorted in an
 increasing manner. If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH}
-is @code{1}, then the dataset has been checked and wasn't sorted
+is @code{1}, then the dataset has been checked and was not sorted
 (increasing), so there is no more need for further checks.
 
 @item GAL_DATA_FLAG_SORTED_D
 This bit has a value of @code{1} when the given dataset is sorted in a
 decreasing manner. If this bit is @code{0} and @code{GAL_DATA_FLAG_SORT_CH}
-is @code{1}, then the dataset has been checked and wasn't sorted
+is @code{1}, then the dataset has been checked and was not sorted
 (decreasing), so there is no more need for further checks.
 
 @end table
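+
+For example, a sketch of using the check-bit before trusting the value-bit
+(with @code{gal_blank_present}, see @ref{Library blank values}):
+
+@example
+int hasblank;
+if( input->flag & GAL_DATA_FLAG_BLANK_CH )     /* Already checked.   */
+  hasblank = (input->flag & GAL_DATA_FLAG_HASBLANK)!=0;
+else
+  hasblank = gal_blank_present(input, 1);      /* Parse and update.  */
+@end example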
@@ -29953,7 +29946,7 @@ Initialize the given data structure (@code{data}) with 
all the given
 values. Note that the raw input @code{gal_data_t} must already have been
 allocated before calling this function. For a description of each variable
 see @ref{Generic data container}. It will set the values and do the
-necessary allocations. If they aren't @code{NULL}, all input arrays
+necessary allocations. If they are not @code{NULL}, all input arrays
 (@code{dsize}, @code{wcs}, @code{name}, @code{unit}, @code{comment}) are
 separately copied (allocated) by this function for usage in @code{data}, so
 you can safely use one value to initialize many datasets or use statically
@@ -30093,7 +30086,7 @@ this function will behave as follows:
 
 @table @code
 @item out->size < in->size
-This function won't re-allocate the necessary space, it will abort with an
+This function will not re-allocate the necessary space; it will abort with an
 error, so please check before calling this function.
 
 @item out->size > in->size
@@ -30136,7 +30129,7 @@ dimensions that has @code{dsize} elements along each 
dimension.
 @end deftypefun
 
 @deftypefun int gal_dimension_is_different (gal_data_t @code{*first}, 
gal_data_t @code{*second})
-Return @code{1} (one) if the two datasets don't have the same size along
+Return @code{1} (one) if the two datasets do not have the same size along
 all dimensions. This function will also return @code{1} when the number of
 dimensions of the two datasets is different.
 @end deftypefun
@@ -30298,7 +30291,7 @@ inclusive, so in this example, a connectivity of 
@code{2} will also include
 connectivity @code{1} neighbors.
 @item dinc
 An array keeping the length necessary to increment along each
-dimension. You can make this array with the following function. Just don't
+dimension. You can make this array with the following function. Just do not
 forget to free the array after you are done with it:
 
 @example
@@ -30338,10 +30331,10 @@ element in a 1000 element array, all 997 subsequent 
elements have to pulled
 back by one position; the reverse will happen if you need to add an
 element.
 
-In many contexts such situations never come up, for example you don't want
+In many contexts such situations never come up, for example you do not want
 to shift all the pixels in an image by one or two pixels from some random
 position in the image: their positions have scientific value. But in other
-contexts you will find your self frequently adding/removing an a-priori
+contexts you will find yourself frequently adding/removing an a-priori
 unknown number of elements. Linked lists (or @emph{lists} for short) are
 the data-container of choice in such situations. As in a chain, each
 @emph{node} in a list is an independent C structure, keeping its own data
@@ -30477,7 +30470,7 @@ Return a pointer to the last node in @code{list}.
 Print the strings within each node of @code{*list} on the standard output in 
the same order that they are stored.
 Each string is printed on one line.
 This function is mainly good for checking/debugging your program.
-For program outputs, its best to make your own implementation with a better, 
more user-friendly, format.
+For program outputs, it is best to make your own implementation with a better, 
more user-friendly format.
 For example, see the following code snippet.
 
 @example
@@ -30507,7 +30500,7 @@ Concatenate (append) the input list of strings into a 
single space-separated str
 The space for the output string is allocated by this function and should be 
freed when you have finished with it.
 
 If there is any SPACE characters in any of the elements, a back-slash 
(@code{\}) will be printed before the SPACE character.
-This is necessary, otherwise, a function like @code{gal_list_str_extract} 
won't be able to extract the elements back into separate elements in a list.
+This is necessary, otherwise, a function like @code{gal_list_str_extract} will 
not be able to extract the elements back into separate elements in a list.
 @end deftypefun
 
 
@@ -30570,7 +30563,7 @@ Return a pointer to the last node in @code{list}.
 Print the integers within each node of @code{*list} on the standard output
 in the same order that they are stored. Each integer is printed on one
 line. This function is mainly good for checking/debugging your program. For
-program outputs, its best to make your own implementation with a better,
+program outputs, it is best to make your own implementation with a better,
 more user-friendly format. For example, see the following code snippet. You
 can also modify it to print all values in one line, etc., depending on the
 context of your program.
@@ -30619,7 +30612,7 @@ respectively (see @ref{Numeric data types}).
 
 @code{size_t} is the default compiler type to index an array (recall that
 an array index in C is just a pointer increment of a given
-@emph{size}). Since it is unsigned, its a great type for counting (where
+@emph{size}). Since it is unsigned, it is a great type for counting (where
 negative is not defined); you are always sure it will never exceed the
 system's (virtual) memory and since its name has the word ``size'' inside
 it, it provides a good level of documentation@footnote{So you know that a
@@ -30675,7 +30668,7 @@ Return a pointer to the last node in @code{list}.
 Print the values within each node of @code{*list} on the standard output in
 the same order that they are stored. Each integer is printed on one
 line. This function is mainly good for checking/debugging your program. For
-program outputs, its best to make your own implementation with a better,
+program outputs, it is best to make your own implementation with a better,
 more user-friendly format. For example, see the following code snippet. You
 can also modify it to print all values in one line, etc., depending on the
 context of your program.
@@ -30764,7 +30757,7 @@ Return a pointer to the last node in @code{list}.
 Print the values within each node of @code{*list} on the standard output in
 the same order that they are stored. Each floating point number is printed
 on one line. This function is mainly good for checking/debugging your
-program. For program outputs, its best to make your own implementation with
+program. For program outputs, it is best to make your own implementation with
 a better, more user-friendly format. For example, see the following code
 snippet. You can also modify it to print all values in one line, etc.,
 depending on the context of your program.
@@ -30856,7 +30849,7 @@ Return a pointer to the last node in @code{list}.
 Print the values within each node of @code{*list} on the standard output in
 the same order that they are stored. Each floating point number is printed
 on one line. This function is mainly good for checking/debugging your
-program. For program outputs, its best to make your own implementation with
+program. For program outputs, it is best to make your own implementation with
 a better, more user-friendly format. For example, see the following code
 snippet. You can also modify it to print all values in one line, etc.,
 depending on the context of your program.
@@ -31009,9 +31002,9 @@ this function, @code{list} will point to the next node 
(which has a larger
 @deftypefun void gal_list_osizet_to_sizet_free (gal_list_osizet_t @code{*in}, 
gal_list_sizet_t @code{**out})
 Convert the ordered list of @code{size_t}s into an ordinary @code{size_t}
 linked list. This can be useful when all the elements have been added and
-you just need to pop-out elements and don't care about the sorting values
+you just need to pop-out elements and do not care about the sorting values
 any more. After the conversion is done, this function will free the input
-list. Note that the @code{out} list doesn't have to be empty. If it already
+list. Note that the @code{out} list does not have to be empty. If it already
 contains some nodes, the new nodes will be added on top of them.
 @end deftypefun
 
@@ -31196,11 +31189,11 @@ See the description of 
@code{gal_fits_file_recognized} for more (@ref{FITS macro
 @deftypefun gal_data_t gal_array_read (char @code{*filename}, char 
@code{*extension}, gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int 
@code{quietmmap})
 Read the array within the given extension (@code{extension}) of
 @code{filename}, or the @code{lines} list (see below). If the array is
-larger than @code{minmapsize} bytes, then it won't be read into RAM, but a
+larger than @code{minmapsize} bytes, then it will not be read into RAM, but a
 file on the HDD/SSD (no difference for the programmer). Messages about the
 memory-mapped file can be disabled with @code{quietmmap}.
 
-@code{extension} will be ignored for files that don't support them (for
+@code{extension} will be ignored for files that do not support them (for
 example JPEG or text). For FITS files, @code{extension} can be a number or
 a string (name of the extension), but for TIFF files, it has to be
 number. In both cases, counting starts from zero.
@@ -31208,7 +31201,7 @@ number. In both cases, counting starts from zero.
 For multi-channel formats (like RGB images in JPEG or TIFF), this function
 will return a @ref{List of gal_data_t}: one data structure per
 channel. Thus if you just want a single array (and want to check if the
-user hasn't given a multi-channel input), you can check the @code{next}
+user has not given a multi-channel input), you can check the @code{next}
 pointer of the returned @code{gal_data_t}.
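 
 For example, a minimal sketch of this check (the file name and HDU are
 hypothetical; a @code{minmapsize} of @code{-1} keeps everything in RAM):
 
 @example
 gal_data_t *in=gal_array_read("input.fits", "1", NULL, -1, 1);
 if(in->next!=NULL)
   printf("input.fits has more than one channel!\n");
 @end example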
 
 @code{lines} is a list of strings with each node representing one line
@@ -31261,7 +31254,7 @@ Currently only plain text (see @ref{Gnuastro text table 
format}) and FITS
 (ASCII and binary) tables are supported by Gnuastro. However, the low-level
 table infra-structure is written such that accommodating other formats is
 also possible and in future releases more formats will hopefully be
-supported. Please don't hesitate to suggest your favorite format so it can
+supported. Please do not hesitate to suggest your favorite format so it can
 be implemented when possible.
 
 @deffn  Macro GAL_TABLE_DEF_WIDTH_STR
@@ -31275,7 +31268,7 @@ be implemented when possible.
 @cindex @code{printf}
 The default width and precision for generic types to use in writing numeric
 types into a text file (plain text and FITS ASCII tables). When the dataset
-doesn't have any pre-set width and precision (see @code{disp_width} and
+does not have any pre-set width and precision (see @code{disp_width} and
 @code{disp_precision} in @ref{Generic data container}) these will be
 directly used in C's @code{printf} command to write the number as a string.
 @end deffn
@@ -31478,14 +31471,14 @@ memory, will require a special sequence of CFITSIO 
function calls which can
 be inconvenient and buggy to manage in separate locations. To ease this
 process, Gnuastro's library provides wrappers for CFITSIO functions. With
 these, it is much easier to read, write, or modify FITS file data, header
-keywords and extensions. Hence, if you feel these functions don't exactly
+keywords and extensions. Hence, if you feel these functions do not exactly
 do what you want, we strongly recommend reading the CFITSIO manual to use
 its great features directly (afterwards, send us your wrappers so we can
 include them here for others to benefit as well).
 
 All the functions and macros introduced in this section are declared in
 @file{gnuastro/fits.h}.  When you include this header, you are also
-including CFITSIO's @file{fitsio.h} header. So you don't need to explicitly
+including CFITSIO's @file{fitsio.h} header. So you do not need to explicitly
 include @file{fitsio.h} anymore and can freely use any of its macros or
 functions in your code along with those discussed here.
 
@@ -31528,20 +31521,20 @@ CFITSIO.
 
 @deftypefun int gal_fits_suffix_is_fits (char @code{*suffix})
 Similar to @code{gal_fits_name_is_fits}, but only for the suffix. The
-suffix doesn't have to start with `@key{.}': this function will return
+suffix does not have to start with `@key{.}': this function will return
 @code{1} (one) for both @code{fits} and @code{.fits}.
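 
 For example (a minimal sketch of its behavior):
 
 @example
 gal_fits_suffix_is_fits("fits");     /* Returns 1. */
 gal_fits_suffix_is_fits(".fits");    /* Returns 1. */
 gal_fits_suffix_is_fits("txt");      /* Returns 0. */
 @end example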
 @end deftypefun
 
 @deftypefun int gal_fits_file_recognized (char @code{*name})
 Return @code{1} if the given file name (possibly including its contents) is a 
FITS file.
-This is necessary in when the contents of a FITS file do follow the FITS 
standard, but it the file doesn't have a Gnuastro-recognized FITS suffix.
+This is necessary when the contents of a FITS file do follow the FITS 
standard, but the file does not have a Gnuastro-recognized FITS suffix.
 Therefore, it will first call @code{gal_fits_name_is_fits}; if the result is 
negative, this function will attempt to open the file with CFITSIO and, if 
that works, close it again and return 1.
 In the process, CFITSIO will just open the file; no reading will take place, 
so it should have a minimal CPU footprint.
 @end deftypefun
 
 @deftypefun {char *} gal_fits_name_save_as_string (char @code{*filename}, char 
@code{*hdu})
 If the name is a FITS name, then put a @code{(hdu: ...)} after it and
-return the string. If it isn't a FITS file, just print the name, if
+return the string. If it is not a FITS file, just print the name; if
 @code{filename==NULL}, then return the string @code{stdin}. Note that the
 output string's space is allocated.
 
@@ -31609,7 +31602,7 @@ Note that @code{fitsfile} is defined in CFITSIO's 
@code{fitsio.h} which is autom
 
 @deftypefun {fitsfile *} gal_fits_open_to_write (char @code{*filename})
 If @file{filename} exists, open it and return the @code{fitsfile} pointer that 
corresponds to it.
-If @file{filename} doesn't exist, the file will be created which contains a 
blank first extension and the pointer to its next extension will be returned.
+If @file{filename} does not exist, a new file containing a blank first 
extension will be created and the pointer to its next extension returned.
 @end deftypefun
 
 @deftypefun size_t gal_fits_hdu_num (char @code{*filename})
@@ -31636,7 +31629,7 @@ Return the format of the HDU as one of CFITSIO's 
recognized macros:
 @deftypefun int gal_fits_hdu_is_healpix (fitsfile @code{*fptr})
 @cindex HEALPix
 Return @code{1} if the dataset may be a HEALpix grid and @code{0} otherwise.
-Technically, it is considered to be a HEALPix if the HDU isn't an ASCII table, 
and has the @code{NSIDE}, @code{FIRSTPIX} and @code{LASTPIX}.
+Technically, it is considered to be a HEALPix if the HDU is not an ASCII 
table, and has the @code{NSIDE}, @code{FIRSTPIX} and @code{LASTPIX} keywords.
 @end deftypefun
 
 @deftypefun {fitsfile *} gal_fits_hdu_open (char @code{*filename}, char 
@code{*hdu}, int @code{iomode}, int @code{exitonerror})
@@ -31647,8 +31640,8 @@ The string in @code{hdu} will be appended to 
@file{filename} in square brackets
 You can use any formatting for the @code{hdu} that is acceptable to CFITSIO.
 See the description under @option{--hdu} in @ref{Input output options} for 
more.
 
-If @code{exitonerror!=0} and the given HDU can't be opened for any reason, the 
function will exit the program, and print an informative message.
-Otherwise, when the HDU can't be opened, it will just return a NULL pointer.
+If @code{exitonerror!=0} and the given HDU cannot be opened for any reason, 
the function will print an informative message and exit the program.
+Otherwise, when the HDU cannot be opened, it will just return a NULL pointer.
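 
 For example, a minimal sketch of gracefully handling a bad HDU (the file
 name is hypothetical; @code{READONLY} is CFITSIO's read-only I/O mode):
 
 @example
 fitsfile *fptr=gal_fits_hdu_open("input.fits", "1", READONLY, 0);
 if(fptr==NULL)
   printf("input.fits (hdu 1) could not be opened!\n");
 @end example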
 @end deftypefun
 
 @deftypefun {fitsfile *} gal_fits_hdu_open_format (char @code{*filename}, char 
@code{*hdu}, int @code{img0_tab1})
@@ -31687,7 +31680,7 @@ gal_data_t}. To write FITS keywords we define the
 @cindex last-in-first-out
 @cindex first-in-first-out
 Structure for writing FITS keywords. This structure is used for one keyword
-and you don't need to set all elements. With the @code{next} element, you
+and you do not need to set all elements. With the @code{next} element, you
 can link it to another keyword thus creating a linked list to add any
 number of keywords easily and at any step during your program (see
 @ref{Linked lists} for an introduction on lists). See the functions below
@@ -31744,7 +31737,7 @@ CFITSIO. When reading an array with CFITSIO, you can use
 
 @deftypefun void gal_fits_key_clean_str_value (char @code{*string})
 Remove the single quotes and possible extra spaces around the keyword values
-that CFITSIO returns when reading a string keyword. CFITSIO doesn't remove
+that CFITSIO returns when reading a string keyword. CFITSIO does not remove
 the two single quotes around the string value of a keyword. Hence the
 strings it reads are like: @code{'value '}, or
 @code{'some_very_long_value'}. To use the value during your processing, it
@@ -31756,7 +31749,7 @@ string.
 @deftypefun {char *} gal_fits_key_date_to_struct_tm (char @code{*fitsdate}, 
struct tm @code{*tp})
 @cindex Date: FITS format
 Parse @code{fitsdate} as a FITS date format string (most generally: 
@code{YYYY-MM-DDThh:mm:ss.ddd...}) into the C library's broken-down time 
structure, or @code{struct tm} (declared in @file{time.h}) and return a pointer 
to a newly allocated array for the sub-second part of the format 
(@code{.ddd...}).
-Therefore it needs to be freed afterwards (if it isn't @code{NULL})
+Therefore it needs to be freed afterwards (if it is not @code{NULL}).
 When there is no sub-second portion, this pointer will be @code{NULL}.
 
 This is a relatively low-level function; an easier one to use is 
@code{gal_fits_key_date_to_seconds}, which will return the sub-seconds as 
double precision floating point.
@@ -31771,18 +31764,18 @@ In this case (following the GNU C Library), this 
option will make the following
 @cindex Unix epoch time
 @cindex Epoch time, Unix
 Return the Unix epoch time (number of seconds that have passed since 00:00:00 
Thursday, January 1st, 1970) corresponding to the FITS date format string 
@code{fitsdate} (see description of @code{gal_fits_key_date_to_struct_tm} 
above).
-This function will return @code{GAL_BLANK_SIZE_T} if the broken-down time 
couldn't be converted to seconds.
+This function will return @code{GAL_BLANK_SIZE_T} if the broken-down time 
could not be converted to seconds.
 
 The Unix epoch time is in units of seconds, but the FITS date format allows 
sub-second accuracy.
 The last two arguments are for the optional sub-second portion.
-If you don't want sub-second information, just set the second argument to 
@code{NULL}.
+If you do not want sub-second information, just set the second argument to 
@code{NULL}.
 
 If @code{fitsdate} contains sub-second accuracy and @code{subsecstr!=NULL}, 
then the starting of the sub-second part's string is stored in @code{subsecstr} 
(malloc'ed), and @code{subsec} will be the corresponding numerical value 
(between 0 and 1, in double precision floating point).
 So to avoid leaking memory, if a sub-second string is requested, it must be 
freed after calling this function.
-When a sub-second string doesn't exist (and it is requested), then a value of 
@code{NULL} and NaN will be written in @code{*subsecstr} and @code{*subsec} 
respectively.
+When a sub-second string does not exist (and it is requested), then a value of 
@code{NULL} and NaN will be written in @code{*subsecstr} and @code{*subsec} 
respectively.
 
 This is a very useful function for operations on the FITS date values, for 
example sorting FITS files by their dates, or finding the time difference 
between two FITS files.
-The advantage of working with the Unix epoch time is that you don't have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
+The advantage of working with the Unix epoch time is that you do not have to 
worry about calendar details (for example the number of days in different 
months, or leap years, etc).
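 
 For example, a minimal sketch with an arbitrary date string (following
 the argument order described above):
 
 @example
 double subsec;
 char *subsecstr=NULL;
 size_t sec=gal_fits_key_date_to_seconds("2015-10-20T01:02:03.456",
                                         &subsecstr, &subsec);
 if(sec!=GAL_BLANK_SIZE_T)
   printf("Unix epoch time: %zu (sub-second: %g)\n", sec, subsec);
 if(subsecstr) free(subsecstr);
 @end example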
 @end deftypefun
 
 @deftypefun void gal_fits_key_read_from_ptr (fitsfile @code{*fptr}, gal_data_t 
@code{*keysll}, int @code{readcomment}, int @code{readunit})
@@ -31815,7 +31808,7 @@ gal_fits_key_read_from_ptr(fptr, keysll, 0, 0);
 
 /* Use the values as you like... */
 
-/* Free all the allocated spaces. Note that `name' wasn't allocated
+/* Free all the allocated spaces. Note that `name' was not allocated
    in this example, so we should explicitly set it to NULL before
    calling `gal_data_array_free'. */
 for(i=0;i<N;++i) keysll[i].name=NULL;
@@ -31829,7 +31822,7 @@ Strings need special consideration: the reason is that 
generally, @code{gal_data
 Hence when reading a string value, two allocations may be done by this 
function (one if @code{array!=NULL}).
 
 Therefore, when using the values of strings after this function, 
@code{keysll[i].array} must be interpreted as @code{char **}: one allocation 
for the pointer, one for the actual characters.
-If you use something like the example, above you don't have to worry about the 
freeing, @code{gal_data_array_free} will free both allocations.
+If you use something like the example above, you do not have to worry about 
the freeing: @code{gal_data_array_free} will free both allocations.
 So to read a string, one easy way would be the following:
 
 @example
@@ -31842,7 +31835,7 @@ If CFITSIO is unable to read a keyword for any reason 
the @code{status} element
 If it is zero, then the keyword was found and successfully read.
 Otherwise, it is a CFITSIO status value.
 You can use CFITSIO's error reporting tools or @code{gal_fits_io_error} (see 
@ref{FITS macros errors filenames}) for reporting the reason of the failure.
-A tip: when the keyword doesn't exist, CFITSIO's status value will be 
@code{KEY_NO_EXIST}.
+A tip: when the keyword does not exist, CFITSIO's status value will be 
@code{KEY_NO_EXIST}.
 
 CFITSIO will start searching for the keywords from the last place in the 
header that it searched for a keyword.
 So it is much more efficient if the order that you ask for keywords is based 
on the order they are stored in the header.
@@ -31903,7 +31896,7 @@ Reverse the input list of keywords.
 Add two lines of ``title'' keywords to the given CFITSIO @code{fptr} pointer.
 The first line will be blank and the second will have the string in 
@code{title} roughly in the middle of the line (a fixed distance from the start 
of the keyword line).
 A title in the list of keywords helps in classifying the keywords into groups 
and inspecting them by eye.
-If @code{title==NULL}, this function won't do anything.
+If @code{title==NULL}, this function will not do anything.
 @end deftypefun
 
 @deftypefun void gal_fits_key_write_filename (char @code{*keynamebase}, char 
@code{*filename}, gal_fits_list_key_t @code{**list}, int @code{top1end0}, int 
@code{quiet})
@@ -31917,7 +31910,7 @@ This creates problems with file names (which include 
directories) because file n
 Therefore, when @code{filename} is longer than the maximum length of a FITS 
keyword value, this function will break it into several keywords (breaking up 
the string on directory separators).
 So the full file/directory address (including directories) can be longer than 
69 characters.
 However, if a single file or directory name (within a larger address) is 
longer than 69 characters, this function will truncate the name and print a 
warning.
-If @code{quiet!=0}, then the warning won't be printed.
+If @code{quiet!=0}, then the warning will not be printed.
 
 @end deftypefun
 
@@ -32034,7 +32027,7 @@ along each dimension as an allocated array with 
@code{*ndim} elements.
 Read the contents of the @code{hdu} extension/HDU of @code{filename} into a
 Gnuastro generic data container (see @ref{Generic data container}) and
 return it. If the necessary space is larger than @code{minmapsize}, then
-don't keep the data in RAM, but in a file on the HDD/SSD. For more on
+do not keep the data in RAM, but in a file on the HDD/SSD. For more on
 @code{minmapsize} and @code{quietmmap} see the description under the same
 name in @ref{Generic data container}.
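 
 For example, a minimal sketch (the file name and HDU are hypothetical; a
 @code{minmapsize} of @code{-1} keeps everything in RAM):
 
 @example
 gal_data_t *image=gal_fits_img_read("input.fits", "1", -1, 1);
 /* ... use 'image->array' based on 'image->type' ... */
 gal_data_free(image);
 @end example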
 
@@ -32069,7 +32062,7 @@ convolution kernel must have an odd size along all 
dimensions, it must not
 have blank (NaN in floating point types) values and must be flipped around
 the center to make the proper convolution (see @ref{Convolution
 process}). If there are blank values, this function will change the blank
-values to @code{0.0}. If the input image doesn't have the other two
+values to @code{0.0}. If the input image does not have the other two
 requirements, this function will abort with an error describing the
 condition to the user. The returned dataset will have a
 @code{float32} type.
@@ -32078,7 +32071,7 @@ condition to the user. The finally returned dataset 
will have a
 @deftypefun {fitsfile *} gal_fits_img_write_to_ptr (gal_data_t @code{*input}, 
char @code{*filename})
 Write the @code{input} dataset into a FITS file named @file{filename} and
 return the corresponding CFITSIO @code{fitsfile} pointer. This function
-won't close @code{fitsfile}, so you can still add other extensions to it
+will not close @code{fitsfile}, so you can still add other extensions to it
 after this function or make other modifications.
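 
 For example, a minimal sketch of adding more content before closing the
 file yourself with CFITSIO:
 
 @example
 int status=0;
 fitsfile *fptr=gal_fits_img_write_to_ptr(input, "out.fits");
 /* ... add other extensions or keywords with CFITSIO ... */
 fits_close_file(fptr, &status);
 @end example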
 @end deftypefun
 
@@ -32116,7 +32109,7 @@ values.
 
 @item
 WCSLIB's header writing function is not thread safe. So when writing FITS
-images in parallel, we can't write the header keywords in each thread.
+images in parallel, we cannot write the header keywords in each thread.
 @end itemize
 @end deftypefun
 
@@ -32171,7 +32164,7 @@ However, this only happens if CFITSIO was configured 
with @option{--enable-reent
 This test has been done at Gnuastro's configuration time; if so, 
@code{GAL_CONFIG_HAVE_FITS_IS_REENTRANT} will have a value of 1, otherwise, it 
will have a value of 0.
 For more on this macro, see @ref{Configuration information}).
 
-If the necessary space for each column is larger than @code{minmapsize}, don't 
keep it in the RAM, but in a file in the HDD/SSD.
+If the necessary space for each column is larger than @code{minmapsize}, do 
not keep it in the RAM, but in a file on the HDD/SSD.
 For more on @code{minmapsize} and @code{quietmmap}, see the description under 
the same name in @ref{Generic data container}.
 
 Each column will have @code{numrows} rows and @code{colinfo} contains any 
further information about the columns (returned by @code{gal_fits_tab_info}, 
described above).
@@ -32274,7 +32267,7 @@ To be generic, it is recommended to use 
@code{gal_table_info} which will allow g
 
 @deftypefun {gal_data_t *} gal_txt_table_read (char @code{*filename}, 
gal_list_str_t @code{*lines}, size_t @code{numrows}, gal_data_t 
@code{*colinfo}, gal_list_sizet_t @code{*indexll}, size_t @code{minmapsize}, 
int @code{quietmmap})
 Read the columns given in the list @code{indexll} from a plain text file 
(@code{filename}) or list of strings (@code{lines}), into a linked list of data 
structures (see @ref{List of size_t} and @ref{List of gal_data_t}).
-If the necessary space for each column is larger than @code{minmapsize}, don't 
keep it in the RAM, but in a file on the HDD/SSD.
+If the necessary space for each column is larger than @code{minmapsize}, do 
not keep it in the RAM, but in a file on the HDD/SSD.
 For more on @code{minmapsize} and @code{quietmmap}, see the description under 
the same name in @ref{Generic data container}.
 
 @code{lines} is a list of strings with each node representing one line 
(including the new-line character), see @ref{List of strings}.
@@ -32287,7 +32280,7 @@ It is recommended to use @code{gal_table_read} for 
generic reading of tables in
 
 @deftypefun {gal_data_t *} gal_txt_image_read (char @code{*filename}, 
gal_list_str_t @code{*lines}, size_t @code{minmapsize}, int @code{quietmmap})
 Read the 2D plain text dataset in file (@code{filename}) or list of strings 
(@code{lines}) into a dataset and return the dataset.
-If the necessary space for the image is larger than @code{minmapsize}, don't 
keep it in the RAM, but in a file on the HDD/SSD.
+If the necessary space for the image is larger than @code{minmapsize}, do not 
keep it in the RAM, but in a file on the HDD/SSD.
 For more on @code{minmapsize} and @code{quietmmap}, see the description under 
the same name in @ref{Generic data container}.
 
 @code{lines} is a list of strings with each node representing one line 
(including the new-line character), see @ref{List of strings}.
@@ -32433,7 +32426,7 @@ The display width of the JPEG file in units of 
centimeters (to suggest to viewer
 
 The Encapsulated PostScript (EPS) format is commonly used to store images (or 
individual/single-page parts of a document) in the PostScript documents.
 For a more complete introduction, please see @ref{Recognized file formats}.
-To provide high quality graphics, the Postscript language is a vectorized 
format, therefore pixels (elements of a ``rasterized'' format) aren't defined 
in their context.
+To provide high quality graphics, the PostScript language is a vectorized 
format; therefore pixels (elements of a ``rasterized'' format) are not defined 
in this context.
 
 To display rasterized images, PostScript does allow arrays of pixels.
 However, since the over-all EPS file may contain many vectorized elements (for 
example borders, text, or other lines over the text) and interpreting them is 
not trivial or necessary within Gnuastro's scope, Gnuastro only provides some 
functions to write a dataset (in the @code{gal_data_t} format, see @ref{Generic 
data container}) into EPS.
@@ -32462,7 +32455,7 @@ Name of column that the required property will be read 
from.
 @deffnx Macro GAL_EPS_MARK_DEFAULT_FONT
 @deffnx Macro GAL_EPS_MARK_DEFAULT_FONTSIZE
 Default values for the various mark properties.
-These constants will be used if the caller hasn't provided any of the given 
property.
+These constants will be used if the caller has not provided the respective 
property.
 @end deffn
 
 @deftypefun {int} gal_eps_name_is_eps (char @code{*name})
@@ -32533,7 +32526,7 @@ Therefore, the order of columns in the list is 
irrelevant!
 The portable document format (PDF) has arguably become the most common format 
used for distribution of documents.
 In practice, a PDF file is just a compiled PostScript file.
 For a more complete introduction, please see @ref{Recognized file formats}.
-To provide high quality graphics, the PDF is a vectorized format, therefore 
pixels (elements of a ``rasterized'' format) aren't defined in their context.
+To provide high quality graphics, the PDF is a vectorized format; therefore 
pixels (elements of a ``rasterized'' format) are not defined in this context.
 As a result, similar to @ref{EPS files}, Gnuastro only writes datasets to a 
PDF file, not vice-versa.
 
 @deftypefun {int} gal_pdf_name_is_pdf (char @code{*name})
@@ -32556,7 +32549,7 @@ A border can helpful when importing the PDF file into a 
document.
 
 This function is just a wrapper for the @code{gal_eps_write} function in 
@ref{EPS files}.
 After making the EPS file, Ghostscript (with a version of 9.10 or above, see 
@ref{Optional dependencies}) will be used to compile the EPS file to a PDF file.
-Therefore if GhostScript doesn't exist, doesn't have the proper version, or 
fails for any other reason, the EPS file will remain.
+Therefore if Ghostscript does not exist, does not have the proper version, or 
fails for any other reason, the EPS file will remain.
 It can then be used to find the cause, or to try another converter or 
PostScript compiler.
 
 @cindex PDF
@@ -32618,7 +32611,7 @@ More information is given in the documentation of 
@code{dis.h}, from the WCSLIB
 @cindex Coordinate system: Equatorial
 @cindex Coordinate system: Supergalactic
 Recognized WCS coordinate systems in Gnuastro.
-@code{EQ} and @code{EC} stand for the Equatorial and Ecliptic coordinate 
systems.
+@code{EQ} and @code{EC} stand for the EQuatorial and ECliptic coordinate 
systems.
 In the equatorial and ecliptic coordinates, @code{B1950} stands for the 
Besselian 1950 epoch and @code{J2000} stands for the Julian 2000 epoch.
 @end deffn
 
@@ -32646,14 +32639,14 @@ status = wcsvfree(&nwcs,&wcs);
 
 The @code{linearmatrix} argument takes one of three values: @code{0}, 
@code{GAL_WCS_LINEAR_MATRIX_PC} and @code{GAL_WCS_LINEAR_MATRIX_CD}.
 It will determine the format of the WCS when it is later written to file with 
@code{gal_wcs_write} or @code{gal_wcs_write_in_fitsptr} (which is called by 
@code{gal_fits_img_write}).
-So if you don't want to write the WCS into a file later, just give it a value 
of @code{0}.
+So if you do not want to write the WCS into a file later, just give it a value 
of @code{0}.
 For more on the difference between these modes, see the description of 
@option{--wcslinearmatrix} in @ref{Input output options}.
 
-If you don't want to search the full FITS header for WCS-related FITS keywords 
(for example due to conflicting keywords), but only a specific range of the 
header keywords you can use the @code{hstartwcs} and @code{hendwcs} arguments 
to specify the keyword number range (counting from zero).
+If you do not want to search the full FITS header for WCS-related FITS 
keywords (for example due to conflicting keywords), but only a specific range 
of the header keywords, you can use the @code{hstartwcs} and @code{hendwcs} 
arguments to specify the keyword number range (counting from zero).
 If @code{hendwcs} is larger than @code{hstartwcs}, then only keywords in the 
given range will be checked.
 Hence, to ignore this feature (and search the full FITS header), give both 
these arguments the same value.
 
-If the WCS information couldn't be read from the FITS file, this function will 
return a @code{NULL} pointer and put a zero in @code{nwcs}.
+If the WCS information could not be read from the FITS file, this function 
will return a @code{NULL} pointer and put a zero in @code{nwcs}.
 A WCSLIB error message will also be printed in @code{stderr} if there was an 
error.
 
 This function is just a wrapper over WCSLIB's @code{wcspih} function which is 
not thread-safe.
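 
 For example, a minimal sketch of a call that searches the full header
 (both range arguments are zero); the function name and argument order
 here follow @code{gal_wcs_read} as declared in @file{gnuastro/wcs.h} and
 may differ between versions:
 
 @example
 int nwcs;
 struct wcsprm *wcs=gal_wcs_read("input.fits", "1", 0, 0, 0, &nwcs);
 if(wcs==NULL)
   printf("input.fits (hdu 1): no WCS could be read!\n");
 @end example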
@@ -32713,10 +32706,10 @@ Remove the given FITS dimension from the given 
@code{wcs} structure.
 Create a WCSLIB @code{wcsprm} structure for @code{tile} using WCS
 parameters of the tile's allocated block dataset, see @ref{Tessellation
 library} for the definition of tiles. If @code{tile} already has a WCS
-structure, this function won't do anything.
+structure, this function will not do anything.
 
 In many cases, tiles are created for internal/low-level processing. Hence
-for performance reasons, when creating the tiles they don't have any WCS
+for performance reasons, when creating the tiles they do not have any WCS
 structure. When needed, this function can be used to add a WCS structure to
 each tile by copying the WCS structure of its block and correcting the
 reference point's coordinates within the tile.
@@ -32773,7 +32766,7 @@ Read the given WCS structure and return its coordinate 
system as one of Gnuastro
 @deftypefun {struct wcsprm *} gal_wcs_coordsys_convert (struct wcsprm 
@code{*inwcs}, int @code{coordsysid})
 Return a newly allocated WCS structure with the @code{coordsysid} coordinate 
system identifier.
 The Gnuastro WCS distortion identifiers are defined in the 
@code{GAL_WCS_COORDSYS_*} macros mentioned above.
-Since the returned dataset is newly allocated, if you don't need the original 
dataset after this, use the WCSLIB library function @code{wcsfree} to free the 
input, for example @code{wcsfree(inwcs)}.
+Since the returned structure is newly allocated, if you do not need the 
original after this, use the WCSLIB library function @code{wcsfree} to free 
the input, for example @code{wcsfree(inwcs)}.
 @end deftypefun
 
 @deftypefun int gal_wcs_distortion_from_string (char @code{*distortion})
@@ -32791,7 +32784,7 @@ The sting-based distortion identifiers have three 
characters and are all in capi
 @deftypefun {int} gal_wcs_distortion_identify (struct wcsprm @code{*wcs})
 Returns the Gnuastro identifier for the distortion of the input WCS structure.
 The returned value is one of the @code{GAL_WCS_DISTORTION_*} macros defined 
above.
-When the input pointer to a structure is @code{NULL}, or it doesn't contain a 
distortion, the returned value will be @code{GAL_WCS_DISTORTION_INVALID}.
+When the input pointer to a structure is @code{NULL}, or it does not contain a 
distortion, the returned value will be @code{GAL_WCS_DISTORTION_INVALID}.
 @end deftypefun
 
 @cindex SIP WCS distortion
@@ -32804,7 +32797,7 @@ The available conversions in this function will grow.
 Currently it only supports converting TPV to SIP and vice versa, following the 
recipe of Shupe et al. (2012)@footnote{Proc. of SPIE Vol. 8451  84511M-1. 
@url{https://doi.org/10.1117/12.925460}, also available at 
@url{http://web.ipac.caltech.edu/staff/shupe/reprints/SIP_to_PV_SPIE2012.pdf}.}.
 Please get in touch with us if you need other types of conversions.
 
-For some conversions, direct analytical conversions don't exist.
+For some conversions, direct analytical conversions do not exist.
 It is thus necessary to model and fit the two types.
 In such cases, it is also necessary to specify the @code{fitsize} array that 
is the size of the array along each C-ordered dimension, so you can simply pass 
the @code{dsize} element of your @code{gal_data_t} dataset, see @ref{Generic 
data container}.
 Currently this is only necessary when converting TPV to SIP.
@@ -32833,7 +32826,7 @@ points is calculated with the equation below.
 @dispmath {\cos(d)=\sin(d_1)\sin(d_2)+\cos(d_1)\cos(d_2)\cos(r_1-r_2)}
 
 However, since the pixel scales are usually very small numbers, this
-function won't use that direct formula. It will be use the
+function will not use that direct formula. It will instead use the
 @url{https://en.wikipedia.org/wiki/Haversine_formula, Haversine formula}
 which is better considering floating point errors:
 
@@ -32843,7 +32836,7 @@ which is better considering floating point errors:
 @deftypefun {double *} gal_wcs_pixel_scale (struct wcsprm @code{*wcs})
 Return the pixel scale for each dimension of @code{wcs} in degrees. The
 output is an allocated array of double precision floating point type with
-one element for each dimension. If its not successful, this function will
+one element for each dimension. If it is not successful, this function will
 return @code{NULL}.
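 
 For example, a minimal sketch (assuming @code{wcs} was already read, and
 using the @code{naxis} element of WCSLIB's @code{wcsprm} for the number
 of dimensions):
 
 @example
 int i;
 double *pixscale=gal_wcs_pixel_scale(wcs);
 if(pixscale)
   @{
     for(i=0; i<wcs->naxis; ++i)
       printf("Dimension %d: %g deg/pixel\n", i+1, pixscale[i]);
     free(pixscale);
   @}
 @end example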
 @end deftypefun
 
@@ -32932,7 +32925,7 @@ Use the pre-defined environment variable for setting 
the random number generator
 For more on random number generation in Gnuastro see @ref{Generating random 
numbers}.
 
 @item GAL_ARITHMETIC_FLAG_QUIET
-Don't print any warnings or messages for operators that may benefit from it.
+Do not print any warnings or messages for operators that may benefit from it.
 For example by default the @code{mknoise-sigma} operator prints the random 
number generator function and seed that it used (in case the user wants to 
reproduce this result later).
 By activating this bit flag to the call, that extra information is not printed 
on the command-line.
 
@@ -32983,7 +32976,7 @@ When @code{gal_arithmetic} is called with this 
operator, it will only expect thr
 The second is the condition dataset (that must have a @code{uint8_t} or 
@code{unsigned char} type), and the third is the value to be used if condition 
is non-zero.
 
 As a result, note that the order of operands when calling 
@code{gal_arithmetic} with @code{GAL_ARITHMETIC_OP_WHERE} is the opposite of 
running Gnuastro's Arithmetic program with the @code{where} operator (see 
@ref{Arithmetic}).
-This is because the latter uses the reverse-Polish notation which isn't 
necessary when calling a function (see @ref{Reverse polish notation}).
+This is because the latter uses the reverse-Polish notation which is not 
necessary when calling a function (see @ref{Reverse polish notation}).
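 
 For example, a minimal sketch of the operand order described above (the
 operands are assumed to be already-allocated datasets; the second and
 third arguments, number of threads and flags, are only illustrative):
 
 @example
 out=gal_arithmetic(GAL_ARITHMETIC_OP_WHERE, 1, flags,
                    input, condition, iftrue);
 @end example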
 @end deffn
 
 @deffn  Macro GAL_ARITHMETIC_OP_SQRT
@@ -32991,7 +32984,7 @@ This is because the latter uses the reverse-Polish 
notation which isn't necessar
 @deffnx Macro GAL_ARITHMETIC_OP_LOG10
 Unary operator functions for calculating the square root (@mymath{\sqrt{i}}), 
@mymath{ln(i)} and @mymath{log(i)} mathematical operators on each element of 
the input dataset.
 The returned dataset will have a floating point type, but its precision is 
determined from the input: if the input is a 64-bit floating point, the output 
will also be 64-bit.
-Otherwise, the returned dataset will be 32-bit floating point: you don't gain 
precision by using these operators, but you gain in operating speed if you use 
the sufficient precision.
+Otherwise, the returned dataset will be 32-bit floating point: you do not gain 
precision by using these operators, but you do gain in operating speed when 
32-bit precision is sufficient for your purpose.
 See @ref{Numeric data types} for more on the precision of floating point 
numbers to help in selecting your required floating point precision.
 
 If you want your output to be 64-bit floating point but your input is a 
different type, you can convert the input to a 64-bit floating point type with 
@code{gal_data_copy_to_new_type} or @code{gal_data_copy_to_new_type_free}(see 
@ref{Copying datasets}).
@@ -33277,7 +33270,7 @@ main(void)
   /* Clean up. Due to the in-place flag (in
    * 'GAL_ARITHMETIC_FLAGS_BASIC'), 'out1' and 'out2' point to the
    * same array in memory and due to the freeing flag, any input
-   * dataset(s) that weren't returned have been freed internally by
+   * dataset(s) that were not returned have been freed internally by
    * 'gal_arithmetic'. Therefore it is only necessary to free
   * 'out2': all other allocated spaces have been freed internally
   * before reaching this point. */
@@ -33289,14 +33282,14 @@ main(void)
 @end example
 
 As you see above, you can feed the returned dataset from one call of 
@code{gal_arithmetic} to another call.
-The advantage of using @code{gal_arithmetic} (as opposed to manually writing a 
@code{for} or @code{while} loop and doing the operation with the @code{+} 
operator and @code{log()} function yourself), is that you don't have to worry 
about the type of the input data (for a list of acceptable data types in 
Gnuastro, see @ref{Library data types}).
+The advantage of using @code{gal_arithmetic} (as opposed to manually writing a 
@code{for} or @code{while} loop and doing the operation with the @code{+} 
operator and @code{log()} function yourself), is that you do not have to worry 
about the type of the input data (for a list of acceptable data types in 
Gnuastro, see @ref{Library data types}).
 Arithmetic will automatically deal with the data types internally and choose 
the best output type depending on the operator.
 @end deftypefun
 
 @deftypefun int gal_arithmetic_set_operator (char @code{*string}, size_t 
@code{*num_operands})
 Return the operator macro/code that corresponds to @code{string}.
 The number of operands that it needs are written into the space that 
@code{*num_operands} points to.
-If the string couldn't be interpreted as an operator, this function will 
return @code{GAL_ARITHMETIC_OP_INVALID}.
+If the string could not be interpreted as an operator, this function will 
return @code{GAL_ARITHMETIC_OP_INVALID}.
 
 This function will check @code{string} with the fixed human-readable names 
(using @code{strcmp}) for the operators and return the two numbers.
 Note that @code{string} must only contain the single operator name and nothing 
else (not even any extra white space).
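 
 For example, a minimal sketch:
 
 @example
 size_t num_operands;
 int op=gal_arithmetic_set_operator("mknoise-sigma", &num_operands);
 if(op==GAL_ARITHMETIC_OP_INVALID)
   printf("Not a recognized operator name!\n");
 @end example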
@@ -33321,7 +33314,7 @@ See @ref{Table input output} for the macros that can be 
given to @code{searchin}
 In many contexts, it is desirable to slice the dataset into subsets or
 tiles (overlapping or not). In such a way that you can work on each tile
 independently. One method would be to copy that region to a separate
-allocated space, but in many contexts this isn't necessary and in fact can
+allocated space, but in many contexts this is not necessary and in fact can
 be a big burden on CPU/Memory usage. The @code{block} pointer in Gnuastro's
 @ref{Generic data container} is defined for such situations: where
 allocation is not necessary. You just want to read the data or write to it
@@ -33372,7 +33365,7 @@ Since the block structure is defined as a pointer, 
arbitrary levels of
 tessellation/grid-ing are possible (@code{tile->block} may itself be a tile
 in an even larger allocated space). Therefore, just like a linked-list (see
 @ref{Linked lists}), it is important to have the @code{block} pointer of
-the largest (allocated) dataset set to @code{NULL}. Normally, you won't
+the largest (allocated) dataset set to @code{NULL}. Normally, you will not
 have to worry about this, because @code{gal_data_initialize} (and thus
 @code{gal_data_alloc}) will set the @code{block} element to @code{NULL} by
 default, just remember not to change it. You can then only change the
@@ -33399,7 +33392,7 @@ The most general application of tiles is to treat each 
independently, for
 example they may overlap, or they may not cover the full image. This
 section provides functions to help in checking/inspecting such tiles. In
 @ref{Tile grid} we will discuss functions that define/work-with a tile grid
-(where the tiles don't overlap and fully cover the input
+(where the tiles do not overlap and fully cover the input
 dataset). Therefore, the functions in this section are general and can be
 used for the tiles produced by that section also.
 
@@ -33417,11 +33410,11 @@ Put the starting and ending (end point is not 
inclusive) coordinates of
 
 @code{rel_block} (or relative-to-block) is only relevant when @code{tile}
 has an intermediate tile between it and the allocated space (like a
-channel, see @code{gal_tile_full_two_layers}). If it doesn't
+channel, see @code{gal_tile_full_two_layers}). If it does not
 (@code{tile->block} points to the allocated dataset), then the value to
 @code{rel_block} is irrelevant.
 
-When @code{tile->block} is its self a larger block and @code{rel_block} is
+When @code{tile->block} is itself a larger block and @code{rel_block} is
 set to 0, then the starting and ending positions will be based on the
 position within @code{tile->block}, not the allocated space.
 @end deftypefun
@@ -33431,7 +33424,7 @@ Put the indices of the first/start and last/end pixels 
(inclusive) in a tile
 into the @code{start_end} array (that must have two elements). NOTE: this
 function stores the index of each point, not its coordinates. It will then
 return the pointer to the start of the tile in the @code{work} data
-structure (which doesn't have to be equal to @code{tile->block}.
+structure (which does not have to be equal to @code{tile->block}).
 
 The outputs of this function are defined to make it easy to parse over an
 n-dimensional tile. For example, this function is one of the most important
@@ -33498,7 +33491,7 @@ number of dimensions is irrelevant, it will be parsed 
contiguously.
 The list of input tiles (see @ref{List of gal_data_t}). Internally, it
 might be stored as an array (for example the output of
 @code{gal_tile_series_from_minmax} described above), but this function
-doesn't care, it will parse the @code{next} elements to go to the next
+does not care; it will parse the @code{next} elements to go to the next
 tile. This function will not pop-from or free the @code{tilesll}, it will
 only parse it from start to end.
 
@@ -33509,7 +33502,7 @@ initialized with blank values when this option is 
called (see below).
 
 @item initialize
 Initialize the allocated space with blank values before writing in the
-constant values. This can be useful when the tiles don't cover the full
+constant values. This can be useful when the tiles do not cover the full
 allocated block.
 @end table
 @end deftypefun
@@ -33575,7 +33568,7 @@ only its first element will be used and the size of 
this dataset is
 irrelevant.
 
 When @code{OTHER} is a block of memory, it has to have the same size as the
-allocated block of @code{IN}. When its a tile, it has to have the same size
+allocated block of @code{IN}. When it is a tile, it has to have the same size
 as @code{IN}.
 
 @item PARSE_OTHER (int)
@@ -33606,10 +33599,10 @@ any other variable you define before this macro to 
process the input and/or
 the other.
 
 All variables within this function-like macro begin with @code{tpo_} except
-for the three variables listed below. Therefore, as long as you don't start
+for the three variables listed below. Therefore, as long as you do not start
 the names of your variables with this prefix everything will be fine. Note
 that @code{i} (and possibly @code{o}) will be incremented once by this
-function-like macro, so don't increment them within @code{OP}.
+function-like macro, so do not increment them within @code{OP}.
 
 @table @code
 @item i
@@ -33754,7 +33747,7 @@ So it must have the same number of elements as the 
dimensions of @code{input} (o
 
 @item remainderfrac
 The significant fraction of the remainder space to see if it should be split 
into two and put on both sides of a dimension or not.
-This is thus only relevant @code{input} length along a dimension isn't an 
exact multiple of the regular tile size along that dimension.
+This is thus only relevant when the @code{input} length along a dimension is 
not an exact multiple of the regular tile size along that dimension.
 See @ref{Tessellation} for a more thorough discussion.
 
 @item out
@@ -33811,7 +33804,7 @@ gal_tile_full_two_layers(input, &tl);
 @deftypefun void gal_tile_full_permutation (struct gal_tile_two_layer_params 
@code{*tl})
 Make a permutation to allow the conversion of tile location in memory to its 
location in the full input dataset and put it in @code{tl->permutation}.
 If a permutation has already been defined for the tessellation, this function 
will not do anything.
-If permutation won't be necessary (there is only one channel or one 
dimension), then this function will not do anything (@code{tl->permutation} 
must have been initialized to @code{NULL}).
+If permutation will not be necessary (there is only one channel or one 
dimension), then this function will not do anything (@code{tl->permutation} 
must have been initialized to @code{NULL}).
 
 When there is only one channel OR one dimension, the tiles are allocated in 
memory in the same order that they represent the input data.
 However, to make channel-independent processing possible in a generic way, the 
tiles of each channel are allocated contiguously.
@@ -33940,10 +33933,10 @@ Modify the input first and last pixels (@code{fpixel} 
and @code{lpixel}, that yo
 
 @deftypefun int gal_box_overlap (long @code{*naxes}, long @code{*fpixel_i}, 
long @code{*lpixel_i}, long @code{*fpixel_o}, long @code{*lpixel_o}, size_t 
@code{ndim})
 An @code{ndim}-dimensional dataset of size @code{naxes} (along each dimension, 
in FITS order) and a box with first and last (inclusive) coordinate of 
@code{fpixel_i} and @code{lpixel_i} is given.
-This box doesn't necessarily have to lie within the dataset, it can be outside 
of it, or only partially overlap.
+This box does not necessarily have to lie within the dataset, it can be 
outside of it, or only partially overlap.
 This function will change the values of @code{fpixel_i} and @code{lpixel_i} to 
exactly cover the overlap in the input dataset's coordinates.
 
-This function will return 1 if there is an overlap and 0 if there isn't.
+This function will return 1 if there is an overlap and 0 if there is not.
 When there is an overlap, the coordinates of the first and last pixels of the 
overlap will be put in @code{fpixel_o} and @code{lpixel_o}.
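 
 For example, a minimal 2D sketch with a box that only partially overlaps
 the dataset (coordinates are in FITS order):
 
 @example
 long naxes[2]=@{100, 100@};
 long fpixel_i[2]=@{-10, 90@}, lpixel_i[2]=@{20, 120@};
 long fpixel_o[2], lpixel_o[2];
 if( gal_box_overlap(naxes, fpixel_i, lpixel_i,
                     fpixel_o, lpixel_o, 2) )
   printf("Overlap in dataset: (%ld,%ld) to (%ld,%ld)\n",
          fpixel_i[0], fpixel_i[1], lpixel_i[0], lpixel_i[1]);
 @end example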
 @end deftypefun
 
@@ -33991,7 +33984,7 @@ For example we will take @code{A} as the maximum of 
@code{A} and @code{B} when @
 @end deffn
 
 @deftypefun void gal_polygon_vertices_sort_convex (double @code{*in}, size_t 
@code{n}, size_t @code{*ordinds})
-We have a simple polygon (that can result from projection, so its edges don't 
collide or it doesn't have holes) and we want to order its corners in an 
anticlockwise fashion.
+We have a simple polygon (that can result from projection, so its edges do 
not cross each other and it has no holes) and we want to order its corners in 
an anticlockwise fashion.
 This is necessary for clipping it and finding its area later.
 The input vertices can have practically any order.
 
@@ -34017,11 +34010,11 @@ Such that calling the input coordinates in the 
following fashion will give an an
 @cindex Convex Hull
 @noindent
 The implementation of this is very similar to the Graham scan in finding the 
Convex Hull.
-However, in projection we will never have a concave polygon (the left 
condition below, where this algorithm will get to E before D), we will always 
have a convex polygon (right case) or E won't exist!
-This is because we are always going to be calculating the area of the overlap 
between a quadrilateral and the pixel grid or the quadrilateral its self.
+However, in projection we will never have a concave polygon (the left 
condition below, where this algorithm will get to E before D); we will always 
have a convex polygon (right case) or E will not exist!
+This is because we are always going to be calculating the area of the overlap 
between a quadrilateral and the pixel grid or the quadrilateral itself.
 
 The @code{GAL_POLYGON_MAX_CORNERS} macro is defined so there will be no need 
to allocate these temporary arrays separately.
-Since we are dealing with pixels, the polygon can't really have too many 
vertices.
+Since we are dealing with pixels, the polygon cannot really have too many 
vertices.
 @end deftypefun
 
 @deftypefun int gal_polygon_is_convex (double @code{*v}, size_t @code{n})
@@ -34062,7 +34055,7 @@ Note that the polygon vertices have to be sorted before 
calling this function.
 
 @deftypefun int gal_polygon_to_counterclockwise (double @code{*v}, size_t 
@code{n})
 Arrange the vertices of the sorted polygon in place, to be in a 
counter-clockwise direction.
-If the input polygon already has a counter-clockwise direction it won't touch 
the input.
+If the input polygon already has a counter-clockwise direction it will not 
touch the input.
 The return value is @code{1} on successful execution.
 This function is just a wrapper over @code{gal_polygon_is_counterclockwise}, 
and will reverse the order of the vertices when necessary.
 @end deftypefun
@@ -34382,11 +34375,11 @@ astfits kd-tree.fits -h1
 Returns the index of the nearest input point to the query point (@code{point}, 
assumed to be an array with same number of elements as @code{gal_data_t}s in 
@code{coords_raw}).
 The distance between the query point and its nearest neighbor is stored in the 
space that @code{least_dist} points to.
 This search is efficient due to the constant checking for the presence of 
possible best points in other branches.
-If it isn't possible for the other branch to have a better nearest neighbor, 
that branch is not searched.
+If it is not possible for the other branch to have a better nearest neighbor, 
that branch is not searched.
 
 As an example, let's use the k-d tree that was created in the example of 
@code{gal_kdtree_create} (above) and find the nearest row to a given coordinate 
(@code{point}).
-This will be a very common scenario, especially in large and multi-dimensional 
datasets where the k-d tree creation can take long and you don't want to 
re-create the k-d tree every time.
-In the @code{gal_kdtree_create} example output, we also wrote the k-d tree 
root index as a FITS keyword (@code{KDTROOT}), so after loading the two table 
data (input coordinates and k-d tree), we'll read the root from the FITS 
keyword.
+This will be a very common scenario, especially in large and multi-dimensional 
datasets where the k-d tree creation can take long and you do not want to 
re-create the k-d tree every time.
+In the @code{gal_kdtree_create} example output, we also wrote the k-d tree 
root index as a FITS keyword (@code{KDTROOT}), so after loading the two tables 
(input coordinates and k-d tree), we will read the root from the FITS 
keyword.
 This is a very simple example, but the scalability is clear: for example it is 
trivial to parallelize (see @ref{Library demo - multi-threaded operation}).
 
 @example
@@ -34425,7 +34418,7 @@ main (void)
   keysll[0].name="KDTROOT";
   keysll[0].type=GAL_TYPE_SIZE_T;
   gal_fits_key_read(kdtreefile, kdtreehdu, keysll, 0, 0);
-  keysll[0].name=NULL; /* Since we didn't allocate it. */
+  keysll[0].name=NULL; /* Since we did not allocate it. */
   rkey=gal_data_copy_to_new_type(&keysll[0], GAL_TYPE_SIZE_T);
   root=((size_t *)(rkey->array))[0];
 
@@ -34471,7 +34464,7 @@ inverse:    AFTER[ perm[i] ] = BEFORE[ i       ]     i 
= 0 .. N-1
 
 @cindex GNU Scientific Library
 The functions here are a re-implementation of the GNU Scientific Library's
-@code{gsl_permute} function. The reason we didn't use that function was
+@code{gsl_permute} function. The reason we did not use that function was
 that it uses system-specific types (like @code{long} and @code{int}) which
 can have different widths on different systems, hence are not easily
 convertible to Gnuastro's fixed width types (see @ref{Numeric data
@@ -34542,7 +34535,7 @@ The first two (@code{ret} and @code{ret->next}) are 
permutations.
 In other words, the @code{array} elements of both have a type of 
@code{size_t}, see @ref{Permutations}.
 The third node (@code{ret->next->next}) is the calculated distance for that 
match and its array has a type of @code{double}.
 The number of matches will be put in the space pointed by the 
@code{nummatched} argument.
-If there wasn't any match, this function will return @code{NULL}.
+If there was no match, this function will return @code{NULL}.
 
 The two permutations can be applied to the rows of the two inputs: the first 
one (@code{ret}) should be applied to the rows of the table containing 
@code{coord1} and the second one (@code{ret->next}) to the table containing 
@code{coord2}.
 After applying the returned permutations to the inputs, the top 
@code{nummatched} elements of both will match with each other.
@@ -34556,14 +34549,14 @@ The matching aperture is defined by the 
@code{aperture} array that is an input a
 
 If several points of one catalog lie within this aperture of a point in the 
other catalog, the nearest is defined as the match.
 In a 2D situation (where the input lists have two nodes), for the most generic 
case, @code{aperture} must have three elements: the major axis length, axis 
ratio and position angle (see @ref{Defining an ellipse and ellipsoid}).
-If @code{aperture[1]==1}, the aperture will be a circle of radius 
@code{aperture[0]} and the third value won't be used.
+If @code{aperture[1]==1}, the aperture will be a circle of radius 
@code{aperture[0]} and the third value will not be used.
 When the aperture is an ellipse, distances between the points are also 
calculated in the respective elliptical distances (@mymath{r_{el}} in 
@ref{Defining an ellipse and ellipsoid}).
 
 @strong{Output permutations ignore internal sorting}: the output permutations 
will correspond to the initial inputs.
 Therefore, even when @code{inplace!=0} (and this function re-arranges the 
inputs in place), the output permutation will correspond to the original 
(possibly non-sorted) inputs. The reason for this is that you rarely want to 
permute the actual positional columns after the match.
 Usually, you also have other columns (for example the brightness, morphology, 
etc.) and you want to find how they differ between the objects that match.
 Once you have the permutations, they can be applied to those other columns 
(see @ref{Permutations}) and the higher-level processing can continue.
-So if you don't need the coordinate columns for the rest of your analysis, it 
is better to set @code{inplace=1}.
+So if you do not need the coordinate columns for the rest of your analysis, it 
is better to set @code{inplace=1}.
 
 @deftypefun {gal_data_t *} gal_match_sort_based (gal_data_t @code{*coord1}, 
gal_data_t @code{*coord2}, double @code{*aperture}, int @code{sorted_by_first}, 
int @code{inplace}, size_t @code{minmapsize}, int @code{quietmmap}, size_t 
@code{*nummatched})
 
@@ -34690,7 +34683,7 @@ values, for better performance (and less memory usage), 
you can give a
 non-zero value to the @code{inplace} argument. In this case, the sorting
 and removal of blank elements will be done directly on the input
 dataset. However, after this function the original dataset may have changed
-(if it wasn't sorted or had blank values).
+(if it was not sorted or had blank values).
 @end deftypefun
 
 @cindex Quantile
@@ -34779,7 +34772,7 @@ only one element. This function will only look into the 
dataset if the
 @code{GAL_DATA_FLAG_SORT_CH} bit of @code{input->flag} is @code{0}, see
 @ref{Generic data container}.
 
-When the flags don't indicate a previous check @emph{and}
+When the flags do not indicate a previous check @emph{and}
 @code{updateflags} is non-zero, this function will set the flags
 appropriately to avoid having to re-check the dataset in future calls (this
 can be very useful when repeated checks are necessary). When
@@ -34861,8 +34854,8 @@ The number of bins: must be larger than 0.
 
 @item onebinstart
 A desired value to start one bin.
-Note that with this option, the bins won't start and end exactly on the given 
range values, it will be slightly shifted to accommodate this request (enough 
for the bin containing the value to start at it).
-If you don't have any preference on where to start a bin, set this to NAN.
+Note that with this option, the bins will not start and end exactly on the 
given range values; they will be slightly shifted to accommodate this request 
(enough for the bin containing the value to start at it).
+If you do not have any preference on where to start a bin, set this to NAN.
 @end table
 @end deftypefun
 
@@ -34935,7 +34928,7 @@ array[2]: Mean.
 array[3]: Standard deviation.
 @end example
 
-If the @mymath{\sigma}-clipping doesn't converge or all input elements are
+If the @mymath{\sigma}-clipping does not converge or all input elements are
 blank, then this function will return NaN values for all the elements
 above.
 @end deftypefun
@@ -35081,7 +35074,7 @@ values of 1 or 0 (no other pixel will be touched). 
However, in some cases,
 it is necessary to put temporary values in each element during the
 processing of the functions. This temporary value has a special meaning for
 the operation and will be operated on. So if your input datasets have
-values other than 0 and 1 that you don't want these functions to work on,
+values other than 0 and 1 that you do not want these functions to work on,
 be sure they are not equal to this macro's value. Note that this value is
 also different from @code{GAL_BLANK_UINT8}, so your input datasets may also
 contain blank elements.
@@ -35383,7 +35376,7 @@ values}. Once the flag has been set, no other function 
(including this one)
 that needs special behavior for blank pixels will have to parse the dataset
 to see if it has blank values any more.
 
-If you are sure your dataset doesn't have blank values (by the design of
+If you are sure your dataset does not have blank values (by the design of
 your software), to avoid an extra parsing of the dataset and improve
 performance, you can set the two bits manually (see the description of
 @code{flags} in @ref{Generic data container}):
@@ -35459,9 +35452,9 @@ The pixels (position in @code{indexs}, values in 
@code{labels}) that must be ``g
 For a demonstration see Columns 2 and 3 of Figure 10 in 
@url{http://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa [2015]}.
 
 In many aspects, this function is very similar to over-segmentation (watershed 
algorithm, @code{gal_label_watershed}).
-The big difference is that in over-segmentation local maximums (that aren't 
touching any alreadylabeled pixel) get a separate label.
+The big difference is that in over-segmentation local maximums (that are not 
touching any already labeled pixel) get a separate label.
 However, here the final number of labels will not change.
-All pixels that aren't directly touching a labeled pixel just get pushed back 
to the start of the loop, and the loop iterates until its size doesn't change 
any more.
+All pixels that are not directly touching a labeled pixel just get pushed back 
to the start of the loop, and the loop iterates until its size does not change 
any more.
 This is because in a generic scenario some of the indexed pixels might not be 
reachable through other indexed pixels.
 
 The next major difference with over-segmentation is that when there is only 
one label in growth region(s), it is not mandatory for @code{indexs} to be 
sorted by values.
@@ -35470,16 +35463,16 @@ If there are multiple labeled regions in growth 
region(s), then values are impor
 This function looks for positive-valued neighbors of each pixel in 
@code{indexs} and will label a pixel if it touches one.
 Therefore, it is very important that only pixels/labels that are intended for 
growth have positive values in @code{labels} before calling this function.
 Any non-positive (zero or negative) value will be ignored as a label by this 
function.
-Thus, it is recommended that while filling in the @code{indexs} array values, 
you initialize all the pixels that are in @code{indexs} with 
@code{GAL_LABEL_INIT}, and set non-labeled pixels that you don't want to grow 
to @code{0}.
+Thus, it is recommended that while filling in the @code{indexs} array values, 
you initialize all the pixels that are in @code{indexs} with 
@code{GAL_LABEL_INIT}, and set non-labeled pixels that you do not want to grow 
to @code{0}.
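
For example, the recommended initialization might look like the sketch below
(illustrative only, it is not from Gnuastro's source; it assumes @code{labels}
has a 32-bit signed integer type and that @code{indexs} contains @code{size_t}
indexes, with @code{GAL_LABEL_INIT} coming from @file{gnuastro/label.h}):

@example
size_t i;
int32_t *lab=labels->array;
size_t  *ind=indexs->array;

/* Mark every index that should be grown; all other pixels of
 * 'labels' that must not grow are assumed to already be 0.   */
for(i=0; i<indexs->size; ++i)
  lab[ ind[i] ] = GAL_LABEL_INIT;
@end example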
 
 This function will write into both the input datasets.
After this function, some of the non-positive @code{labels} pixels will have a 
new positive label and the number of useful elements in @code{indexs} will have 
decreased.
-The index of those pixels that couldn't be labeled will remain inside 
@code{indexs}.
+The index of those pixels that could not be labeled will remain inside 
@code{indexs}.
 If @code{withrivers} is non-zero, then pixels that are immediately touching 
more than one positive value will be given a @code{GAL_LABEL_RIVER} label.
 
 @cindex GNU C library
 Note that the @code{indexs->array} is not re-allocated to its new size at the 
end@footnote{Note that according to the GNU C Library, even a @code{realloc} to 
a smaller size can also cause a re-write of the whole array, which is not a 
cheap operation.}.
-But since @code{indexs->dsize[0]} and @code{indexs->size} have new values 
after this function is returned, the extra elements just won't be used until 
they are ultimately freed by @code{gal_data_free}.
+But since @code{indexs->dsize[0]} and @code{indexs->size} have new values 
after this function is returned, the extra elements just will not be used until 
they are ultimately freed by @code{gal_data_free}.
 
 Connectivity is a value between @code{1} (fewest number of neighbors) and the 
number of dimensions in the input (most number of neighbors).
 For example in a 2D dataset, a connectivity of @code{1} and @code{2} 
corresponds to 4-connected and 8-connected neighbors.
@@ -35496,7 +35489,7 @@ presented there, we will directly skip onto the 
currently available
 convolution functions in Gnuastro's library.
 
 As of this version, only spatial domain convolution is available in
-Gnuastro's libraries. We haven't had the time to liberate the frequency
+Gnuastro's libraries. We have not had the time to liberate the frequency
 domain function convolution and de-convolution functions that are available
 in the Convolve program@footnote{Hence any help would be greatly
 appreciated.}.
@@ -35563,7 +35556,7 @@ Your contributions would be very welcome and 
appreciated.
 @deffnx Macro GAL_INTERPOLATE_NEIGHBORS_METRIC_INVALID
 The metric used to find distance for nearest neighbor interpolation.
 A radial metric uses the simple Euclidean function to find the distance 
between two pixels.
-A manhattan metric will always be an integer and is like steps (but is also 
much faster to calculate than radial metric because it doesn't need a square 
root calculation).
+A manhattan metric will always be an integer and is like steps (but is also 
much faster to calculate than radial metric because it does not need a square 
root calculation).
 @end deffn
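
In code, the difference between the two metrics is simply the following
(an illustrative snippet, not Gnuastro's internal implementation):

@example
#include <math.h>

/* Distance between pixels (x1,y1) and (x2,y2). */
double
radial(double x1, double y1, double x2, double y2)
@{ return sqrt( (x1-x2)*(x1-x2) + (y1-y2)*(y1-y2) ); @}

double
manhattan(double x1, double y1, double x2, double y2)
@{ return fabs(x1-x2) + fabs(y1-y2); @}   /* No square root. */
@end example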
 
 @deffn Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_MIN
@@ -35572,7 +35565,7 @@ A manhattan metric will always be an integer and is 
like steps (but is also much
 @deffnx Macro GAL_INTERPOLATE_NEIGHBORS_FUNC_INVALID
 @cindex Saturated stars
 The various types of nearest-neighbor interpolation functions for 
@code{gal_interpolate_neighbors}.
-The names are descriptive for the operation they do, so we won't go into much 
more detail here.
+The names are descriptive for the operation they do, so we will not go into 
much more detail here.
 The median operator will be one of the most used, but operators like the 
maximum are good to fill the center of saturated stars.
 @end deffn
 
@@ -35589,7 +35582,7 @@ This function is multi-threaded and will run on 
@code{numthreads} threads (see @
 @code{tl} is Gnuastro's tessellation structure used to define tiles over an 
image and is fully described in @ref{Tile grid}.
 When @code{tl!=NULL}, then it is assumed that the @code{input->array} contains 
one value per tile and interpolation will respect certain tessellation 
properties, for example to not interpolate over channel borders.
 
-If several datasets have the same set of blank values, you don't need to call 
this function multiple times.
+If several datasets have the same set of blank values, you do not need to call 
this function multiple times.
 When @code{aslinkedlist} is non-zero, then @code{input} will be seen as a 
@ref{List of gal_data_t}.
 In this case, the same neighbors will be used for all the datasets in the list.
 Of course, the values for each dataset will be different, so a different value 
will be written in each dataset, but the neighbor checking that is the most CPU 
intensive part will only be done once.
@@ -35597,7 +35590,7 @@ Of course, the values for each dataset will be 
different, so a different value w
 This is a non-parametric and robust function for interpolation.
 The interpolated values are also always within the range of the non-blank 
values and strong outliers do not get created.
 However, this type of interpolation must be used with care when there are 
gradients.
-This is because it is non-parametric and if there aren't enough neighbors, 
step-like features can be created.
+This is because it is non-parametric and if there are not enough neighbors, 
step-like features can be created.
 @end deftypefun
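
As a rough sketch of the linked-list pattern (the prototype below is only an
assumption for illustration; see @file{gnuastro/interpolate.h} for the
authoritative argument list and order):

@example
/* 'a', 'b' and 'c' are gal_data_t datasets that share the same
 * blank elements; chain them so the neighbor search is done
 * only once (HYPOTHETICAL argument list, check the header).  */
a->next=b;  b->next=c;  c->next=NULL;
out=gal_interpolate_neighbors(a, NULL, metric, numneighbors,
                              numthreads, onlyblank,
                              1,       /* aslinkedlist */
                              function);
@end example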
 
 @deffn Macro GAL_INTERPOLATE_1D_INVALID
@@ -35833,7 +35826,7 @@ useful during development, for example see 
@code{COMMIT} keyword in
 the existence of libgit2, and store the value in the
 @code{GAL_CONFIG_HAVE_LIBGIT2}, see @ref{Configuration information} and
 @ref{Optional dependencies}. @file{gnuastro/git.h} includes
-@file{gnuastro/config.h} internally, so you won't have to include both for
+@file{gnuastro/config.h} internally, so you will not have to include both for
 this macro.
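
For example, a program can guard its Git-related calls with this macro (a
minimal sketch; treating the returned string as allocated, and therefore
freeing it, is an assumption here):

@example
#include <stdio.h>
#include <stdlib.h>
#include <gnuastro/git.h>

int
main(void)
@{
#if GAL_CONFIG_HAVE_LIBGIT2 == 1
  char *d=gal_git_describe();
  if(d) @{ printf("Git description: %s\n", d); free(d); @}
#endif
  return EXIT_SUCCESS;
@}
@end example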
 
 @deftypefun {char *} gal_git_describe ( )
@@ -36029,7 +36022,7 @@ All these functions are declared in 
@file{gnuastro/spectra.h}.
 @deffnx Macro GAL_SPECLINES_INVALID_MAX
 Internal values/identifiers for specific spectral lines as is clear from
 their names.
-Note the first and last one, they can be used when parsing the lines 
automatically: both don't correspond to any line, but their integer values 
correspond to the two integers just before and after the first and last line 
identifier.
+Note the first and last one: they can be used when parsing the lines 
automatically; neither corresponds to any line, but their integer values 
correspond to the two integers just before and after the first and last line 
identifier.
 
 @code{GAL_SPECLINES_INVALID} has a value of zero, and allows you to have a 
fixed integer which never corresponds to a line.
 @code{GAL_SPECLINES_INVALID_MAX} is the total number of pre-defined lines, 
plus one.
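
For example, a loop like the one below could use these two identifiers to
parse all the pre-defined lines (a sketch only: the two getter functions are
assumptions based on Gnuastro's naming convention, see
@file{gnuastro/speclines.h} for the actual names):

@example
size_t i;
for(i=GAL_SPECLINES_INVALID+1; i<GAL_SPECLINES_INVALID_MAX; ++i)
  printf("%-20s %f\n", gal_speclines_line_name(i),
         gal_speclines_line_value(i));
@end example
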
@@ -36249,7 +36242,7 @@ In this final section of @ref{Library}, we give some 
example Gnuastro
 programs to demonstrate various features in the library. All these programs
 have been tested and once Gnuastro is installed you can compile and run
 them with Gnuastro's @ref{BuildProgram} program that will take care of
-linking issues. If you don't have any FITS file to experiment on, you can
+linking issues. If you do not have any FITS file to experiment on, you can
 use those that are generated by Gnuastro after @command{make check} in the
 @file{tests/} directory, see @ref{Quick start}.
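
For example, if one of the example programs below is saved as
@file{myprogram.c}, the command below should be enough to compile, link and
run it with BuildProgram:

@example
$ astbuildprog myprogram.c
@end example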
 
@@ -36272,7 +36265,7 @@ and @code{hdu} variable values to specify an existing 
FITS file and/or
 extension/HDU.
 
 This is just intended to demonstrate how to use the @code{array} pointer of
-@code{gal_data_t}. Hence it doesn't do important sanity checks, for example
+@code{gal_data_t}. Hence it does not do important sanity checks, for example
 in real datasets you may also have blank pixels. In such cases, this
 program will return a NaN value (see @ref{Blank pixels}). So for general
 statistical information of a dataset, it is much better to use Gnuastro's
@@ -36387,7 +36380,7 @@ It is intentionally chosen to put more focus on the 
important steps in spinning
 For example, instead of an array of pixels, you can define an array of tiles 
or any other context-specific structures as separate targets.
 The important thing is that each action should have its own unique ID 
(counting from zero, as is done in an array in C).
 You can then follow the process below and use each thread to work on all the 
targets that are assigned to it.
-Recall that spinning-off threads is its self an expensive process and we don't 
want to spin-off one thread for each target (see the description of 
@code{gal_threads_dist_in_threads} in @ref{Gnuastro's thread related functions}.
+Recall that spinning-off threads is itself an expensive process and we do not 
want to spin-off one thread for each target (see the description of 
@code{gal_threads_dist_in_threads} in @ref{Gnuastro's thread related functions}).
 
 There are many (more complicated, real-world) examples of using 
@code{gal_threads_spin_off} in Gnuastro's actual source code, you can see them 
by searching for the @code{gal_threads_spin_off} function from the top source 
(after unpacking the tarball) directory (for example with this command):
 
@@ -36526,7 +36519,7 @@ column selection is discussed in @ref{Selecting table 
columns}. The first
 and second columns can be any type, but this program will convert them to
 @code{int32_t} and @code{float} for its internal usage
 respectively. However, the third column must be double for this program. So
-if it isn't, the program will abort with an error. Having the columns in
+if it is not, the program will abort with an error. Having the columns in
 memory, it will print them out along with their sum (just a simple
 application, you can do what ever you want at this stage). Reading the
 table finishes here.
@@ -36543,7 +36536,7 @@ is then done through a simple call to 
@code{gal_table_write}.
 The operations that are shown in this example program are not necessary all
 the time. For example, in many cases, you know the numerical data type of
 the column before writing your program (see @ref{Numeric data types}), so
-type checking and copying to a specific type won't be necessary.
+type checking and copying to a specific type will not be necessary.
 
 @example
 #include <stdio.h>
@@ -36579,8 +36572,8 @@ main(void)
   columns = gal_table_read(inname, hdu, column_ids,
                            GAL_TABLE_SEARCH_NAME, 1, -1, 1, NULL);
 
-  /* Go over the columns, we'll assume that you don't know their type
-   * a-priori, so we'll check  */
+  /* Go over the columns, we will assume that you do not know their type
+   * a-priori, so we will check  */
   counter=1;
   for(col=columns; col!=NULL; col=col->next)
     switch(counter++)
@@ -36606,7 +36599,7 @@ main(void)
         break;
       @}
 
-  /* As an example application we'll just print them out. In the
+  /* As an example application we will just print them out. In the
    * meantime (just for a simple demonstration), change the first
    * array value to the counter and multiply the second by 10. */
   for(i=0;i<c1->size;++i)
@@ -36627,15 +36620,15 @@ main(void)
   gal_table_write(c1, NULL, NULL, GAL_TABLE_FORMAT_BFITS, outname,
                   "MY-COLUMNS", 0);
 
-  /* The names weren't allocated, so to avoid cleaning-up problems,
-   * we'll set them to NULL. */
+  /* The names were not allocated, so to avoid cleaning-up problems,
+   * we will set them to NULL. */
   c1->name = c2->name = NULL;
 
   /* Clean up and return.  */
   gal_data_free(c1);
   gal_data_free(c2);
   gal_list_data_free(columns);
-  gal_list_str_free(column_ids, 0); /* strings weren't allocated. */
+  gal_list_str_free(column_ids, 0); /* strings were not allocated. */
   return EXIT_SUCCESS;
 @}
 @end example
@@ -36746,8 +36739,8 @@ program, that part is all they have to understand.
 
 Recently it is also becoming common to write scientific software in Python,
 or a combination of it with C or C++. Python is a high level scripting
-language which doesn't need compilation. It is very useful when you want to
-do something on the go and don't want to be halted by the troubles of
+language which does not need compilation. It is very useful when you want to
+do something on the go and do not want to be halted by the troubles of
 compiling, linking, memory checking, etc. When the datasets are small and
 the job is temporary, this ability of Python is great and is highly
 encouraged. A very good example might be plotting, in which Python is
@@ -36755,7 +36748,7 @@ undoubtedly one of the best.
 
 But as the data sets increase in size and the processing becomes more
 complicated, the speed of Python scripts significantly decrease. So when
-the program doesn't change too often and is widely used in a large
+the program does not change too often and is widely used in a large
 community, mostly on large data sets (like astronomical images), using
 Python will waste a lot of valuable research-hours. It is possible to wrap
 C or C++ functions with Python to fix the speed issue. But this creates
@@ -36932,7 +36925,7 @@ easier to understand. This is most important for 
low-level functions (which
 usually define a lot of variables). Low-level functions do most of the
 processing, they will also be the most interesting part of a program for an
 inquiring astronomer. This convention is less important for higher level
-functions that don't define too many variables and whose only purpose is to
+functions that do not define too many variables and whose only purpose is to
 run the lower-level functions in a specific order and with checks.
 
 @cindex Optimization flag
@@ -36941,7 +36934,7 @@ run the lower-level functions in a specific order and 
with checks.
 In general you can be very liberal in breaking up the functions into
 smaller parts, the GNU Compiler Collection (GCC) will automatically compile
 the functions as inline functions when the optimizations are turned on. So
-you don't have to worry about decreasing the speed. By default Gnuastro
+you do not have to worry about decreasing the speed. By default Gnuastro
 will compile with the @option{-O3} optimization flag.
 
 @cindex Buffers (Emacs)
@@ -36997,7 +36990,7 @@ groups:
 @code{#include <config.h>}: This must be the first code line (not commented
 or blank) in each source file @emph{within Gnuastro}. It sets macros that
 the GNU Portability Library (Gnulib) will use for a unified environment
-(GNU C Library), even when the user is building on a system that doesn't
+(GNU C Library), even when the user is building on a system that does not
 use the GNU C library.
 
 @item
@@ -37048,7 +37041,7 @@ user) source files with the same line
 @end example
 Note that the GSL convention for header file names is
 @file{gsl_specialname.h}, so your include directive for a GSL header must
-be something like @code{#include <gsl/gsl_specialname.h>}. Gnuastro doesn't
+be something like @code{#include <gsl/gsl_specialname.h>}. Gnuastro does not
 follow this GSL guideline because of the repeated @code{gsl} in the include
 directive. It can be confusing and cause bugs for beginners. All Gnuastro
 (and GSL) headers must be located within a unique directory and will not be
@@ -37220,7 +37213,7 @@ these files are enough. Larger programs will need more 
files and developers
 are encouraged to define any number of new files. It is just important that
 the following list of files exist and do what is described here. When
 creating other source files, please choose filenames that are a complete
-single word: don't abbreviate (abbreviations are cryptic). For a minimal
+single word: do not abbreviate (abbreviations are cryptic). For a minimal
 program containing all these files, see @ref{The TEMPLATE program}.
 
 @vtable @file
@@ -37285,7 +37278,7 @@ The character that is defined here is the option's 
short option name.
 The list of available alphabet characters can be seen in the comments.
 Recall that some common options also take some characters, for those, see 
@file{lib/gnuastro-internal/options.h}.
 
-The second group of options are those that don't have a short option 
alternative.
+The second group of options are those that do not have a short option 
alternative.
 Only the first in this group needs a value (@code{1000}), the rest will be 
given a value by C's @code{enum} definition, so the actual value is irrelevant 
and must never be used, always use the name.
 
 @item ui.c
@@ -37361,7 +37354,7 @@ operations.
 To make it easier to learn/apply the internal program infra-structure
 discussed in @ref{Mandatory source code files}, in the @ref{Version
controlled source}, Gnuastro ships with a template program. This template
-program is not available in the Gnuastro tarball so it doesn't confuse
+program is not available in the Gnuastro tarball so it does not confuse
 people using the tarball. The @file{bin/TEMPLATE} directory in Gnuastro's
 Git repository contains the bare-minimum files necessary to define a new
 program and all the basic/necessary files/functions are pre-defined
@@ -37387,7 +37380,7 @@ copyright notice at their top. Open all the files and 
correct these
 notices: 1) The first line contains a single-line description of the
 program. 2) In the second line only the name or your program needs to be
 fixed and 3) Add your name and email as a ``Contributing author''. As your
-program grows, you will need to add new files, don't forget to add this
+program grows, you will need to add new files, do not forget to add this
 notice in those new files too, just put your name and email under
 ``Original author'' and correct the copyright years.
 
@@ -37403,7 +37396,7 @@ new program. Then correct the names of the copied 
program to your new
 program name. There are multiple places where this has to be done, so be
 patient and go down to the bottom of the file. Ultimately add
 @file{bin/myprog/Makefile} to @code{AC_CONFIG_FILES}, only here the
-ordering depends on the length of the name (it isn't alphabetical).
+ordering depends on the length of the name (it is not alphabetical).
 
 @item
 Open @file{Makefile.am} in the top Gnuastro source. Similar to the previous
@@ -37417,7 +37410,7 @@ Open @file{doc/Makefile.am} and similar to 
@file{Makefile.am} (above), add
 the proper entries for the man-page of your program to be created (here,
 the variable that keeps all the man-pages to be created is
 @code{dist_man_MANS}). Then scroll down and add a rule to build the
-man-page similar to the other existing rules (in alphabetical order). Don't
+man-page similar to the other existing rules (in alphabetical order). Do not
 forget to add a short one-line description here, it will be displayed on
 top of the man-page.
 
@@ -37455,7 +37448,7 @@ $ make
 
 @item
 You are done! You can now start customizing your new program to do your
-special processing. When its complete, just don't forget to add checks
+special processing. When it is complete, just do not forget to add checks
 also, so it can be tested at least once on a user's system with
 @command{make check}, see @ref{Test scripts}. Finally, if you would like to
 share it with all Gnuastro users, inform us so we merge it into Gnuastro's
@@ -37499,7 +37492,7 @@ These first few paragraphs explain the purposes of the 
program or library
 and are fundamental to Gnuastro. Before actually starting to code, explain
 your idea's purpose thoroughly in the start of the respective/new section
 you wish to work on. While actually writing its purpose for a new reader,
-you will probably get some valuable and interesting ideas that you hadn't
+you will probably get some valuable and interesting ideas that you had not
 thought of before. This has occurred several times during the creation of
 Gnuastro.
 
@@ -37515,15 +37508,15 @@ subsections prior to the `Invoking Programname' 
subsection. If you are
 writing a new program or your addition to an existing program involves a
 new concept, also include such subsections and explain the concepts so a
 person completely unfamiliar with the concepts can get a general initial
-understanding. You don't have to go deep into the details, just enough to
+understanding. You do not have to go deep into the details, just enough to
 get an interested person (with absolutely no background) started with some
 good pointers/links to where they can continue studying if they are more
-interested. If you feel you can't do that, then you have probably not
-understood the concept yourself. If you feel you don't have the time, then
+interested. If you feel you cannot do that, then you have probably not
+understood the concept yourself. If you feel you do not have the time, then
 think about yourself as the reader in one year: you will forget almost all
 the details, so now that you have done all the theoretical preparations,
 add a few more hours and document it. Therefore in one year, when you find
-a bug or want to add a new feature, you don't have to prepare as much. Have
+a bug or want to add a new feature, you do not have to prepare as much. Have
 in mind that your only limitation in length is the fatigue of the reader
 after reading a long text, nothing else. So as long as you keep it
 relevant/interesting for the reader, there is no page number limit/cost.
@@ -37538,13 +37531,13 @@ After you have finished adding your initial intended 
plan to the book, then
 start coding your change or new program within the Gnuastro source
 files. While you are coding, you will notice that somethings should be
 different from what you wrote in the book (your initial plan). So correct
-them as you are actually coding, but don't worry too much about missing a
+them as you are actually coding, but do not worry too much about missing a
 few things (see the next step).
 
 @item
 After your work has been fully implemented, read the section documentation
-from the start and see if you didn't miss any change in the coding and to
-see if the context is fairly continuous for a first time reader (who hasn't
+from the start and see if you did not miss any change in the coding and to
+see if the context is fairly continuous for a first time reader (who has not
seen the book or known Gnuastro before you made your change).
 
 @item
@@ -37589,7 +37582,7 @@ your work).
 This script is designed to be run each time you make a change and want to
 test your work (with some possible input and output). The script itself is
 heavily commented and thoroughly describes the best way to use it, so we
-won't repeat it here.
+will not repeat it here.
 
 As a short summary: you specify the build directory, an output directory
 (for the built program to be run in, and also contains the inputs), the
@@ -37640,7 +37633,7 @@ tests are placed in the @file{tests/} directory. The
 will copy all the configuration files from the various directories to a
 @file{tests/.gnuastro} directory (which it will make) so the various tests
 can set the default values. This script will also make sure the programs
-don't go searching for user and system wide configuration files to avoid
+do not go searching for user and system wide configuration files to avoid
the mixing of values with different Gnuastro versions on the system.
 
 For each program, the tests are placed inside directories with the
@@ -37663,20 +37656,20 @@ as the ``source tree'')@footnote{The 
@file{developer-build} script also
 uses this feature to keep the source and build directories separate (see
 @ref{Separate build and source directories}).}. This distinction is made by
 GNU Autoconf and Automake (which configure, build and install Gnuastro) so
-that you can install the program even if you don't have write access to the
+that you can install the program even if you do not have write access to the
 directory keeping the source files. See the ``Parallel build trees (a.k.a
 VPATH builds)'' in the Automake manual for a nice explanation.
 
 Because of this, any necessary inputs that are distributed in the
 tarball@footnote{In many cases, the inputs of a test are outputs of
-previous tests, this doesn't apply to this class of inputs. Because all
+previous tests, this does not apply to this class of inputs. Because all
 outputs of previous tests are in the ``build tree''.}, for example the
 catalogs necessary for checks in MakeProfiles and Crop, must be identified
 with the @command{$topsrc} prefix instead of @command{../} (for the top
 source directory that is unpacked). This @command{$topsrc} variable points
 to the source tree where the script can find the source data (it is defined
 in @file{tests/Makefile.am}). The executables and other test products were
-built in the build tree (where they are being run), so they don't need to
+built in the build tree (where they are being run), so they do not need to
 be prefixed with that variable. This is also true for images or files that
 were produced by other tests.
 
@@ -37762,7 +37755,7 @@ $ myprogram foo
 
 @noindent
 However, nothing will happen if you type @samp{b} and press @key{[TAB]} only 
@i{once}.
-This is because of the ambiguity: there isn't enough information to figure out 
which suggestion you want: @code{bar} or @code{bAr}?
+This is because of the ambiguity: there is not enough information to figure 
out which suggestion you want: @code{bar} or @code{bAr}?
 So, if you press @key{[TAB]} twice, it will print out all the options that 
start with @samp{b}:
 
 @example
@@ -37860,7 +37853,7 @@ Since the @option{--help} option will list all the 
options available in Gnuastro
 Note that the actual TAB completion in Gnuastro is a little more complex than 
this and fully described in @ref{Implementing TAB completion in Gnuastro}.
 But this is a good exercise to get started.
 
-We'll use @command{asttable} as the demo, and the goal is to suggest all 
options that this program has to offer.
+We will use @command{asttable} as the demo, and the goal is to suggest all 
options that this program has to offer.
 You can print all of them (with a lot of extra information) with this command:
 
 @example
@@ -37903,7 +37896,7 @@ So it would be nice if we could help them and print 
only a list of FITS files si
 But there's a catch.
 When splitting the user's input line, Bash will consider @samp{=} as a 
separate word.
 To avoid getting caught in changing the @command{IFS} or @command{WORDBREAKS} 
values, we will simply check for @samp{=} and act accordingly.
-That is, if the previous word is a @samp{=}, we'll ignore it and take the word 
before that as the previous word.
+That is, if the previous word is a @samp{=}, we will ignore it and take the 
word before that as the previous word.
 Also, if the current word is a @samp{=}, ignore it completely.
 Taking all of that into consideration, the code below might serve well:
 
@@ -37953,7 +37946,7 @@ That is important because the long format of an option, 
its value is more clear
 
 You have now written a very basic and working TAB completion script that can 
easily be generalized to include more options (and be good for a single/simple 
program).
 However, Gnuastro has many programs that share many similar things and the 
options are not independent.
-Also, complex situations do often come up: for example some people use a 
@file{.fit} suffix for FITS files and others don't even use a suffix at all!
+Also, complex situations do often come up: for example some people use a 
@file{.fit} suffix for FITS files and others do not even use a suffix at all!
 So in practice, things need to get a little more complicated, but the core 
concept is what you learnt in this section.
 We just modularize the process (breaking logically independent steps into 
separate functions to use in different situations).
 In @ref{Implementing TAB completion in Gnuastro}, we will review the 
generalities of Gnuastro's implementation of Bash TAB completion.
@@ -38093,7 +38086,7 @@ list. They will only reassign the issues in this list 
to the other two
 trackers if they are valid@footnote{Some of the issues registered here
 might be due to a mistake on the user's side, not an actual bug in the
 program.}. Ideally (when the developers have time to put on Gnuastro,
-please don't forget that Gnuastro is a volunteer effort), there should
+please do not forget that Gnuastro is a volunteer effort), there should
 be no open items in this tracker.
 
 @item Bugs
@@ -38230,7 +38223,7 @@ tutorial in @ref{Forking tutorial}.
 
 Recall that before starting the work on your idea, be sure to checkout the
 bugs and tasks trackers in @ref{Gnuastro project webpage} and announce your
-work there so you don't end up spending time on something others have
+work there so you do not end up spending time on something others have
 already worked on, and also to attract similarly interested developers to
 help you.
 
@@ -38292,7 +38285,7 @@ fetch, or pull from the official repository (see
 
 In the Gnuastro commit messages, we strive to follow these standards. Note
 that in the early phases of Gnuastro's development, we are experimenting
-and so if you notice earlier commits don't satisfy some of the guidelines
+and so if you notice earlier commits do not satisfy some of the guidelines
 below, it is because they predate that guideline.
 
 @table @asis
@@ -38369,35 +38362,35 @@ commit message (from @ref{Gnuastro project webpage}) 
for interested people
 to be able to followup the discussion that took place there. If the commit
 fixes a bug or finishes a task, the recommended way is to add a line after
 the body with `@code{This fixes bug #ID.}', or `@code{This finishes task
-#ID.}'. Don't assume that the reader has internet access to check the bug's
+#ID.}'. Do not assume that the reader has internet access to check the bug's
 full description when reading the commit message, so give a short
 introduction too.
 @end itemize
 @end table
 
 
-Below you can see a good commit message example (don't forget to read it, it 
has tips for you).
+Below you can see a good commit message example (do not forget to read it, it 
has tips for you).
 After reading this, please run @command{git log} on the @code{master} branch 
and read some of the recent commits for more realistic examples.
 
 @example
 The first line should be the title of the commit
 
-An empty line is necessary after the title so Git doesn't confuse
+An empty line is necessary after the title so Git does not confuse
 lines. This top paragraph of the body of the commit usually describes
 the reason this commit was done. Therefore it usually starts with
 "Until now ...". It is very useful to explain the reason behind the
-change, things that aren't immediately obvious when looking into the
-code. You don't need to list the names of the files, or what lines
-have been changed, don't forget that the code changes are fully
+change, things that are not immediately obvious when looking into the
+code. You do not need to list the names of the files, or what lines
+have been changed, do not forget that the code changes are fully
 stored within Git :-).
 
 In the second paragraph (or any later paragraph!) of the body, we
 describe the solution and why (not "how"!) the particular solution
 was implemented. So we usually start this part of the commit body
-with "With this commit ...". Again, you don't need to go into the
+with "With this commit ...". Again, you do not need to go into the
 details that can be seen from the 'git diff' command (like the
 file names that have been changed or the code that has been
-implemented). The important thing here is the things that aren't
+implemented). The important thing here is the things that are not
 immediately obvious from looking into the code.
 
 You can continue the explanation and it is encouraged to be very
@@ -38435,7 +38428,7 @@ This is more suited for people who commonly contribute 
to the code (see @ref{For
 @end enumerate
 
 In both cases, your commits (with your name and information) will be preserved 
and your contributions will thus be fully recorded in the history of Gnuastro 
and in the @file{AUTHORS} file and this book (second page in the PDF format) 
once they have been incorporated into the official repository.
-Needless to say that in such cases, be sure to follow the bug or task trackers 
(or subscribe to the @command{gnuastro-devel} mailing list) and contact us 
before hand so you don't do something that someone else is already working on.
+Needless to say that in such cases, be sure to follow the bug or task trackers 
(or subscribe to the @command{gnuastro-devel} mailing list) and contact us 
beforehand so you do not do something that someone else is already working on.
 In that case, you can get in touch with them and help the job go on faster, 
see @ref{Gnuastro project webpage}.
 This workflow is currently mostly borrowed from the general recommendations of 
Git@footnote{@url{https://github.com/git/git/blob/master/Documentation/SubmittingPatches}}
 and GitHub.
 But since Gnuastro is currently under heavy development, these might change 
and evolve to better suit our needs.
@@ -38452,7 +38445,7 @@ This is a tutorial on the second suggested method 
(commonly known as forking) th
 To start, please create an empty repository on your hosting service web page 
(we recommend Codeberg since it is fully free software@footnote{See 
@url{https://www.gnu.org/software/repo-criteria-evaluation.html} for an 
evaluation of the major existing repositories.
 Gnuastro uses GNU Savannah (which also has the highest ranking in the 
evaluation), but for starters, Codeberg may be easier (it is fully free 
software).}).
 If this is your first hosted repository on the web page, you also have to 
upload your public SSH key@footnote{For example see this explanation provided 
by Codeberg: @url{https://docs.codeberg.org/security/ssh-key}.} for the 
@command{git push} command below to work.
-Here we'll assume you use the name @file{janedoe} to refer to yourself 
everywhere and that you choose @file{gnuastro} as the name of your Gnuastro 
fork.
+Here we will assume you use the name @file{janedoe} to refer to yourself 
everywhere and that you choose @file{gnuastro} as the name of your Gnuastro 
fork.
 Any online hosting service will give you an address (similar to the 
`@file{git@@codeberg.org:...}' below) of the empty repository you have created 
using their web page, use that address in the third line below.
 
 @example
@@ -38463,15 +38456,15 @@ $ git push janedoe master
 @end example
 
 The full Gnuastro history is now pushed onto your hosting service and the 
@file{janedoe} remote is now also following your @file{master} branch.
-If you run @command{git remote show REMOTENAME} for the @file{origin} and 
@file{janedoe} remotes, you will see their difference: the first has pull 
access and the second doesn't.
+If you run @command{git remote show REMOTENAME} for the @file{origin} and 
@file{janedoe} remotes, you will see their difference: the first has pull 
access and the second does not.
 This nicely summarizes the main idea behind this workflow: you push to your 
remote repository, we pull from it and merge it into @file{master}, then you 
finalize it by pulling from the main repository.
 
 To test (compile) your changes during your work, you will need to bootstrap 
the version controlled source, see @ref{Bootstrapping} for a full description.
-The cloning process above is only necessary for your first time setup, you 
don't need to repeat it.
+The cloning process above is only necessary for your first time setup, you do 
not need to repeat it.
 However, please repeat the steps below for each independent issue you intend 
to work on.
 
Let's assume you have found a bug in @file{lib/statistics.c}'s 
median-calculating function.
-Before actually doing anything, please announce it (see @ref{Report a bug}) so 
everyone knows you are working on it or to find out others aren't already 
working on it.
+Before actually doing anything, please announce it (see @ref{Report a bug}) so 
everyone knows you are working on it or to find out others are not already 
working on it.
 With the commands below, you make a branch, checkout to it, correct the bug, 
check if it is indeed fixed, add it to the staging area, commit it to the new 
branch and push it to your hosting service.
 But before all of them, make sure that you are on the @file{master} branch and 
that your @file{master} branch is up to date with the main Gnuastro repository 
with the first two commands.
 
@@ -38643,7 +38636,7 @@ This script will do sub-pixel re-positioning to make 
sure the star is centered a
 
 @item astscript-psf-unite
 (see @ref{Invoking astscript-psf-unite}) Unite the various components of a PSF 
into one.
-Because of saturation and non-linearity, to get a good estimate of the 
extended PSF, its necessary to construct various parts from different magnitude 
ranges.
+Because of saturation and non-linearity, to get a good estimate of the 
extended PSF, it is necessary to construct various parts from different 
magnitude ranges.
 
 @item astscript-psf-subtract
 (see @ref{Invoking astscript-psf-subtract}) Given the model of a PSF and the 
central coordinates of a star in the image, do sub-pixel re-positioning of the 
PSF, scale it to the star and subtract it from the image.
@@ -38699,8 +38692,8 @@ $ ds9
 
 @cartouche
 @noindent
-@strong{Install without root permissions:} If you don't have root permissions, 
you can simply replace @file{/usr/local/bin} in the command above with 
@file{$HOME/.local/bin}.
-If this directory isn't in your @code{PATH}, you can simply add it with the 
command below (in your startup file, e.g., @file{~/.bashrc}).
+@strong{Install without root permissions:} If you do not have root 
permissions, you can simply replace @file{/usr/local/bin} in the command above 
with @file{$HOME/.local/bin}.
+If this directory is not in your @code{PATH}, you can simply add it with the 
command below (in your startup file, e.g., @file{~/.bashrc}).
 For more on @code{PATH} and the startup files, see @ref{Installation 
directory}.
 
 @example
@@ -38771,8 +38764,8 @@ $ topcat table.fits
 
 @cartouche
 @noindent
-@strong{Install without root permissions:} If you don't have root permissions, 
you can simply replace @file{/usr/local/bin} in the command above with 
@file{$HOME/.local/bin}.
-If this directory isn't in your @code{PATH}, you can simply add it with the 
command below (in your startup file, e.g., @file{~/.bashrc}).
+@strong{Install without root permissions:} If you do not have root 
permissions, you can simply replace @file{/usr/local/bin} in the command above 
with @file{$HOME/.local/bin}.
+If this directory is not in your @code{PATH}, you can simply add it with the 
command below (in your startup file, e.g., @file{~/.bashrc}).
 For more on @code{PATH} and the startup files, see @ref{Installation 
directory}.
 
 @example
@@ -38828,7 +38821,7 @@ XWDRIV 1 /XWINDOW
 XWDRIV 2 /XSERVE
 @end example
 @noindent
-Don't choose GIF or VGIF, there is a problem in their codes.
+Do not choose GIF or VGIF, there is a problem in their codes.
 
 Open the @file{PGPLOT/sys_linux/g77_gcc.conf} file:
 @example
@@ -38838,7 +38831,7 @@ $ gedit PGPLOT/sys_linux/g77_gcc.conf
 change the line saying: @code{FCOMPL="g77"} to
 @code{FCOMPL="gfortran"}, and save it. This is a very important step
 during the compilation of the code if you are in GNU/Linux. You now
-have to create a folder in @file{/usr/local}, don't forget to replace
+have to create a folder in @file{/usr/local}, do not forget to replace
 @file{PGPLOT} with your unpacked address:
 @example
 $ su
@@ -38946,7 +38939,7 @@ the name is the name of the header which declares or 
defines them, so to
 use the @code{gal_array_fset_const} function, you have to @code{#include
 <gnuastro/array.h>}. See @ref{Gnuastro library} for more. The
 @code{pthread_barrier} constructs are our implementation and are only
-available on systems that don't have them, see @ref{Implementation of
+available on systems that do not have them, see @ref{Implementation of
 pthread_barrier}.
 @printindex fn
 @unnumbered Index


