From MAILER-DAEMON Tue Aug 06 13:04:24 2024
Received: from list by lists.gnu.org with archive (Exim 4.90_1)
    id 1sbNbj-0002Mg-Ml
    for mharc-gnuastro-commits@gnu.org; Tue, 06 Aug 2024 13:04:24 -0400
Received: from eggs.gnu.org ([2001:470:142:3::10])
    by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)
    (Exim 4.90_1) (envelope-from )
    id 1sbNbh-0002MM-Gr
    for gnuastro-commits@gnu.org; Tue, 06 Aug 2024 13:04:21 -0400
Received: from vcs2.savannah.gnu.org ([209.51.188.168])
    by eggs.gnu.org with esmtp (Exim 4.90_1)
    (envelope-from ) id 1sbNbh-0002nA-8c
    for gnuastro-commits@gnu.org; Tue, 06 Aug 2024 13:04:21 -0400
Received: by vcs2.savannah.gnu.org (Postfix, from userid 133179)
    id 21507C1CB14; Tue, 6 Aug 2024 13:04:21 -0400 (EDT)
To: gnuastro-commits@gnu.org
Subject: [gnuastro-commits] master 29f0cfd8: Book: corrected example of
    sigclip-* operators
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Mail-Followup-To: gnuastro-commits@gnu.org,
    Mohammad Akhlaghi
In-Reply-To: <172296386036.8512.3698195013289364213@vcs2.savannah.gnu.org>
References: <172296386036.8512.3698195013289364213@vcs2.savannah.gnu.org>
X-Git-Repo: gnuastro
X-Git-Refname: refs/heads/master
X-Git-Reftype: branch
X-Git-Rev: 29f0cfd8f518d10e9fb42e30e14b0fe99ffab631
Auto-Submitted: auto-generated
Message-Id: <20240806170421.21507C1CB14@vcs2.savannah.gnu.org>
Date: Tue, 6 Aug 2024 13:04:20 -0400 (EDT)
From: Mohammad Akhlaghi
X-BeenThere: gnuastro-commits@gnu.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Commits to the Gnuastro Git repository
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Tue, 06 Aug 2024 17:04:21 -0000
branch: master
commit 29f0cfd8f518d10e9fb42e30e14b0fe99ffab631
Author: Mohammad Akhlaghi
Commit: Mohammad Akhlaghi
Book: corrected example of sigclip-* operators

Until now, the last example command of the 'sigclip-*' operators (under the
"Stacking operators" section of the book) contained the old
'madclip-fill-median' operator which was changed to the more
generic/powerful 'madclip-maskfilled' operator. This would cause a crash
when a reader tried the command with Gnuastro 0.23.

With this commit, the command has been corrected to use the operators that
are available at the moment.

This problem was reported by Alejandro Lumbreras-Calle.

 doc/announce-acknowledge.txt | 1 +
 doc/gnuastro.texi            | 5 +++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/doc/announce-acknowledge.txt b/doc/announce-acknowledge.txt
index 3423ea44..23664046 100644
--- a/doc/announce-acknowledge.txt
+++ b/doc/announce-acknowledge.txt
@@ -1,5 +1,6 @@
 Alphabetically ordered list to acknowledge in the next release.
+Alejandro Lumbreras-Calle
 Antonio Diaz Diaz
 Hok Kan (Ronald) Tsang
 Phil Wyett
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 48aa914b..ae304e74 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -22250,8 +22250,9 @@ $ astarithmetic a.fits b.fits c.fits -g1 \
 In case you just want the numbers image, you can use @option{sigclip-median} (which is always calculated as part of the clipping process: no extra overhead), and @code{free} the top operand (without the @code{swap}: the median-stack image), leaving only the numbers image:

 @example
-$ astarithmetic a.fits b.fits c.fits 3 5 0.01 \
-                madclip-fill-median -g1 free \
+$ astarithmetic a.fits b.fits c.fits -g1 \
+                3 5 0.01 madclip-maskfilled \
+                3 4 0.1 sigclip-median free \
                 --output=single-hdu-only-numbers.fits
 @end example
From MAILER-DAEMON Tue Aug 06 15:15:34 2024
Received: from list by lists.gnu.org with archive (Exim 4.90_1)
    id 1sbPee-0005FJ-Q4
    for mharc-gnuastro-commits@gnu.org; Tue, 06 Aug 2024 15:15:32 -0400
Received: from eggs.gnu.org ([2001:470:142:3::10])
    by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)
    (Exim 4.90_1) (envelope-from )
    id 1sbPeZ-0005Et-23
    for gnuastro-commits@gnu.org; Tue, 06 Aug 2024 15:15:27 -0400
Received: from vcs2.savannah.gnu.org ([209.51.188.168])
    by eggs.gnu.org with esmtp (Exim 4.90_1)
    (envelope-from ) id 1sbPeY-0004bw-KN
    for gnuastro-commits@gnu.org; Tue, 06 Aug 2024 15:15:26 -0400
Received: by vcs2.savannah.gnu.org (Postfix, from userid 133179)
    id 82D5FC1CB14; Tue, 6 Aug 2024 15:15:24 -0400 (EDT)
To: gnuastro-commits@gnu.org
Subject: [gnuastro-commits] master b52cf301: Library (units.h): derivation
    of jansky to wavelen. flux den. added
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Mail-Followup-To: gnuastro-commits@gnu.org,
    Mohammad Akhlaghi
In-Reply-To: <172297172419.22544.5198593331929228067@vcs2.savannah.gnu.org>
References: <172297172419.22544.5198593331929228067@vcs2.savannah.gnu.org>
X-Git-Repo: gnuastro
X-Git-Refname: refs/heads/master
X-Git-Reftype: branch
X-Git-Rev: b52cf3014fde4b51e728bada980538709a1c941d
Auto-Submitted: auto-generated
Message-Id: <20240806191524.82D5FC1CB14@vcs2.savannah.gnu.org>
Date: Tue, 6 Aug 2024 15:15:24 -0400 (EDT)
From: Mohammad Akhlaghi
X-BeenThere: gnuastro-commits@gnu.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Commits to the Gnuastro Git repository
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Tue, 06 Aug 2024 19:15:29 -0000
branch: master
commit b52cf3014fde4b51e728bada980538709a1c941d
Author: Mohammad Akhlaghi
Commit: Mohammad Akhlaghi
Library (units.h): derivation of jansky to wavelen. flux den. added
Until now we had just used the raw equation to convert Janskys to
wavelength flux density. This was vague and not easy to understand.

With this commit, a comment has been added on top to derive the equation
that is used. We should later bring this into the book.

 lib/units.c | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/lib/units.c b/lib/units.c
index 3776f24f..bd73d2b0 100644
--- a/lib/units.c
+++ b/lib/units.c
@@ -514,6 +514,33 @@ gal_units_jy_to_mag(double jy)
+/* Converting Janskys ($f(\nu)$ or flux in units of frequency) to
+ $f(\lambda)$ (or wavelength flux density). In the equations below, we'll
+ write $\nu$ as 'n' and '\lambda' as 'l'. The speed of light is written
+ as 'c'.
+
+ Basics:
+
+ 1. c = 2.99792458e+08 m/s = 2.99792458e+18 A*Hz
+
+ 2. One Jansky is defined as 10^{-23} erg/s/cm^2/Hz; see
+ https://en.wikipedia.org/wiki/Jansky
+
+ 3. The speed of light connects the wavelength and frequency of photons:
+ c=l*n. So n=c/l and taking the derivative: dn=(c/(l*l))*dl or
+ dn/dl=c/(l*l). Inserting physical values and units:
+
+ dn/dl = (2.99792458e+18 A*Hz)/(l*l A^2) = 2.99792458e+18/(l^2) Hz/A.
+
+ 4. To convert a function of A into a function of B, where A and B are
+ also related to each other, we have the following equation: f(A) =
+ dB/dA * f(B).
+
+ 5. Using 2 (definition of Jansky as f(n)) and 3 in 4, we get:
+
+ f(l) = 2.99792458e+18/(l^2) Hz/A * 10^{-23} erg/s/cm^2/Hz
+ = 2.99792458e-5/(l^2) erg/s/cm^2/A
+*/
double
gal_units_jy_to_wavelength_flux_density(double jy, double angstrom)
{
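The derivation in the comment above can be checked numerically with a small self-contained sketch. The function names below mirror the library's, but this is an illustrative re-derivation, not the library's actual source; only the constant `2.99792458e-5` comes from the derivation above.

```c
/* Illustrative sketch of the conversion derived in the comment above
   (not Gnuastro's actual source).  With the wavelength 'l' given in
   Angstroms:  f(l) = jy * 2.99792458e-5 / l^2   [erg/s/cm^2/A]. */
double
jy_to_wavelength_flux_density(double jy, double angstrom)
{
  return jy * 2.99792458e-5 / (angstrom * angstrom);
}

/* The inverse: wavelength flux density (erg/s/cm^2/A) to Janskys. */
double
wavelength_flux_density_to_jy(double wfd, double angstrom)
{
  return wfd * (angstrom * angstrom) / 2.99792458e-5;
}
```

For the Vega example in the book (3500 Jy at 5556 Angstroms), the forward conversion gives roughly 3.4e-9 erg/s/cm^2/A, and the two functions invert each other exactly.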
From MAILER-DAEMON Wed Aug 07 15:03:04 2024
Received: from list by lists.gnu.org with archive (Exim 4.90_1)
    id 1sblw7-0001C5-9K
    for mharc-gnuastro-commits@gnu.org; Wed, 07 Aug 2024 15:03:04 -0400
Received: from eggs.gnu.org ([2001:470:142:3::10])
    by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)
    (Exim 4.90_1) (envelope-from )
    id 1sblvz-0001BC-Tu
    for gnuastro-commits@gnu.org; Wed, 07 Aug 2024 15:02:56 -0400
Received: from vcs2.savannah.gnu.org ([209.51.188.168])
    by eggs.gnu.org with esmtp (Exim 4.90_1)
    (envelope-from ) id 1sblvz-0002UT-M4
    for gnuastro-commits@gnu.org; Wed, 07 Aug 2024 15:02:55 -0400
Received: by vcs2.savannah.gnu.org (Postfix, from userid 133179)
    id 22C1AC1CB10; Wed, 7 Aug 2024 15:02:53 -0400 (EDT)
To: gnuastro-commits@gnu.org
Subject: [gnuastro-commits] master d0ccbb06: Book: Rewrite of the
    Brightness flux and magnitude section
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Mail-Followup-To: gnuastro-commits@gnu.org,
    Mohammad Akhlaghi
In-Reply-To: <172305737300.30002.11819306383278497232@vcs2.savannah.gnu.org>
References: <172305737300.30002.11819306383278497232@vcs2.savannah.gnu.org>
X-Git-Repo: gnuastro
X-Git-Refname: refs/heads/master
X-Git-Reftype: branch
X-Git-Rev: d0ccbb06e2843fbabf72d0c30d4b67bea83ca092
Auto-Submitted: auto-generated
Message-Id: <20240807190255.22C1AC1CB10@vcs2.savannah.gnu.org>
Date: Wed, 7 Aug 2024 15:02:53 -0400 (EDT)
From: Mohammad Akhlaghi
X-BeenThere: gnuastro-commits@gnu.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Commits to the Gnuastro Git repository
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
X-List-Received-Date: Wed, 07 Aug 2024 19:02:58 -0000
branch: master
commit d0ccbb06e2843fbabf72d0c30d4b67bea83ca092
Author: Mohammad Akhlaghi
Commit: Mohammad Akhlaghi
Book: Rewrite of the Brightness flux and magnitude section
Until now, this important section hadn't defined the important concept of
spectral flux density and incorrectly used "Brightness" (erg/s) in its
definition of Magnitudes. This has no effect on any of the programs, just
in terms of proper definitions. Furthermore, the derivation of the
conversion between Janskys and wavelength flux density was only in the
comments of the code (and thus not visible to a regular user).

With this commit, the section was fully reviewed and rewritten to first
describe the physical concepts (brightness, flux, luminosity and spectral
flux density), then go into the astronomy-specific terminology of
magnitudes, zero point and surface brightness. Also, the derivation of the
conversion between Janskys and wavelength flux density was added to the
operator and the inverse operator was also defined.

 NEWS                      |  18 ++-
 doc/gnuastro.texi         | 228 +++++++++++++++++--------------
 lib/arithmetic.c          |  10 +-
 lib/gnuastro/arithmetic.h |   1 +
 lib/gnuastro/units.h      |   3 +
 lib/units.c               |  33 ++-
 6 files changed, 160 insertions(+), 133 deletions(-)

diff --git a/NEWS b/NEWS
index c02577a8..808be250 100644
--- a/NEWS
+++ b/NEWS
@@ -15,6 +15,11 @@ See the end of the file for license conditions.
azimuthal ranges (all other pixels are ignored). Suggested and
implemented by Ignacio Ruiz Cejudo.
+*** Library
+
+  - gal_units_wavelength_flux_density_to_jy: convert given wavelength flux
+    density (erg/s/cm^2/A) to Janskys.
+
** Removed features
** Changed features
** Bugs fixed
@@ -114,14 +119,15 @@ See the end of the file for license conditions.
example in the book for more.
*** Library
-  - gal_statistics_concentration: measure the concentration of values around
-    the median; see the book for the details.
-  - gal_units_jy_to_wavelength_flux_density: convert Janskys to wavelength
-    flux density.
+  - gal_statistics_concentration: measure the concentration of values
+    around the median; see the book for the details.
+
+  - gal_units_jy_to_wavelength_flux_density: convert Janskys to wavelength
+    flux density.
-  - gal_units_zeropoint_change: change the zero point of the input data set
-    to an output zero point.
+  - gal_units_zeropoint_change: change the zero point of the input data set
+    to an output zero point.
** Removed features
** Changed features
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index ae304e74..61067857 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -21915,7 +21915,7 @@ Convert Janskys to AB magnitude, see @ref{Brightness flux magnitude}.
 @cindex Wavelength flux density
 @cindex Flux density (wavelength)
 Convert Janskys to wavelength flux density (in units of @mymath{erg/cm^2/s/\AA}) at a certain wavelength (given in units of Angstroms).
-Recall that Jansky is also a unit of flux density, but it is in units of frequency (@mymath{erg/cm^2/s/Hz}).
+Recall that Jansky is also a unit of spectral flux density, but it is in units of frequency (@mymath{erg/cm^2/s/Hz}).
 For example at the wavelength of 5556, Vega's frequency flux density is 3500 Janskys.
 To convert this to wavelength flux density, you can use this command:

@@ -21932,6 +21932,27 @@ $ astarithmetic 21 mag-to-jy 6600 \
                 jy-to-wavelength-flux-density
 @end example
+The conversion is done based on this derivation: the speed of light (@mymath{c}) can be written as:
+
+@dispmath{c=2.99792458\times10^8 m/s = 2.99792458\times10^{18} \AA Hz}
+
+The speed of light connects the wavelength (@mymath{\lambda}) and frequency (@mymath{\nu}) of photons: @mymath{c=\nu\lambda}.
+Therefore, @mymath{\nu=c/\lambda} and taking the derivative: @mymath{d\nu=(c/\lambda^2)d\lambda} or @mymath{d\nu/d\lambda=c/\lambda^2}. Inserting physical values and units:
+
+@dispmath{d\nu/d\lambda = \frac{2.99792458\times10^{18} \AA Hz}{\lambda^2\AA^2} = \frac{2.99792458\times10^{18}}{\lambda^2} Hz/\AA}
+
+Recall that to convert a function of @mymath{\nu} into a function of @mymath{\lambda}, where @mymath{\nu} and @mymath{\lambda} are also related to each other, we have the following equation: @mymath{f(\lambda) = \frac{d\nu}{d\lambda} f(\nu)}.
+Here, @mymath{f(\nu)} is the value in Janskys (@mymath{J\times10^{-23}erg/s/cm^2/Hz}; see @ref{Brightness flux magnitude}).
+Replacing @mymath{d\nu/d\lambda} from above we get:
+
+@dispmath{F_\lambda=\frac{2.99792458\times10^{18}}{\lambda^2}Hz/\AA \times J\times10^{-23}erg/s/cm^2/Hz}
+
+@dispmath{F_\lambda=J\times\frac{2.99792458\times10^{-5}}{\lambda^2} erg/s/cm^2/\AA}
+
+@item wavelength-flux-density-to-jy
+Convert wavelength flux density (in units of @mymath{erg/cm^2/s/\AA}) to Janskys at a certain wavelength (given in units of Angstroms).
+For details and usage examples, see the description of @code{jy-to-wavelength-flux-density} (the inverse of this function).
+
 @item au-to-pc
@cindex Parsecs
@cindex Astronomical Units (AU)
@@ -29350,93 +29371,116 @@ It might even be so intertwined with its processing, that adding new columns mig
@node Brightness flux magnitude, Quantifying measurement limits, Detection and catalog production, MakeCatalog
@subsection Brightness, Flux, Magnitude and Surface brightness
@cindex ADU
@cindex Gain
@cindex Counts
Astronomical data pixels are usually in units of counts@footnote{Counts are also known as analog to digital units (ADU).} or electrons or either one divided by seconds.
To convert from the counts to electrons, you will need to know the instrument gain.
In any case, they can be directly converted to energy or energy/time using the basic hardware (telescope, camera and filter) information (that is summarized in the @emph{zero point}, which we will discuss below).
We will continue the discussion assuming the pixels are in units of energy/time.

@table @asis
@cindex Flux
@cindex Luminosity
@cindex Brightness
@item Brightness
-The @emph{brightness} of an object is defined as its measured energy in units of time.
-If our detector pixels directly measured the energy from the astronomical object@footnote{In practice, the measured pixels don't just count the astronomical object's energy: imaging detectors insert a certain bias level before the exposure, they amplify the photo-electrons, there are optical artifacts like flat-fielding, and finally, there is the background light.}, then the brightness would be the total sum of pixel values (energy) associated to the object, divided by the exposure time.
+@cindex Quantum efficiency
+The @emph{brightness} of an object is defined as its @emph{measured} power (energy in units of time).
+In astrophysics, cgs units are commonly used, so brightness is reported in units of @mymath{erg/s}.
+In an image, the brightness of a nearby galaxy for example will be distributed over many pixels.
+Therefore, the brightness of an object in the image is the total sum of the values in the pixels (proxy for the energy collected in that pixel) associated to the object, divided by the exposure time.
+
+Since it is @emph{measured}, brightness is not an inherent property of the object: at different distances, the same object will have different brightness measures.
+Brightness is also affected by many other factors, which we can summarize as @emph{artifacts}.
+Here are some examples (among many others):
+@itemize
+@item
+The brightness of other background/foreground sources in the same line of sight may be added to it.
+@item
+Photons may be absorbed or scattered on their way to the detector (which reduces its brightness).
+@item
+The distortion or point spread function of the optical system may change its shape.
+@item
+Electronic issues in the detector may cause different measurements of the same object in different pixels.
+@end itemize
+Through various data reduction and data analysis methods that are applied to the raw observations, we (try to!) remove all these artifacts to have the ``pure'' brightness of each object in the image.
+But dealing with the artifacts is beyond the scope of this section, so let's assume the reduction and analysis were done perfectly and we don't have artifacts.
+
+@cindex Luminosity
 The @emph{flux} of an object is defined in units of energy/time/collecting-area.
-For an astronomical target, the flux is therefore defined as its brightness divided by the area used to collect the light from the source; or the telescope aperture (for example, in units of @mymath{cm^2}).
+The collecting area is the telescope aperture (usually in units of @mymath{cm^2}).
+Therefore, flux is usually written in astrophysics literature as @mymath{erg/s/cm^2}.
Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
-Therefore, while flux and luminosity are intrinsic properties of the object, brightness depends on our detecting tools (hardware and software).
+Therefore, while flux and luminosity are intrinsic properties of the object, the brightness we measure depends on our detecting tools (hardware and software).
 In low-level observational astronomy data analysis, we are usually more concerned with measuring the brightness, because it is the thing we directly measure from the image pixels and create in catalogs.
 On the other hand, luminosity is used in higher-level analysis (after image contents are measured as catalogs to deduce physical interpretations, because high-level things like distance/redshift need to be calculated).
 At this stage, it is just important to avoid confusion between luminosity and brightness because both have the same units of energy per second.
+@cindex Jansky (Jy)
+@cindex Spectral Flux Density
+@cindex Frequency Flux Density
+@cindex Wavelength Flux Density
+@cindex Flux density (spectral)
+An important fact that we have ignored until now is the wavelength (or frequency) range that the incoming brightness is measured in.
+Like other objects in nature, astronomical objects do not emit or reflect the same flux at all wavelengths.
+On the other hand, our detector technologies are different for different wavelength ranges.
+Therefore, even if we wanted to, there is no way to measure the ``total'' (at all wavelengths) brightness of an object with a single tool.
+It is therefore important to account for the wavelength (or frequency) range of the light that we are measuring.
+For that we have @emph{spectral flux density}, which is defined in either of these units (based on context): @mymath{erg/s/cm^2/Hz} (frequency-based) or @mymath{erg/s/cm^2/\AA} (wavelength-based).
+
+A ``Jansky'' is an existing unit of frequency-based spectral flux density (commonly used in radio astronomy), such that @mymath{1Jy=10^{-23}erg/s/cm^2/Hz}.
+Janskys can be converted to wavelength flux density using the @code{jy-to-wavelength-flux-density} operator of Gnuastro's Arithmetic program; see the derivation under this operator's description in @ref{Unit conversion operators}.
+
+Having clarified the basic physical concepts above, let's review the terminology that is used in optical astronomy to refer to them.
+The reason optical astronomers don't use modern physical terminology is that optical astronomy precedes modern physical concepts by thousands of years.
+@table @asis
@item Magnitude
@cindex Magnitudes from flux
@cindex Flux to magnitude conversion
@cindex Astronomical Magnitude system
-Images of astronomical objects span over a very large range of brightness: the Sun (as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times brighter than the fainter galaxies we can currently detect in the deepest images.
-Therefore discussing brightness directly will involve a large range of values which is inconvenient.
-So astronomers have chosen to use a logarithmic scale for the brightness of astronomical objects.
+The spectral flux density of astronomical objects spans over a very large range: the Sun (as the brightest object) is roughly @mymath{10^{24}} times brighter than the faintest galaxies we can currently detect in our deepest images.
+Therefore discussing spectral flux density directly will involve a large range of values which can be inconvenient and hard to visualize/understand/discuss.
+Optical astronomers have chosen to use a logarithmic scale for the spectral flux density of astronomical objects.
@cindex Hipparchus of Nicaea
-But the logarithm can only be usable with a dimensionless value that is always positive.
-Fortunately brightness is always positive (at least in theory@footnote{In practice, for very faint objects, if the background brightness is over-subtracted, we may end up with a negative ``brightness'' or sum of pixels in a real object.}).
-To remove the dimensions, we divide the brightness of the object (@mymath{B}) by a reference brightness (@mymath{B_r}).
-We then define a logarithmic scale as @mymath{magnitude} through the relation below.
+But the logarithm can only be usable with a value which is always positive and has no units.
+Fortunately brightness is always positive.
+To remove the units, we divide the spectral flux density of the object (@mymath{F}) by a reference spectral flux density (@mymath{F_r}).
+We then define a logarithmic scale through the relation below and call it the @emph{magnitude}.
 The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our ancient colleagues and in particular Hipparchus of Nicaea (190-120 BC).

-@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
+@dispmath{m-m_r=-2.5\log_{10} \left( F \over F_r \right)}
@noindent
@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the predefined magnitude of the reference brightness.
For estimating the error in measuring a magnitude, see @ref{Quantifying measurement limits}.
+The equation above is ultimately a relative relation.
+To tie it to physical units, astronomers use the concept of a zero point which is discussed in the next item.
+
@item Zero point
@cindex Zero point magnitude
@cindex Magnitude zero point
-A unique situation in the magnitude equation above occurs when the reference brightness is unity (@mymath{B_r=1}).
-This brightness will thus summarize all the hardware-specific parameters discussed above (like the conversion of pixel values to physical units) into one number.
-That reference magnitude is commonly known as the @emph{Zero point} magnitude because when @mymath{B=B_r=1}, the right side of the magnitude definition above will be zero.
-Using the zero point magnitude (@mymath{Z}), we can write the magnitude relation above in a more simpler format:
+A unique situation in the magnitude equation above occurs when the reference spectral flux density is unity (@mymath{F_r=1}).
+In other words, it is the increase in brightness that produces an increment in the detector's native measurement units (usually referred to as ``analog to digital units'', or ADUs, also known as ``counts'').
+The word ``increment'' was intentionally used because ADUs are discrete and measured as integer counts.
+In other words, an increase in spectral flux density that is below @mymath{F_r} will not be measured by the device.
+The reference magnitude (@mymath{m_r}) that corresponds to @mymath{F_r} is known as the @emph{Zero point} magnitude of that image.
-@dispmath{m = -2.5\log_{10}(B) + Z}
+The increase in brightness (from an astrophysical source) that produces an increment in ADUs depends on all hardware and observational parameters that the image was taken in.
+These include the quantum efficiency of the detector, the detector's coating, the transmission of the optical path, the filter transmission curve, the atmospheric absorption (for ground-based images; for example thin high-altitude clouds or at low altitudes), etc.
+The zero point therefore allows us to summarize all these ``observational'' (non-astrophysical) factors into a single number and compare different observations from different instruments at different times (critical to do science).
+
+Using the zero point magnitude (@mymath{m_r=Z}), we can write the magnitude relation above in a simpler format (recall that @mymath{F_r=1}):
+
+@dispmath{m = -2.5\log_{10}(F) + Z}
@cindex Jansky (Jy)
@cindex AB magnitude
@cindex Magnitude, AB
-Gnuastro has an installed script to estimate the zero point of any image, see @ref{Zero point estimation} (it contains practical tutorials to help you get started fast).
-Having the zero point of an image, you can convert its pixel values to physical units like micro-Janskys (or @mymath{\mu{}Jy}).
-This enables direct pixel-based comparisons with images from other instruments@footnote{Comparing data from different instruments assumes instrument and observation signatures are properly corrected, things like the flat-field or the Sky absorption.
-It is also valid for pixel values, assuming that factors that can change the morphology (like the @ref{PSF}) are the same.}.
-Jansky is a commonly used unit for measuring spectral flux density and one Jansky is equivalent to @mymath{10^{-26} W/m^2/Hz} (watts per square meter per hertz).
-
-This conversion can be done with the fact that in the AB magnitude standard@footnote{@url{https://en.wikipedia.org/wiki/AB_magnitude}}, @mymath{3631Jy} corresponds to the zeroth magnitude, therefore @mymath{B\equiv3631\times10^{6}\mu{Jy}} and @mymath{m\equiv0}.
-We can therefore estimate the brightness (@mymath{B_z}, in @mymath{\mu{Jy}}) corresponding to the image zero point (@mymath{Z}) using this equation:
+The zero point is found through comparison of measurements with pre-defined standards (in other words, it is a calibration of the pixel values).
+Gnuastro has an installed script with a complete tutorial to estimate the zero point of any image, see @ref{Zero point estimation}.
-@dispmath{m - Z = -2.5\log_{10}(B/B_z)}
-@dispmath{0 - Z = -2.5\log_{10}({3631\times10^{6}\over B_z})}
-@dispmath{B_z = 3631\times10^{\left(6 - {Z \over 2.5} \right)} \mu{Jy}}
+Having the zero point of an image, you can convert its pixel values to the same physical units as the reference that the zero point was measured on.
+Historically, the reference was defined to be measurements of the star Vega (with the same instrument and same environment), producing the @emph{vega magnitude} sysytem where the star Vega had a magnitude of zero (similar to the catalog of Hipparchus of Nicaea).
+However, this caused many problems because Vega itself has its unique spectral features which are not in other stars and it is a variable star when measured precisely.
@cindex SDSS
-Because the image zero point corresponds to a pixel value of @mymath{1}, the @mymath{B_z} value calculated above also corresponds to a pixel value of @mymath{1}.
-Therefore you simply have to multiply your image by @mymath{B_z} to convert it to @mymath{\mu{Jy}}.
-Do not forget that this only applies when your zero point was also estimated in the AB magnitude system.
-On the command-line, you can estimate this value for a certain zero point with AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
-For example, let's assume you are using an SDSS image with a zero point of 22.5:
+Therefore, based on previous efforts, in 1983 Oke & Gunn @url{https://ui.adsabs.harvard.edu/abs/1983ApJ...266..713O, proposed} the AB (absolute) magnitude system from accurate spectroscopy of Vega.
+To avoid confusion with the ``absolute magnitude'' of a source (at a fixed distance), this magnitude system is always written as AB magnitude.
+The equation below was defined such that a star with a flat spectrum around @mymath{5480\AA} has a similar magnitude in the AB and Vega-based systems, where @mymath{F_\nu} is the frequency-based spectral flux density (in units of @mymath{erg/s/cm^2/Hz}):
-@example
-bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
-astarithmetic sdss.fits $bz x --output=sdss-in-muJy.fits
-@end example
+@dispmath{m_{AB} = -2.5\log_{10}(F_\nu) - 48.60}
 @noindent
-But in Gnuastro, it gets even easier: Arithmetic has an operator called @code{counts-to-jy}.
-This will directly convert your image pixels (in units of counts) to Janskys through a provided AB Magnitude-based zero point like below.
-See @ref{Arithmetic operators} for more.
+Reversing this equation and using Janskys, an object with a magnitude of zero (@mymath{m_{AB}=0}) has a spectral flux density of @mymath{3631Jy}.
+Once the AB magnitude zero point of an image is found, you can directly convert any measurement on it from instrument ``counts'' (ADUs) to Janskys.
+In Gnuastro, the Arithmetic program has an operator called @code{counts-to-jy} which will do this through a given AB Magnitude-based zero point like below (SDSS data have a fixed zero point of 22.5 in the AB magnitude system):
 @example
 $ astarithmetic sdss.fits 22.5 counts-to-jy
@@ -29444,33 +29488,24 @@ $ astarithmetic sdss.fits 22.5 counts-to-jy
@cartouche
@noindent
-@strong{Be careful with the exposure time:} as described at the start of this section, we are assuming your data are in units of counts/sec.
-As a result, the counts you get from the command above, are only for one second of exposure!
-Please see the discussion below in ``Magnitude to counts'' for more.
+@strong{Verify the zero point usage in new databases:} observational factors like the exposure time, the gain (how many electrons correspond to one ADU), telescope aperture, filter transmission curve and other factors are usually taken into account in the reduction pipeline that produces high-level science products, to provide a zero point that directly converts pixel values (in whatever units) to Janskys.
+But some reduction pipelines may not account for some of these, for special reasons: for example, not accounting for the gain or exposure time.
+To avoid strange results when using a new database, verify whether the zero point it provides directly converts pixel values to Janskys (is an AB magnitude zero point), or not.
+If not, you can follow the steps described below.
+This information is usually in the documentation of the database.
@end cartouche
@item Magnitude to counts (accounting for exposure time)
@cindex Exposure time
Until now, we had assumed that the data are in units of counts/sec.
As a result, the equations given above (in the ``Zero point'' item to convert magnitudes to pixel counts), give the count level for the reference (1 second) exposure.
But we rarely take 1 second exposures!
It is therefore very important to take the exposure time into account in scenarios like simulating observations with varying exposure times (where you need to know how many counts the object of a certain magnitude will add to a certain image with a certain exposure time).

To clarify the concept, let's define @mymath{C} as the @emph{counted} electrons (which has a linear relation with the photon energy entering the CCD pixel).
In this case, if an object of brightness @mymath{B} is observed for @mymath{t} seconds, it will accumulate @mymath{C=B\times t} counts@footnote{Recall that counts another name for ADUs, which already includes the CCD gain.}.
Therefore, the generic magnitude equation above can be written as:
@dispmath{m = 2.5\log_{10}(B) + Z = 2.5\log_{10}(C/t) + Z}
@noindent
From this, we can derive @mymath{C(t)} in relation to @mymath{C(1)}, or counts from a 1 second exposure, using this relation:
@dispmath{C(t) = t\times10^{(mZ)/2.5} = t\times C(1)}
In other words, you should simply multiply the counts for one second with the number of observed seconds.

Another approach is to shift the timedependence of the counts into the zero point (after all exposure time is also a hardware issue).
Let's derive the equation below:
@dispmath{m = 2.5\log_{10}(C/t) + Z = 2.5\log_{10}(C) + 2.5\log_{10}(t) + Z}
Therefore, defining an exposure-time-dependent zero point as @mymath{Z(t)}, we can directly correlate a certain object's magnitude with counts after an exposure of @mymath{t} seconds:
@dispmath{m = -2.5\log_{10}(C) + Z(t) \quad\rm{where}\quad Z(t)=Z + 2.5\log_{10}(t)}
This solution is useful in programs like @ref{MakeCatalog} or @ref{MakeProfiles}, when you cannot (or do not want to, because of the extra storage/speed costs) manipulate the image's pixel values (for example, divide them by the exposure time to use a counts/sec zero point).
+Let's look at one example where the given zero point has not accounted for the exposure time (in other words, it is only valid for a fixed exposure time: @mymath{Z_E}), but the pixel values (@mymath{p}) have been corrected for the exposure time.
+One solution would be to first multiply the pixels by the exposure time, use that zero point and delete the temporary file.
+But a more optimal way (in terms of storage, execution and clean code) would be to correct the zero point.
+Let's take @mymath{t} to show time in units of seconds and @mymath{p_E} to be the pixel value that would be measured after the fixed exposure time (in other words @mymath{p_E=p\times t}).
+We then have the following:
+
+@dispmath{m = -2.5\log_{10}(p_E) + Z_E = -2.5\log_{10}(p\times t) + Z_E}
+
+From the properties of logarithms, we can then derive the correct zero point (@mymath{Z}) to use directly (without touching the original pixels):
+
+@dispmath{m = -2.5\log_{10}(p) + Z \quad\rm{where}\quad Z = Z_E - 2.5\log_{10}(t)}
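This zero point correction can be sanity-checked numerically. The sketch below is illustrative only (not Gnuastro code); the values of @mymath{Z_E}, @mymath{t} and @mymath{p} are arbitrary:

```python
import math

def mag(pixel, zeropoint):
    # Generic magnitude equation: m = -2.5*log10(p) + Z.
    return -2.5 * math.log10(pixel) + zeropoint

t = 600.0    # exposure time in seconds (arbitrary)
Z_E = 25.0   # zero point valid only for the fixed exposure time
p = 0.5      # exposure-time-corrected pixel value (counts/sec)

# Either multiply the pixel by t and use Z_E, or keep the pixel
# untouched and use the corrected zero point Z = Z_E - 2.5*log10(t):
m_direct = mag(p * t, Z_E)
m_corrected = mag(p, Z_E - 2.5 * math.log10(t))
print(abs(m_direct - m_corrected) < 1e-12)  # True: both give the same magnitude
```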
@item Surface brightness
@cindex Steradian
@@ -29478,15 +29513,15 @@ This solution is useful in programs like @ref{MakeCatalog} or @ref{MakeProfiles}
@cindex Celestial sphere
@cindex Surface brightness
@cindex SI (International System of Units)
Another important concept is the distribution of an object's brightness over its area.
+An important concept is the distribution of an object's brightness over its area.
For this, we define the @emph{surface brightness} to be the magnitude of an object's brightness divided by its solid angle over the celestial sphere (or coverage in the sky, commonly in units of arcsec@mymath{^2}).
The solid angle is expressed in units of arcsec@mymath{^2} because astronomical targets are usually much smaller than one steradian.
Recall that the steradian is the dimensionless SI unit of a solid angle and 1 steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial sphere.
Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
For example, when the brightness is measured over an area of A arcsec@mymath{^2}, then the surface brightness becomes:
+For example, when the spectral flux density is measured over an area of A arcsec@mymath{^2}, then the surface brightness becomes:
@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
+@dispmath{S = -2.5\log_{10}(F/A) + Z = -2.5\log_{10}(F) + 2.5\log_{10}(A) + Z}
@noindent
In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units of arcsec@mymath{^2}) through this equation:
@@ -29495,22 +29530,13 @@ In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is re
A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and divide the object's magnitude by its area.
But this is wrong because magnitude is a logarithmic scale while area is linear.
It is the brightness that should be divided by the solid angle because both have linear scales.
+It is the spectral flux density that should be divided by the solid angle because both have linear scales.
The magnitude of that ratio is then defined to be the surface brightness.
One usual application of this is to convert an image's pixel values to surface brightness, when you know its zero point.
This can be done with the two simple commands below.
First, we derive the pixel area (in arcsec@mymath{^2}), then we use Arithmetic to convert the pixels into surface brightness (see below for the details).

@example
$ zeropoint=22.5
$ pixarea=$(astfits image.fits --pixelareaarcsec2)
$ astarithmetic image.fits $zeropoint $pixarea counts-to-sb \
                --output=image-sb.fits
@end example

See @ref{Reverse polish notation} for more on Arithmetic's notation and @ref{Arithmetic operators} for a description of each operator.
And see @ref{FITS images in a publication} for a fully working tutorial on how to optimally convert a FITS image to a PDF image for usage in a publication using the surface brightness conversion shown above.
+Besides applications in catalogs and the resulting scientific analysis, converting pixels to surface brightness is usually a good way to display a FITS file in a publication!
+See @ref{FITS images in a publication} for a fully working tutorial on how to do this.
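The arithmetic behind this conversion can be sketched in a few lines. This is an illustrative re-statement of the surface brightness equation above (not Gnuastro's implementation; the zero point and pixel values are arbitrary, and the real operator also handles blank pixels):

```python
import math

def counts_to_sb(pixel, zeropoint, pixarea):
    """Surface brightness (mag/arcsec^2) of one pixel:
    S = -2.5*log10(p/A) + Z, with A in arcsec^2."""
    return -2.5 * math.log10(pixel / pixarea) + zeropoint

# A pixel value of 1 count over a 0.04 arcsec^2 pixel, zero point 22.5:
print(counts_to_sb(1.0, 22.5, 0.04))
```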
+@end table
@cartouche
@noindent
@@ -29518,11 +29544,9 @@ And see @ref{FITS images in a publication} for a fully working tutorial on how t
Convolution is also a process of finding the weighted mean of pixel values.
During these processes, many arithmetic operations are done on the original pixel values, for example, addition or multiplication.
However, @mymath{\log_{10}(a+b)\ne \log_{10}(a)+\log_{10}(b)}.
Therefore after calculating a magnitude or surface brightness image, do not apply any such operations on it!
If you need to warp or convolve the image, do it @emph{before} the conversion.
+Therefore if you generate color, magnitude or surface brightness images (where pixels are in units of magnitudes), do not apply any such operations on them!
+If you need to warp or convolve the image, do it @emph{before} the conversion to magnitude-based units.
@end cartouche
@end table

diff --git a/lib/arithmetic.c b/lib/arithmetic.c
index 4cafe8eb..6dc49d51 100644
--- a/lib/arithmetic.c
+++ b/lib/arithmetic.c
@@ -3098,6 +3098,9 @@ arithmetic_function_binary_flt(int operator, int flags, gal_data_t *il,
case GAL_ARITHMETIC_OP_JY_TO_WAVELENGTH_FLUX_DENSITY:
BINFUNC_F_OPERATOR_SET( gal_units_jy_to_wavelength_flux_density,
+0 ); break;
+ case GAL_ARITHMETIC_OP_WAVELENGTH_FLUX_DENSITY_TO_JY:
+ BINFUNC_F_OPERATOR_SET( gal_units_wavelength_flux_density_to_jy,
+ +0 ); break;
case GAL_ARITHMETIC_OP_COUNTS_TO_NANOMAGGY:
BINFUNC_F_OPERATOR_SET( gal_units_counts_to_nanomaggy, +0 ); break;
case GAL_ARITHMETIC_OP_NANOMAGGY_TO_COUNTS:
@@ -4114,6 +4117,9 @@ gal_arithmetic_set_operator(char *string, size_t *num_operands)
else if (!strcmp(string, "jy-to-wavelength-flux-density"))
{ op=GAL_ARITHMETIC_OP_JY_TO_WAVELENGTH_FLUX_DENSITY;
*num_operands=2; }
+ else if (!strcmp(string, "wavelength-flux-density-to-jy"))
+ { op=GAL_ARITHMETIC_OP_WAVELENGTH_FLUX_DENSITY_TO_JY;
+ *num_operands=2; }
/* Celestial coordinate conversions. */
else if (!strcmp(string, "eqb1950-to-eqj2000"))
@@ -4463,6 +4469,8 @@ gal_arithmetic_operator_string(int operator)
case GAL_ARITHMETIC_OP_JY_TO_MAG: return "jy-to-mag";
case GAL_ARITHMETIC_OP_JY_TO_WAVELENGTH_FLUX_DENSITY:
return "jy-to-wavelength-flux-density";
+ case GAL_ARITHMETIC_OP_WAVELENGTH_FLUX_DENSITY_TO_JY:
+ return "wavelength-flux-density-to-jy";
case GAL_ARITHMETIC_OP_AU_TO_PC: return "au-to-pc";
case GAL_ARITHMETIC_OP_PC_TO_AU: return "pc-to-au";
case GAL_ARITHMETIC_OP_LY_TO_PC: return "ly-to-pc";
@@ -4757,6 +4765,7 @@ gal_arithmetic(int operator, size_t numthreads, int flags, ...)
case GAL_ARITHMETIC_OP_NANOMAGGY_TO_COUNTS:
case GAL_ARITHMETIC_OP_COUNTS_TO_NANOMAGGY:
case GAL_ARITHMETIC_OP_JY_TO_WAVELENGTH_FLUX_DENSITY:
+ case GAL_ARITHMETIC_OP_WAVELENGTH_FLUX_DENSITY_TO_JY:
d1 = va_arg(va, gal_data_t *);
d2 = va_arg(va, gal_data_t *);
out=arithmetic_function_binary_flt(operator, flags, d1, d2);
@@ -4776,7 +4785,6 @@ gal_arithmetic(int operator, size_t numthreads, int flags, ...)
d2 = va_arg(va, gal_data_t *);
d3 = va_arg(va, gal_data_t *);
out=arithmetic_counts_to_from_sb(operator, flags, d1, d2, d3);

break;
/* Statistical operators that return one value. */
diff --git a/lib/gnuastro/arithmetic.h b/lib/gnuastro/arithmetic.h
index 53600bca..31716cea 100644
--- a/lib/gnuastro/arithmetic.h
+++ b/lib/gnuastro/arithmetic.h
@@ -158,6 +158,7 @@ enum gal_arithmetic_operators
GAL_ARITHMETIC_OP_MAG_TO_JY, /* Magnitude to Janskys. */
GAL_ARITHMETIC_OP_JY_TO_MAG, /* Janskys to Magnitude. */
GAL_ARITHMETIC_OP_JY_TO_WAVELENGTH_FLUX_DENSITY, /* Janskys to f_\lambda. */
+ GAL_ARITHMETIC_OP_WAVELENGTH_FLUX_DENSITY_TO_JY, /* f_\lambda to Janskys. */
GAL_ARITHMETIC_OP_ZEROPOINT_CHANGE, /* Change the zero point. */
GAL_ARITHMETIC_OP_COUNTS_TO_NANOMAGGY,/* Counts to SDSS nanomaggies. */
GAL_ARITHMETIC_OP_NANOMAGGY_TO_COUNTS,/* SDSS nanomaggies to counts. */
diff --git a/lib/gnuastro/units.h b/lib/gnuastro/units.h
index 1e33df35..e02bee5e 100644
--- a/lib/gnuastro/units.h
+++ b/lib/gnuastro/units.h
@@ -115,6 +115,9 @@ gal_units_jy_to_mag(double jy);
double
gal_units_jy_to_wavelength_flux_density(double jy, double wavelength);
+double
+gal_units_wavelength_flux_density_to_jy(double wfd, double angstrom);
+
double
gal_units_mag_to_jy(double mag);
diff --git a/lib/units.c b/lib/units.c
index bd73d2b0..08cd5a3f 100644
--- a/lib/units.c
+++ b/lib/units.c
@@ -515,42 +515,27 @@ gal_units_jy_to_mag(double jy)
/* Converting Janskys ($f(\nu)$ or flux in units of frequency) to
 $f(\lambda)$ (or wavelength flux density). In the equations below, we'll
 write $\nu$ as 'n' and '\lambda' as 'l'. The speed of light is written
 as 'c'.

 Basics:

 1. c = 2.99792458e+08 m/s = 2.99792458e+18 A*Hz

2. One Jansky is defined as 10^{-23} erg/s/cm^2/Hz; see
 https://en.wikipedia.org/wiki/Jansky
+ $f(\lambda)$ (or wavelength flux density). See the description of this
+ operator in the book for its derivation.*/
+double
+gal_units_jy_to_wavelength_flux_density(double jy, double angstrom)
+{
+ return jy * 2.99792458e-05 / (angstrom*angstrom);
+}
 3. The speed of light connects the wavelength and frequency of photons:
c=l*n. So n=c/l and taking the derivative: dn=-(c/(l*l))*dl or
|dn/dl|=c/(l*l). Inserting physical values and units:
 dn/dl = (2.99792458e+18 A*Hz)/(l*l A^2) = 2.99792458e+18/(l^2) Hz/A.
 4. To convert a function of A into a function of B, where A and B are
 also related to each other, we have the following equation: f(A) =
 dB/dA * f(B).
 5. Using 2 (definition of Jansky as f(n)) and 3 in 4, we get:
f(l) = 2.99792458e+18/(l^2) Hz/A * 10^{-23} erg/s/cm^2/Hz
 = 2.99792458e-5/(l^2) erg/s/cm^2/A
*/
double
gal_units_jy_to_wavelength_flux_density(double jy, double angstrom)
+gal_units_wavelength_flux_density_to_jy(double wfd, double angstrom)
{
 return jy * 2.99792458e-05 / (angstrom*angstrom);
+ return wfd * (angstrom*angstrom) / 2.99792458e-05;
}
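The two one-line functions in this diff can be mirrored in a few lines to check that they are exact inverses of each other. This is an illustrative re-implementation of the same formulas in Python (not a call into the installed library):

```python
def jy_to_wavelength_flux_density(jy, angstrom):
    # f(lambda) = f(nu) * c/lambda^2, with c in Angstrom*Hz
    # (2.99792458e+18) and the Jansky definition
    # (1 Jy = 1e-23 erg/s/cm^2/Hz) folded into one constant.
    return jy * 2.99792458e-05 / (angstrom * angstrom)

def wavelength_flux_density_to_jy(wfd, angstrom):
    # Exact inverse of the conversion above.
    return wfd * (angstrom * angstrom) / 2.99792458e-05

# Round trip at 5480 Angstroms should recover the input:
jy = 3631.0
print(wavelength_flux_density_to_jy(
      jy_to_wavelength_flux_density(jy, 5480.0), 5480.0))
```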

double
gal_units_mag_to_jy(double mag)
{
To: gnuastrocommits@gnu.org
Subject: [gnuastrocommits] master 7d61e871: Book: further edits on
brightness, flux and magnitude section
Date: Thu, 8 Aug 2024 07:42:34 -0400 (EDT)
From: Mohammad Akhlaghi
branch: master
commit 7d61e87186a53a4d7eb2d08961f74d22d2032efd
Author: Mohammad Akhlaghi
Commit: Mohammad Akhlaghi
Book: further edits on brightness, flux and magnitude section
Until now, the various important physical concepts (and their units) were
blended into the paragraph texts, making them hard to notice for a fast
access to one definition. However, later for the optical astronomy
terminology, we had divided each in a separate "table".
With this commit, the physical concepts are also discussed in a similar
format to the optical astronomy terms. The text was also edited to fit into
this structure.

doc/gnuastro.texi | 109 ++++++++++++++++++++++++++-----------------------
1 file changed, 53 insertions(+), 56 deletions(-)
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 61067857..21505c7b 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -29371,55 +29371,52 @@ It might even be so intertwined with its processing, that adding new columns mig
@node Brightness flux magnitude, Quantifying measurement limits, Detection and catalog production, MakeCatalog
@subsection Brightness, Flux, Magnitude and Surface brightness
@cindex Flux
@cindex Luminosity
+After taking an image with your smartphone camera (or a large telescope), the value in each pixel of the output file is a proxy for the amount of @emph{energy} that accumulated in it (while it was exposed to light).
+In astrophysics, centimetre-gram-second (cgs) units are commonly used, so energy is commonly reported in units of @mymath{erg}.
+In an image, the energy of a galaxy, for example, will be distributed over many pixels.
+Therefore, the collected energy of an object in the image is the total sum of the values in the pixels associated with the object.
+
+To be able to compare our scientific data with other data, optical astronomers have a unique terminology based on the concept of ``magnitudes''.
+But before getting to those, let's review the following basic physical concepts first:
+
+@table @asis
+@item Brightness (@mymath{erg/s})
@cindex Brightness
@cindex Quantum efficiency
The @emph{brightness} of an object is defined as its @emph{measured} power (energy in units of time).
In astrophysics cgi units are commonly used so brightness is reported in units of @mymath{erg/s}.
In an image, the brightness of a nearby galaxy for example will be distributed over many pixels.
Therefore, the brightness of an object in the image, is the total sum of the values in the pixels (proxy for the energy collected in that pixel) associated to the object, divided by the exposure time.

Since it is @emph{measured}, brightness is not an inhrenet property of the object: at different distances, the same object will have different brightness measures.
Brightness is also affected by many other factors, which we can summarize as @emph{artifacts}.
Here are some examples (among many others):
@itemize
@item
The brightness of other background/foreground sources in the same line of sight may be added to it.
@item
Photons may be absorbed or scattered on their way to the detector (which reduce its brightness).
@item
The distortion or point spread function of the optical system may change its shape.
@item
Electronic issues in the detector may cause different measurements of the same object in different pixels.
@end itemize
Through various data reduction and data analysis methods that are implemented on the raw observations, we (try to!) remove all these artifacts to have the ``pure'' brighntess of each object in the image.
But dealing with the artifacts is beyond the scope of this section, so let's assume the reduction and analysis were done perfectly and we don't have artifacts.
+To be able to compare with data taken with different exposure times, we define the @emph{brightness} which is the measured power (energy divided by time).
+
+@item Flux (@mymath{erg/s/cm^2})
+@cindex Flux
+To be able to compare with data from different telescope collecting areas, we define the @emph{flux}: brightness per collecting area.
+This area is historically reported in units of @mymath{cm^2}.
@cindex Luminosity
The @emph{flux} of an object is defined in units of energy/time/collectingarea.
The collecting area is the telescope aperture (usually in units of @mymath{cm^2}).
Therefore, flux is usually written in astrophysics literature as @mymath{erg/s/cm^2}.
Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
Therefore, while flux and luminosity are intrinsic properties of the object, the brightness we measure depends on our detecting tools (hardware and software).
+This is the total power, in @mymath{erg/s}, the object emits in all directions.
+Luminosity has the same units as brightness, but as shown above its interpretation is very different: unlike brightness (a measured property), luminosity is an inherent property of the object that is calculated from the combination of multiple measurements (flux and distance).
+Our focus in this section is on direct measurements of electromagnetic energy, not position-related measurements, so we do not use or describe luminosity any more in this section and have not allocated a separate item in this list for it.
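The luminosity relation @mymath{L=4{\pi}r^2f} above can be illustrated with a short numeric sketch (not Gnuastro code; the flux and distance values are hypothetical, and the parsec-to-centimeter factor is the standard value):

```python
import math

PC_IN_CM = 3.0856775814913673e18  # one parsec in centimeters

def luminosity(flux, distance_pc):
    """Luminosity (erg/s) from flux (erg/s/cm^2) and distance (parsecs):
    L = 4*pi*r^2*f, the energy spread over a sphere of radius r."""
    r = distance_pc * PC_IN_CM
    return 4.0 * math.pi * r * r * flux

# Hypothetical object: flux of 1e-13 erg/s/cm^2 at 10 parsecs.
print(luminosity(1e-13, 10.0))
# Doubling the distance quadruples the luminosity for the same flux:
print(luminosity(1e-13, 20.0) / luminosity(1e-13, 10.0))  # 4.0
```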
@cindex Jansky (Jy)
+@item Spectral flux density (@mymath{erg/s/cm^2/Hz} or @mymath{erg/s/cm^2/\AA})
@cindex Spectral Flux Density
@cindex Frequency Flux Density
@cindex Wavelength Flux Density
@cindex Flux density (spectral)
An important fact that we have ignored until now is the wavelength (or frequency) range that the incoming brightness is measured in.
+To take into account the spectral coverage of our data, we define the @emph{spectral flux density}, which is defined in either of these units (based on context): @mymath{erg/s/cm^2/Hz} (frequency-based) or @mymath{erg/s/cm^2/\AA} (wavelength-based).
+
Like other objects in nature, astronomical objects do not emit or reflect the same flux at all wavelengths.
On the other hand, our detector techologies are different for different wavelength ranges.
Therefore, even if we wanted to, there is no way to measure the ``total'' (at all wavelengths) brightness of an object with a single tool.
It is therefore important to account for the wavelength (or frequency) range of the light that we are measuring.
For that we have @emph{spectral flux density}, which is defined in either of these units (based on context): @mymath{erg/s/cm^2/Hz} (frequencybased) @mymath{erg/s/cm^2/\AA} (wavelengthbased).
+To be able to analyze objects with different spectral features, it is therefore important to account for the wavelength (or frequency) range of the photons that we measured through the spectral flux density.
A ``Jansky'' is an existing unit of frequency-based spectral flux density (commonly used in radio astronomy), such that @mymath{1Jy=10^{-23}erg/s/cm^2/Hz}.
+@item Jansky (@mymath{10^{-23}erg/s/cm^2/Hz})
+@cindex Jansky (Jy)
+A ``Jansky'' is a certain value of frequency flux density (commonly used in radio astronomy).
Janskys can be converted to wavelength flux density using the @code{jy-to-wavelength-flux-density} operator of Gnuastro's Arithmetic program, see the derivation under this operator's description in @ref{Unit conversion operators}.
+@end table
Having clarified, the basic physical concepts above, let's review the terminology that is used in optical astronomy to refer to them.
+Having clarified the basic physical concepts above, let's review the terminology that is used in optical astronomy.
The reason optical astronomers don't use modern physical terminology is that optical astronomy precedes modern physical concepts by thousands of years.
+As a result, once the modern physical concepts were mature enough, optical astronomers found the correct conversion factors to better define their own terminology (and easily use previous results) instead of abandoning it.
+Other fields of astronomy (for example X-ray or radio) were discovered in the last century when modern physical concepts had already matured and were being extensively used, so for those fields, the concepts above are enough.
@table @asis
@item Magnitude
@@ -29427,20 +29424,19 @@ The reason optical astronomers don't use modern physical terminology is that opt
@cindex Flux to magnitude conversion
@cindex Astronomical Magnitude system
The spectral flux density of astronomical objects span over a very large range: the Sun (as the brightest object) is roughly @mymath{10^{24}} times brighter than the fainter galaxies we can currently detect in our deepest images.
Therefore discussing spectral flux density directly will involve a large range of values which can be inconvenient and hard to visualize/understand/discuss.
Optical astronomers have chosen to use a logarithmic scale for the spectral flux density of astronomical objects.
+Therefore the scale that was originally used from ancient times to measure the incoming light (written down by Hipparchus of Nicaea; 190-120 BC) can be nicely parametrized as a logarithmic function of the spectral flux density.
@cindex Hipparchus of Nicaea
But the logarithm is only usable with a value that is always positive and has no units.
Fortunately brightness is always positive.
To remove the units, we divide the spectral flux density of the object (@mymath{F}) by a reference spectral flux density (@mymath{F_r}).
We then define a logarithmic scale through the relation below and call it the @emph{magnitude}.
The @mymath{2.5} factor in the definition of magnitudes is a legacy of the our ancient colleagues and in particular Hipparchus of Nicaea (190120 BC).
+The @mymath{2.5} factor is also a legacy of our ancient origins: it was necessary to match the magnitude system of Hipparchus, which was used extensively in the centuries after.
@dispmath{m-m_r=-2.5\log_{10} \left( F \over F_r \right)}
@noindent
@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the predefined magnitude of the reference brightness.
+@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the predefined magnitude of the reference spectral flux density.
For estimating the error in measuring a magnitude, see @ref{Quantifying measurement limits}.
The equation above is ultimately a relative relation.
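The relative nature of this equation can be shown numerically; the sketch below is illustrative only (not Gnuastro code), using the convenient fact that a factor of 100 in spectral flux density corresponds to exactly 5 magnitudes:

```python
import math

def delta_mag(f, f_ref):
    # m - m_r = -2.5*log10(F/F_r): only the ratio of the two
    # spectral flux densities matters, not their absolute values.
    return -2.5 * math.log10(f / f_ref)

print(delta_mag(100.0, 1.0))  # -5.0: 100 times brighter is 5 magnitudes smaller
```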
@@ -29450,16 +29446,17 @@ To tie it to physical units, astronomers use the concept of a zero point which i
@cindex Zero point magnitude
@cindex Magnitude zero point
A unique situation in the magnitude equation above occurs when the reference spectral flux density is unity (@mymath{F_r=1}).
In other words, the increase in brightness that produces to an increment in the detector's native measurement units (usually referred to as ``analog to digital units'', or ADUs, also known as ``counts'').
The word ``increment'' was intentionally used because ADUs are discrete and measured as integer counts.
+In other words, the increase in spectral flux density that produces an increment in the detector's native measurement units (usually referred to as ``analog to digital units'', or ADUs, also known as ``counts'').
+
+The word ``increment'' above is used intentionally, because ADUs are discrete and measured as integer counts.
In other words, an increase in spectral flux density that is below @mymath{F_r} will not be measured by the device.
The reference magnitude (@mymath{m_r}) that corresponds to @mymath{F_r} is known as the @emph{Zero point} magnitude of that image.
The increase in brightness (from an astrophysical source) that produces an increment in ADUs depends on all hardware and observational parameters that the image was taken in.
These include the quantum efficiency of the dector, the detector's coating, the transmission of the optical path, the filter transmission curve, the atmospheric absorption (for groundbased images; for example thin highaltitude clouds or at low altitudes) and etc.
The zero point therefore allows us to summarize all these ``observational'' (nonastrophysical) factors into a single number and compare different observations from different instruments at different times (critical to do science).
+The increase in spectral flux density (from an astrophysical source) that produces an increment in ADUs depends on all hardware and observational parameters that the image was taken in.
+These include the quantum efficiency of the detector, the detector's coating, the filter transmission curve, the transmission of the optical path, the atmospheric absorption (for ground-based images; for example observations at different altitudes from the horizon where the thickness of the atmosphere is different), etc.
Using the zero point magnitude (@mymath{m_r=Z}), we can write the magnitude relation above in a simpler format (recall that @mymath{F_r=1}):
+The zero point therefore allows us to summarize all these ``observational'' (nonastrophysical) factors into a single number and compare different observations from different instruments at different times (critical to do science).
+Defining the zero point magnitude as @mymath{m_r=Z} in the magnitude equation, we can write it in a simpler format (recall that @mymath{F_r=1}):
@dispmath{m = -2.5\log_{10}(F) + Z}
@@ -29467,14 +29464,13 @@ Using the zero point magnitude (@mymath{m_r=Z}), we can write the magnitude rela
@cindex Magnitude, AB
The zero point is found through comparison of measurements with predefined standards (in other words, it is a calibration of the pixel values).
Gnuastro has an installed script with a complete tutorial to estimate the zero point of any image, see @ref{Zero point estimation}.

Having the zero point of an image, you can convert its pixel values to the same physical units as the reference that the zero point was measured on.
Historically, the reference was defined to be measurements of the star Vega (with the same instrument and same environment), producing the @emph{vega magnitude} sysytem where the star Vega had a magnitude of zero (similar to the catalog of Hipparchus of Nicaea).
+Historically, the reference was defined to be measurements of the star Vega, producing the @emph{vega magnitude} system.
+In this system, the star Vega had a magnitude of zero (similar to the catalog of Hipparchus of Nicaea).
However, this caused many problems because Vega itself has its unique spectral features which are not in other stars and it is a variable star when measured precisely.
Therefore, based on previous efforts, in 1983 Oke & Gunn @url{https://ui.adsabs.harvard.edu/abs/1983ApJ...266..713O,proposed} the AB (absolute) magnitude system from accurate spectroscopy of Vega.
To avoid confusion with the ``absolute magnitude'' of a source (at a fixed distance), this magnitude system is always written as AB magnitude.
The equation below was defined such that a star with a flat spectra around @mymath{5480\AA} have a similar magnitude in the AB and Vegabased systems, where @mymath{F_\nu} is the frequencybased spectral flux density (in units of @mymath{erg/s/cm^2/Hz}):
+The AB magnitude zero point (when the input is frequency flux density; @mymath{F_\nu} with units of @mymath{erg/s/cm^2/Hz}) was defined such that a star with a flat spectrum around @mymath{5480\AA} has a similar magnitude in the AB and Vega-based systems:
@dispmath{m_{AB} = -2.5\log_{10}(F_\nu) - 48.60}
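As a numeric check of this definition (illustrative sketch, not Gnuastro code): the AB reference of roughly 3631 Jy corresponds to an AB magnitude of zero, since @mymath{3631Jy=3.631\times10^{-20}erg/s/cm^2/Hz}:

```python
import math

def ab_mag(f_nu):
    # m_AB = -2.5*log10(F_nu) - 48.60, with F_nu in erg/s/cm^2/Hz.
    return -2.5 * math.log10(f_nu) - 48.60

# 3631 Jy (the approximate AB reference) in cgs units:
print(ab_mag(3631.0 * 1e-23))  # approximately 0 (48.60 is a rounded constant)
```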
@@ -29489,14 +29485,13 @@ $ astarithmetic sdss.fits 22.5 counts-to-jy
@cartouche
@noindent
@strong{Verify the zero point usage in new databases:} observational factors like the exposure time, the gain (how many electrons correspond to one ADU), telescope aperture, filter transmission curve and other factors are usually taken into account in the reduction pipeline that produces high-level science products to provide a zero point that directly converts pixel values (in whatever units) to Janskys.
But some reduction pipelines may not account for some these for special reasons: for example not account for the gain or exposure time.
To avoid annoying strange results, when using a new database, verify that the zero points they provide directly converts pixel values to Janskys (is an AB magnitude zero point), or not.
If not, you can follow steps described below.
This information is usually in the documentation of the database.
+But some reduction pipelines may not account for some of these for special reasons: for example, they may not account for the gain or exposure time.
+To avoid annoying strange results, when using a new database, verify (in the documentation of the database) whether the zero point it provides directly converts pixel values to Janskys (is an AB magnitude zero point) or not.
+If not, you need to apply the corrections yourself.
@end cartouche
Let's look at one example where the given zero point has not accounted for the exposure time (in other words it is only for a fixed exposure time: @mymath{Z_E}), but the pixel values (@mymath{p}) have been corrected for the exposure time.
One solution would be to first multiply the pixels by the exposure time, use that zero point and delete the temporary file.
+One solution would be to first multiply the pixels by the exposure time, use that zero point to get your desired measurement, and delete the temporary file.
But a more optimal way (in terms of storage, execution and clean code) would be to correct the zero point.
Let's take @mymath{t} to show time in units of seconds and @mymath{p_E} to be the pixel value that would be measured after the fixed exposure time (in other words @mymath{p_E=p\times t}).
We then have the following:
@@ -29513,13 +29508,16 @@ From the properties of logarithms, we can then derive the correct zero point (@m
@cindex Celestial sphere
@cindex Surface brightness
@cindex SI (International System of Units)
An important concept is the distribution of an object's brightness over its area.
+The definition of magnitude above was for the total spectral flux density coming from an object (recall how we mentioned at the start of this section that the total energy of an object is calculated by summing all its pixels).
+The total flux is (mostly!) independent of the angular size of your pixels, so we didn't need to account for the pixel area.
+But when you want to study extended structures where the total magnitude is not desired (for example the substructure of a galaxy, or the brightness of the background sky), you need to report values that are independent of the area over which the total spectral flux density was measured.
+
For this, we define the @emph{surface brightness} to be the magnitude of an object's brightness divided by its solid angle over the celestial sphere (or coverage in the sky, commonly in units of arcsec@mymath{^2}).
The solid angle is expressed in units of arcsec@mymath{^2} because astronomical targets are usually much smaller than one steradian.
Recall that the steradian is the dimensionless SI unit of a solid angle and 1 steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial sphere.
Surface brightness is therefore most commonly expressed in units of mag/arcsec@mymath{^2}.
For example, when the spectral flux density is measured over an area of A arcsec@mymath{^2}, then the surface brightness becomes:
+For example, when the spectral flux density is measured over an area of A arcsec@mymath{^2}, the surface brightness is calculated by:
@dispmath{S = -2.5\log_{10}(F/A) + Z = -2.5\log_{10}(F) + 2.5\log_{10}(A) + Z}
@@ -29533,14 +29531,13 @@ But this is wrong because magnitude is a logarithmic scale while area is linear.
It is the spectral flux density that should be divided by the solid angle because both have linear scales.
The magnitude of that ratio is then defined to be the surface brightness.
One usual application of this is to convert an image's pixel values to surface brightness, when you know its zero point.
Besides applications in catalogs and the resulting scientific analysis, converting pixels to surface brightness is usually a good way to display a FITS file in a publication!
See @ref{FITS images in a publication} for a fully working tutorial on how to do this.
@end table
@cartouche
@noindent
-@strong{Do not warp or convolve magnitude or surface brightness images:} Warping an image involves calculating new pixel values (of the new pixel grid) from the old pixel values.
+@strong{Do not warp or convolve magnitude or surface brightness images:} Warping an image involves calculating new pixel values (of the new pixel grid) from the input grid's pixel values.
Convolution is also a process of finding the weighted mean of pixel values.
During these processes, many arithmetic operations are done on the original pixel values, for example, addition or multiplication.
However, @mymath{\log_{10}(a+b)\ne\log_{10}(a)+\log_{10}(b)}.
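The non-linearity noted above is easy to demonstrate numerically (a minimal illustrative sketch, not part of the book):

```python
import math

a, b = 100.0, 200.0
lhs = math.log10(a + b)              # log of the summed (linear) fluxes
rhs = math.log10(a) + math.log10(b)  # sum of the logs: a different number
print(lhs, rhs)   # ~2.477 vs ~4.301
```

This is why a magnitude or surface brightness image should first be converted back to linear flux, then warped or convolved, and only afterwards converted to the logarithmic scale again.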
From MAILER-DAEMON Thu Aug 08 15:19:23 2024
Subject: [gnuastro-commits] master 45775c65: MakeCatalog: new --meanerror
 option and re-write of std vs. std error
Date: Thu, 8 Aug 2024 15:19:21 -0400 (EDT)
From: Mohammad Akhlaghi
branch: master
commit 45775c651974736af82800f02f74c606f98de560
Author: Mohammad Akhlaghi
Commit: Mohammad Akhlaghi
MakeCatalog: new --meanerror option and re-write of std vs. std error

Until now, the section that described the difference between standard
deviation and standard error was confusing because it referred to the
"standard error" simply as "error" (which is the other name of the
standard deviation when dealing with a normal distribution). Also,
MakeCatalog didn't have any column to measure the standard error, and
there was no way to tell MakeCatalog to avoid pixel values when
measuring the error columns (when the standard deviation image already
contains the signal's sources of error).

With this commit, all these issues are addressed: the title of that
section of the book was edited to be clearer and the text was also
fully edited and re-written where necessary. Also, the '--meanerror'
option was added to let people measure the standard error, and the
'--novalinerror' option was defined so the pixel values are ignored
when measuring errors.
The issue of the confusing text of that section in the book was raised by
Rahna Payyasseri Thanduparackal.

 NEWS                    |  11 ++++
 bin/mkcatalog/args.h    |  29 +++++++++
 bin/mkcatalog/columns.c |  42 +++++++++++++
 bin/mkcatalog/main.h    |   1 +
 bin/mkcatalog/parse.c   |  36 ++++++
 bin/mkcatalog/ui.c      |   5 +
 bin/mkcatalog/ui.h      |   2 +
 doc/gnuastro.texi       | 138 ++++++++++++++++++++++++++++
 8 files changed, 181 insertions(+), 83 deletions(-)
diff --git a/NEWS b/NEWS
index 808be250..3553709d 100644
--- a/NEWS
+++ b/NEWS
@@ -6,6 +6,17 @@ See the end of the file for license conditions.
* Noteworthy changes in release X.XX (library XX.X.X) (YYYY-MM-DD)
** New publications
** New features
+*** MakeCatalog
+
+  --novalinerror: ignore pixel values in the values image when calculating
+  the variance of each label's pixels: only use the pixel's value in the
+  '--instd' image (which can also be the variance with the '--variance'
+  option). This is useful in scenarios where the STD/variance image is
+  not just the sky STD/variance, but also has the signal's contribution.
+
+  --meanerror: Measure the error of the mean for each label (object or
+  clump).
+
*** astscript-radial-profile
  --azimuth option can be called multiple times so the profile is generated
diff --git a/bin/mkcatalog/args.h b/bin/mkcatalog/args.h
index 3cf1bd5f..9110d606 100644
--- a/bin/mkcatalog/args.h
+++ b/bin/mkcatalog/args.h
@@ -162,6 +162,19 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_MANDATORY,
GAL_OPTIONS_NOT_SET
},
+ {
+ "novalinerror",
+ UI_KEY_NOVALINERRORS,
+ 0,
+ 0,
+ "Ignore pixel values when estimating errors.",
+ GAL_OPTIONS_GROUP_INPUT,
+ &p->novalinerror,
+ GAL_OPTIONS_NO_ARG_TYPE,
+ GAL_OPTIONS_RANGE_0_OR_1,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET
+ },
{
"forcereadstd",
UI_KEY_FORCEREADSTD,
@@ -1079,7 +1092,7 @@ struct argp_option program_options[] =
UI_KEY_SUMERROR,
0,
0,
- "Error (1-sigma) in measuring sum.",
+ "Standard deviation (error) in measuring sum.",
UI_GROUP_COLUMNS_BRIGHTNESS,
0,
GAL_TYPE_INVALID,
@@ -1130,6 +1143,20 @@ struct argp_option program_options[] =
GAL_OPTIONS_NOT_SET,
ui_column_codes_ll
},
+ {
+ "meanerror",
+ UI_KEY_MEANERROR,
+ 0,
+ 0,
+ "Error of the mean in object/clump.",
+ UI_GROUP_COLUMNS_BRIGHTNESS,
+ 0,
+ GAL_TYPE_INVALID,
+ GAL_OPTIONS_RANGE_ANY,
+ GAL_OPTIONS_NOT_MANDATORY,
+ GAL_OPTIONS_NOT_SET,
+ ui_column_codes_ll
+ },
{
"std",
UI_KEY_STD,
diff git a/bin/mkcatalog/columns.c b/bin/mkcatalog/columns.c
index 6e47ac54..b0dfb806 100644
 a/bin/mkcatalog/columns.c
+++ b/bin/mkcatalog/columns.c
@@ -1260,7 +1260,7 @@ columns_define_alloc(struct mkcatalogparams *p)
case UI_KEY_SUMERROR:
name = "SUM_ERROR";
unit = MKCATALOG_NO_UNIT;
- ocomment = "Error (1-sigma) in measuring sum.";
+ ocomment = "Standard deviation (error) in measuring sum.";
ccomment = ocomment;
otype = GAL_TYPE_FLOAT32;
ctype = GAL_TYPE_FLOAT32;
@@ -1318,6 +1318,23 @@ columns_define_alloc(struct mkcatalogparams *p)
ciflag[ CCOL_RIV_SUM ] = 1;
break;
+ case UI_KEY_MEANERROR:
+ name = "MEAN_ERROR";
+ unit = MKCATALOG_NO_UNIT;
+ ocomment = "Error of mean of sky-subtracted values";
+ ccomment = ocomment;
+ otype = GAL_TYPE_FLOAT32;
+ ctype = GAL_TYPE_FLOAT32;
+ disp_fmt = GAL_TABLE_DISPLAY_FMT_GENERAL;
+ disp_width = 10;
+ disp_precision = 5;
+ oiflag[ OCOL_NUM ] = ciflag[ CCOL_NUM ] = 1;
+ oiflag[ OCOL_SUM_VAR ] = ciflag[ CCOL_SUM_VAR ] = 1;
+ oiflag[ OCOL_SUM_VAR_NUM ] = ciflag[ CCOL_SUM_VAR_NUM ] = 1;
+ ciflag[ CCOL_RIV_NUM ] = 1;
+ ciflag[ CCOL_RIV_SUM_VAR ] = 1;
+ break;
+
case UI_KEY_STD:
name = "STD";
unit = MKCATALOG_NO_UNIT;
@@ -2365,7 +2382,7 @@ columns_define_alloc(struct mkcatalogparams *p)
to find variance and number of pixels used to find brightness are the
same). */
static double
-columns_sum_error(struct mkcatalogparams *p, double *row, int o0c1)
+columns_sum_std(struct mkcatalogparams *p, double *row, int o0c1)
{
size_t numind = o0c1 ? CCOL_NUM : OCOL_NUM;
double V = row[ o0c1 ? CCOL_SUM_VAR : OCOL_SUM_VAR ];
@@ -2397,7 +2414,7 @@ columns_sn(struct mkcatalogparams *p, double *row, int o0c1)
: 0.0 );
/* Return the derived value. */
- return (I-O) / columns_sum_error(p, row, o0c1);
+ return (I-O) / columns_sum_std(p, row, o0c1);
}
@@ -2932,7 +2949,7 @@ columns_fill(struct mkcatalog_passparams *pp)
break;
case UI_KEY_SUMERROR:
- ((float *)colarr)[oind] = columns_sum_error(p, oi, 0);
+ ((float *)colarr)[oind] = columns_sum_std(p, oi, 0);
break;
case UI_KEY_CLUMPSSUM:
@@ -2947,6 +2964,15 @@ columns_fill(struct mkcatalog_passparams *pp)
: NAN );
break;
+ /* In 'columns_sum_std', we take the square root of the sum of
+ variances. So here, we just need to divide by the total number
+ of elements used: since the mean is the sum/number and the number
+ is fixed, e(mean)=e(sum)/number. */
+ case UI_KEY_MEANERROR:
+ ((float *)colarr)[oind] = ( columns_sum_std(p, oi, 0)
+ / oi[OCOL_NUM] );
+ break;
+
case UI_KEY_STD:
((float *)colarr)[oind] =
gal_statistics_std_from_sums(oi[ OCOL_SUM ], oi[ OCOL_SUMP2 ],
@@ -3331,7 +3357,7 @@ columns_fill(struct mkcatalog_passparams *pp)
break;
case UI_KEY_SUMERROR:
- ((float *)colarr)[cind] = columns_sum_error(p, ci, 1);
+ ((float *)colarr)[cind] = columns_sum_std(p, ci, 1);
break;
case UI_KEY_SUMNORIVER:
@@ -3344,6 +3370,12 @@ columns_fill(struct mkcatalog_passparams *pp)
/ci[CCOL_NUM] );
break;
+ case UI_KEY_MEANERROR:
+ ((float *)colarr)[cind] = ( columns_sum_std(p, ci, 1)
+ / ( ci[CCOL_NUM]
+ + ci[CCOL_RIV_NUM] ) );
+ break;
+
case UI_KEY_STD:
((float *)colarr)[cind] =
gal_statistics_std_from_sums(oi[ CCOL_SUM ],
diff --git a/bin/mkcatalog/main.h b/bin/mkcatalog/main.h
index 221898f9..976e9847 100644
--- a/bin/mkcatalog/main.h
+++ b/bin/mkcatalog/main.h
@@ -262,6 +262,7 @@ struct mkcatalogparams
uint8_t noclumpsort; /* Don't sort the clumps catalog. */
float zeropoint; /* Zeropoint magnitude of object. */
uint8_t variance; /* Input STD file is actually variance. */
+ uint8_t novalinerror; /* Ignore values when estimating errors.*/
uint8_t forcereadstd; /* Read STD even if not needed. */
uint8_t subtractsky; /* ==1: subtract the Sky from values. */
float sfmagnsigma; /* Surface brightness multiple of sigma.*/
diff --git a/bin/mkcatalog/parse.c b/bin/mkcatalog/parse.c
index 25a00269..ede28844 100644
--- a/bin/mkcatalog/parse.c
+++ b/bin/mkcatalog/parse.c
@@ -169,11 +169,12 @@ parse_vector_dim3(struct mkcatalog_passparams *pp, gal_data_t *xybin)
double var;
int needsvar;
gal_data_t *vector=pp->vector;
+ uint8_t vine=!p->novalinerror;
float *std=p->std?p->std->array:NULL;
size_t c[3], *dsize=p->objects->dsize;
size_t sind=0, pind=0, num_increment=1;
+ float sval, *st_v, *st_std, *V=NULL, *ST=NULL;
uint8_t *xybinarr = xybin ? xybin->array : NULL;
- float st, sval, *st_v, *st_std, *V=NULL, *ST=NULL;
int32_t *st_o, *O, *OO, *objarr=p->objects->array;
size_t tid, *tsize, increment=0, start_end_inc[2], ndim=p->objects->ndim;
@@ -193,7 +194,8 @@ parse_vector_dim3(struct mkcatalog_passparams *pp, gal_data_t *xybin)
/* Prepare the parsing information. Also, if tileid isn't necessary, set
'tid' to a blank value to cause a crash with a mistake. */
- tsize=parse_vector_dim3_prepare(pp, start_end_inc, &st_o, &st_v, &st_std);
+ tsize=parse_vector_dim3_prepare(pp, start_end_inc, &st_o, &st_v,
+ &st_std);
tid = (p->std && p->std->size>1 && st_std == NULL)?0:GAL_BLANK_SIZE_T;
/* Check if we need the variance. */
@@ -242,8 +244,7 @@ parse_vector_dim3(struct mkcatalog_passparams *pp, gal_data_t *xybin)
we are given a variance dataset already, there is no
need to use 'st*st', we can directly use 'sval'. */
sval = st_std ? *ST : (p->std->size>1?std[tid]:std[0]);
- st = p->variance ? sqrt(sval) : sval;
- var = (p->variance ? sval : st*st) + fabs(*V);
+ var = (p->variance ? sval : sval*sval) + (vine?*V:0);
}
else var = NAN;
@@ -325,11 +326,11 @@ parse_objects(struct mkcatalog_passparams *pp)
double *oi=pp->oi;
gal_data_t *xybin=NULL;
size_t *tsize=pp->tile->dsize;
- uint8_t *u, *uf, goodvalue, *xybinarr=NULL;
double minima_v=FLT_MAX, maxima_v=-FLT_MAX;
size_t d, pind=0, increment=0, num_increment=1;
int32_t *O, *OO, *C=NULL, *objarr=p->objects->array;
- float var, sval, varval, skyval, *V=NULL, *SK=NULL, *ST=NULL;
+ float var, sval, skyval, *V=NULL, *SK=NULL, *ST=NULL;
+ uint8_t *u, *uf, goodvalue, vine=!p->novalinerror, *xybinarr=NULL;
float *std=p->std?p->std->array:NULL, *sky=p->sky?p->sky->array:NULL;
/* If tile processing isn't necessary, set 'tid' to a blank value. */
@@ -589,17 +590,13 @@ parse_objects(struct mkcatalog_passparams *pp)
standard deviation of the signal (independent of the
sky) is 'sqrt(*V)'. Therefore the total variance of
this pixel is the variance of the sky added with the
- absolute value of its sky-subtracted flux. We use the
- absolute value, because especially as the signal gets
- noisy there will be negative values, and we don't want
- them to decrease the variance. */
+ absolute value of its sky-subtracted flux. */
if(oif[ OCOL_SUM_VAR ] && goodvalue)
{
- varval=p->variance ? var : sval;
- if(!isnan(varval))
+ if(!isnan(var))
{
oi[ OCOL_SUM_VAR_NUM ]++;
- oi[ OCOL_SUM_VAR ] += varval + fabs(*V);
+ oi[ OCOL_SUM_VAR ] += var + (vine?*V:0);
}
}
}
@@ -725,13 +722,14 @@ parse_clumps(struct mkcatalog_passparams *pp)
double *ci, *cir;
gal_data_t *xybin=NULL;
+ uint8_t vine=!p->novalinerror;
int32_t *O, *OO, *C=NULL, nlab;
size_t cind, *tsize=pp->tile->dsize;
double *minima_v=NULL, *maxima_v=NULL;
uint8_t *u, *uf, goodvalue, *cif=p->ciflag;
size_t nngb=gal_dimension_num_neighbors(ndim);
size_t i, ii, d, pind=0, increment=0, num_increment=1;
- float var, sval, varval, skyval, *V=NULL, *SK=NULL, *ST=NULL;
+ float var, sval, skyval, *V=NULL, *SK=NULL, *ST=NULL;
int32_t *objects=p->objects->array, *clumps=p->clumps->array;
float *std=p->std?p->std->array:NULL, *sky=p->sky?p->sky->array:NULL;
@@ -995,11 +993,10 @@ parse_clumps(struct mkcatalog_passparams *pp)
}
if(cif[ CCOL_SUM_VAR ] && goodvalue)
{
- varval=p->variance ? var : sval;
- if(!isnan(varval))
+ if(!isnan(var))
{
ci[ CCOL_SUM_VAR_NUM ]++;
- ci[ CCOL_SUM_VAR ] += varval + fabs(*V);
+ ci[ CCOL_SUM_VAR ] += var+(vine?*V:0);
}
}
}
@@ -1077,8 +1074,9 @@ parse_clumps(struct mkcatalog_passparams *pp)
: ( p->std->size>1
? std[tid]
: std[0] ) );
- cir[ CCOL_RIV_SUM_VAR ] += fabs(*V)
- + (p->variance ? sval : sval*sval);
+ cir[ CCOL_RIV_SUM_VAR ] +=
+ (p->variance ? sval : sval*sval)
+ + (vine?*V:0);
}
}
}
diff --git a/bin/mkcatalog/ui.c b/bin/mkcatalog/ui.c
index f6517ccb..c7c2c4e5 100644
--- a/bin/mkcatalog/ui.c
+++ b/bin/mkcatalog/ui.c
@@ -382,7 +382,7 @@ ui_check_only_options(struct mkcatalogparams *p)
"If you want the upperlimit check table for an object, only "
"give one value (the object's label) to 'checkuplim'.");
- /* See if 'skyin' is a filename or a value. When the string is ONLY a
+ /* See if 'insky' is a filename or a value. When the string is ONLY a
number (and nothing else), 'tailptr' will point to the end of the
string ('\0'). */
if(p->skyfile)
@@ -1981,6 +1981,9 @@ ui_read_check_inputs_setup(int argc, char *argv[],
if(p->subtractsky)
printf("  Sky has been subtracted from values "
"internally.\n");
+ else
+ printf("  Sky NOT SUBTRACTED from values, only used "
+ "in needed columns.\n");
}
if(p->std)
diff --git a/bin/mkcatalog/ui.h b/bin/mkcatalog/ui.h
index c511689b..b092862c 100644
--- a/bin/mkcatalog/ui.h
+++ b/bin/mkcatalog/ui.h
@@ -84,6 +84,7 @@ enum option_keys_enum
UI_KEY_ZEROPOINT,
UI_KEY_SIGMACLIP,
UI_KEY_VARIANCE,
+ UI_KEY_NOVALINERRORS,
UI_KEY_SUBTRACTSKY,
UI_KEY_SFMAGNSIGMA,
UI_KEY_SFMAGAREA,
@@ -153,6 +154,7 @@ enum option_keys_enum
UI_KEY_SUMNORIVER,
UI_KEY_STD,
UI_KEY_MEAN,
+ UI_KEY_MEANERROR,
UI_KEY_MEDIAN,
UI_KEY_MAXIMUM,
UI_KEY_MAGNITUDEERROR,
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index 21505c7b..ea899289 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -696,7 +696,7 @@ MakeCatalog
Quantifying measurement limits
-* Standard deviation vs error:: The std is not a measure of the error.
+* Standard deviation vs Standard error:: Difference between these important measures.
* Magnitude measurement error of each detection:: Error in measuring magnitude.
* Surface brightness error of each detection:: Error in measuring the Surface brightness.
* Completeness limit of each detection:: Possibility of detecting similar objects?
@@ -29371,8 +29371,10 @@ It might even be so intertwined with its processing, that adding new columns mig
@node Brightness flux magnitude, Quantifying measurement limits, Detection and catalog production, MakeCatalog
@subsection Brightness, Flux, Magnitude and Surface brightness
-After taking an image with your smartphone camera (or a large telescope), the value in each pixel of the output file is a proxy for the amount of @emph{energy} that accumulated in it (while it was exposed to light).
-In astrophysics, the centimetre-gram-second (cgi) units are commonly used so energy is commonly reported in units of @mymath{erg}.
+@cindex cgs units (centimeter-gram-second)
+@cindex centimeter-gram-second (cgs) units
+After taking an image with your camera (or a large telescope), the value in each pixel of the output file is a proxy for the amount of @emph{energy} that accumulated in it (while it was exposed to light).
+In astrophysics, the centimeter-gram-second (cgs) units are commonly used so energy is commonly reported in units of @mymath{erg}.
In an image, the energy of a galaxy for example will be distributed over many pixels.
Therefore, the collected energy of an object in the image is the total sum of the values in the pixels associated to the object.
@@ -29386,8 +29388,8 @@ To be able to compare with data taken with different exposure times, we define t
@item Flux (@mymath{erg/s/cm^2})
@cindex Flux
-To be able to compare with data from different telescope collecting areas, we define the @emph{flux} which is defined in units of brightness per collecting-area.
-This area is historically reported in units of @mymath{cm^2}.
+To be able to compare with data from different telescopes (with different collecting areas), we define the @emph{flux} which is defined in units of brightness per collecting-area.
+Because we are using the cgs units, the collecting area is reported in @mymath{cm^2}.
@cindex Luminosity
Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
@@ -29405,11 +29407,11 @@ To take into account the spectral coverage of our data, we define the @emph{spec
Like other objects in nature, astronomical objects do not emit or reflect the same flux at all wavelengths.
On the other hand, our detector technologies are different for different wavelength ranges.
Therefore, even if we wanted to, there is no way to measure the ``total'' (at all wavelengths) brightness of an object with a single tool.
-To be able to analyze objects with different spectral features, it is therefore important to account for the wavelength (or frequency) range of the photons that we measured through the spectral flux density.
+To be able to analyze objects with different spectral features (compare measurements of the same object taken in different spectral regimes), it is therefore important to account for the wavelength (or frequency) range of the photons that we measured through the spectral flux density.
@item Jansky (@mymath{10^{-23}erg/s/cm^2/Hz})
@cindex Jansky (Jy)
-A ``Jansky'' is a certain value of frequency flux density (commonly used in radio astronomy).
+A ``Jansky'' is a certain level of frequency flux density (commonly used in radio astronomy).
Janskys can be converted to wavelength flux density using the @code{jy-to-wavelength-flux-density} operator of Gnuastro's Arithmetic program, see the derivation under this operator's description in @ref{Unit conversion operators}.
@end table
@@ -29568,7 +29570,7 @@ In astronomy, it is common to use the magnitude (a unitless scale) and physical
Therefore the measurements discussed here are commonly used in units of magnitudes.
@menu
-* Standard deviation vs error:: The std is not a measure of the error.
+* Standard deviation vs Standard error:: Difference between these important measures.
* Magnitude measurement error of each detection:: Error in measuring magnitude.
* Surface brightness error of each detection:: Error in measuring the Surface brightness.
* Completeness limit of each detection:: Possibility of detecting similar objects?
@@ -29578,14 +29580,19 @@ Therefore the measurements discussed here are commonly used in units of magnitud
* Upper limit surface brightness of image:: Measure the noise level for a certain aperture.
@end menu
-@node Standard deviation vs error, Magnitude measurement error of each detection, Quantifying measurement limits, Quantifying measurement limits
-@subsubsection Standard deviation vs error
-The error and the standard deviation are sometimes confused with each other.
-Therefore, before continuing with the various measurement limits below, let's review these two fundamental concepts.
-Instead of going into the theoretical definitions of the two (which you can see in their respective Wikipedia pages), we'll discuss the concepts in a hands-on and practical way here.
+@node Standard deviation vs Standard error, Magnitude measurement error of each detection, Quantifying measurement limits, Quantifying measurement limits
+@subsubsection Standard deviation vs Standard error
+The standard deviation and standard error are different concepts, conveying different aspects of a measurement, but they are easily confused with each other: for example, the standard deviation is also called the ``error''.
+A nice description of this difference is given in the following quote from Wikipedia.
+In this section, we'll show the concept described in this quote with a hands-on and practical set of commands to clarify this important distinction.
-Let's simulate an observation of the sky, but without any astronomical sources!
-In other words, we only have a background flux level (from the sky emission).
+@quotation
+The standard deviation of the sample data is a description of the variation in measurements, while the standard error of the mean is a probabilistic statement about how the sample size will provide a better bound on estimates of the population mean, in light of the central limit theorem. Put simply, the @emph{standard error} of the sample mean is an estimate of how far the sample mean is likely to be from the population mean, whereas the @emph{standard deviation} of the sample is the deg [...]
+@author Wikipedia page on ``Standard error'' (retrieved on 2024-08-09)
+@end quotation
+
+Let's simulate an observation of the sky, but without any astronomical sources to simplify the logic.
+In other words, we only have a background flux level (assume you want to measure the brightness of the twilight sky that is not yet dark enough for stars to be visible).
With the first command below, let's make an image called @file{1.fits} that contains @mymath{200\times200} pixels that are filled with random noise from a Poisson distribution with a mean of 100 counts (the flux from the background sky).
With the second command, we'll have a look at the image.
Recall that the Poisson distribution is equal to a normal distribution for large mean values (as in this case).
$ astscript-fits-view 1.fits
@end example
The standard deviation (@mymath{\sigma}) of the Poisson distribution is the square root of the mean, see @ref{Photon counting noise}.
-Note that due to the random nature of the noise, the values reported in the next steps on your computer will be very slightly different.
+Note that due to the random nature of the noise, the values reported in the next steps on your computer will be slightly different (which is one of the points of this section!).
To reproduce exactly the same values in different runs, see @ref{Generating random numbers}, and for more on the first command, see @ref{Arithmetic}.
Each pixel shows the result of one sampling from the Poisson distribution.
-In other words, assuming the sky emission in our simulation is constant over our field of view, each pixel's value shows one measurement of the sky emission.
+In other words, assuming the sky emission in our simulation is constant over our field of view, each pixel's value shows one measurement of the same sky emission.
Statistically speaking, a ``measurement'' is a sampling from an underlying distribution of values.
Through our measurements, we aim to identify that underlying distribution (the ``truth'')!
With the command below, let's look at the pixel statistics of @file{1.fits} (output is shown immediately under it).
@@ -29638,12 +29645,15 @@ Histogram:
@end example
As expected, you see that the ASCII histogram nicely resembles a normal distribution.
-The measured mean and standard deviation (@mymath{\sigma_x}) are also very similar to the input (mean of 100, standard deviation of @mymath{\sigma=10}).
+The measured mean and standard deviation are also very similar to the input (mean of 100, standard deviation of 10).
But the measured mean (and standard deviation) aren't exactly equal to the input!
Every time we make a different simulated image from the same distribution, the measured mean and standard deviation will slightly differ.
-With the second command below, let's build 500 images like above and measure their mean and standard deviation.
-The outputs will be written into a file (@file{mean-stds.txt}; in the first command we are deleting it to make sure we write into an empty file within the loop).
+Run the commands above one more time (this time calling the output @file{2.fits}) and check for yourself (actually open the two FITS images and check visually, don't just rely on the statistics).
+
+Now that you have a good feeling of the change, let's automate this and scale it up for some nice statistics.
+With the commands below, you will build 500 images like above, measure their mean and standard deviation, and save each measurement into a file (@file{mean-stds.txt}).
+In the first command we are deleting it to make sure we write into an empty file within the first run of the loop.
With the third command, let's view the top 10 rows:
@example
$ asttable mean-stds.txt -Y --head=10
100.000212 9.970293
@end example
-From this table, you see that each simulation has produced a slightly different measured mean and measured standard deviation (@mymath{\sigma_x}) that are just fluctuating around the input mean (which was 100) and input standard deviation (@mymath{\sigma=10}).
+From this table, you see that each simulation has produced a slightly different measured mean and measured standard deviation.
+They are just fluctuating around the input mean (which was 100) and input standard deviation (which was 10).
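For readers who prefer to see the same experiment outside the shell loop, here is a NumPy sketch of the 500-image simulation (illustrative only; the book's own workflow uses Gnuastro's command-line tools, and the fixed seed here is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)          # fixed seed for reproducibility
means, stds = [], []
for _ in range(500):                    # 500 simulated sky-only images
    img = rng.poisson(100, size=(200, 200))
    means.append(img.mean())
    stds.append(img.std())

# Each image's own std stays near sqrt(100)=10, while the scatter of
# the 500 means (the standard error) is near 10/sqrt(200*200)=0.05.
print(np.mean(stds), np.std(means))
```

Shrinking or enlarging the `(200, 200)` image size changes the scatter of the means (the standard error), but not the per-image standard deviation, which is the distinction this section is making.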
Let's have a look at the distribution of mean measurements:
@example
@@ -29703,48 +29714,46 @@ Histogram:
@end example
@cindex Standard error of mean
-The standard deviation of the various mean measurements above shows the scatter in measuring the mean with an image of this size from this underlying distribution.
-This is therefore defined as the @emph{standard error of the mean}, or ``error'' for short (since most measurements are actually the mean of a population) and shown with @mymath{\widehat\sigma_{\bar{x}}}.
+The standard deviation you see above (approximately @mymath{0.05}) shows the scatter in measuring the mean with an image of this size and is different from the standard deviation of the Poisson distribution that the values were drawn from.
+This is therefore defined as the @emph{standard error of the mean}, or ``standard error'' for short (since most measurements are actually the mean of a population).
From the example above, you see that the error is smaller than the standard deviation (smaller when you have a larger sample).
-In fact, @url{https://en.wikipedia.org/wiki/Standard_error#Derivation, it can be shown} that this ``error of the mean'' (@mymath{\sigma_{\bar{x}}}) is related to the distribution standard deviation (@mymath{\sigma}) through the following equation.
-Where @mymath{N} is the number of points used to measure the mean in one sample (@mymath{200\times200=40000} in this case).
+In fact, @url{https://en.wikipedia.org/wiki/Standard_error#Derivation, it can be shown} that this ``error of the mean'' (@mymath{\sigma_{\bar{x}}}; recall that @mymath{\bar{x}} represents the mean) is related to the distribution standard deviation (@mymath{\sigma}) through the following equation.
+Where @mymath{N} is the number of points used to measure the mean in one sample (@mymath{N=200\times200=40000} in this case).
Note that the @mymath{10.0} below was reported as ``standard deviation'' in the first run of @code{aststatistics} on @file{1.fits} above:
@c The 10.0 depends on the 'aststatistics 1.fits' command above.
-@dispmath{\sigma_{\bar{x}}=\frac{\sigma}{\sqrt{N}} \quad\quad {\rm or} \quad\quad \widehat\sigma_{\bar{x}}\approx\frac{\sigma_x}{\sqrt{N}} = \frac{10.0}{200} = 0.05}
+@dispmath{\sigma_{\bar{x}}\approx\frac{\sigma}{\sqrt{N}} = \frac{10.0}{200} = 0.05}
@noindent
-Taking the considerations above into account, we should clearly distinguish the following concepts when talking about the standard deviation or error:
+Therefore the standard error of the mean is directly related to the number of pixels you used to measure the mean.
+You can test this by changing the @code{200}s in the commands above to smaller or larger values.
+As you make larger and larger images, you will be able to measure the mean much more precisely (the standard error of the mean will go to zero).
+But no matter how many pixels you use, the standard deviation will always be the same.
+Within MakeCatalog, the options related to dispersion/error in the measurements have the following conventions:
@table @asis
-@item Standard deviation of population
-This is the standard deviation of the underlying distribution (10 in the example above), and shown by @mymath{\sigma}.
-This is something you can never measure, and is just the ideal value.
-
-@item Standard deviation of mean
-Ideal error of measuring the mean (assuming we know @mymath{\sigma}).
-
-@item Standard deviation of sample (i.e., @emph{Standard deviation})
-Measured Standard deviation from a sampling of the ideal distribution.
-This is the second column of @file{mean-stds.txt} above and is shown with @mymath{\sigma_x} above.
-In astronomical literature, this is simply referred to as the ``standard deviation''.
-
-In other words, the standard deviation is computed on the input itself and MakeCatalog just needs a ``values'' file.
-For example, when measuring the standard deviation of an astronomical object using MakeCatalog it is computed directly from the input values.
-
-@item Standard error (i.e., @emph{error})
-Measurable scatter of measuring the mean (@mymath{\widehat\sigma_{\bar{x}}}) that can be estimated from the size of the sample and the measured standard deviation (@mymath{\sigma_x}).
-In astronomical literature, this is simply referred to as the ``error''.
-
-In other words, when asking for an ``error'' measurement with MakeCatalog, a separate standard deviation dataset should be always provided.
-This dataset should take into account all sources of scatter.
-For example, during the reduction of an image, the standard deviation dataset should take into account the dispersion of each pixel that comes from the bias, dark, flat fielding, etc.
-If this image is not available, it is possible to use the @code{SKY_STD} extension from NoiseChisel as an estimation.
-For more see @ref{NoiseChisel output}.
+@item @option{*std}
+For example @option{--std} or @option{--sigclip-std}.
+These return the standard deviation of the values within a label.
+If the underlying object (in the noise) is flat, then this will be the @mymath{\sigma} that is mentioned above.
+However, no object in astronomy is flat!
+So this option should be used with extreme care!
+It only makes sense in special contexts like measuring the radial profile where we assume that the values at a certain radius have the same flux (see @ref{Generate radial profile}).
+
+@item @option{*error}
+For example @option{--magerror}, @option{--meanerror} or @option{--sumerror}.
+These options should be used when pixel values are different.
+When the pixels do not have the same value (for example different parts of one galaxy), their standard deviation is meaningless.
+To measure the total error in such cases, we need to know the standard deviation of each pixel separately.
+Therefore, for these columns, MakeCatalog needs a separate dataset that contains the underlying sky standard deviation for those pixels.
+That dataset should have the same size (number and dimension of pixels) as the values dataset.
+You can use @ref{NoiseChisel} to generate such an image.
+
+If the underlying profile and the sky standard deviation are flat, then @option{--sumerror} will be the standard deviation that we discussed in this section and @option{--meanerror} will be the standard error.
+When the values are different, the combined error is calculated by adding the variances (the second power of the standard deviation) of each pixel, added with its value.
+When the values are smaller than one, a correction is applied (that is defined in Section 3.3 of Akhlaghi and Ichikawa @url{https://arxiv.org/abs/1505.01664, 2015}).
@end table
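As a toy illustration of the distinction above, the standard error of the mean can be computed from the standard deviation and the sample size. This is a minimal Python sketch with made-up numbers (not taken from any Gnuastro dataset or implementation):

```python
import math
import statistics

# Made-up sample of sky-subtracted pixel values (purely illustrative).
values = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.1, 1.9]

std = statistics.stdev(values)          # scatter of the values themselves
stderr = std / math.sqrt(len(values))   # scatter of the estimated mean

print(f"standard deviation: {std:.4f}")
print(f"standard error of the mean: {stderr:.4f}")
```

Note that the standard error shrinks like @mymath{1/\sqrt{n}} as the sample grows, while the standard deviation does not.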
-@node Magnitude measurement error of each detection, Surface brightness error of each detection, Standard deviation vs error, Quantifying measurement limits
+@node Magnitude measurement error of each detection, Surface brightness error of each detection, Standard deviation vs Standard error, Quantifying measurement limits
@subsubsection Magnitude measurement error of each detection
The raw error in measuring the magnitude is only meaningful when the object's magnitude is brighter than the upper-limit magnitude (see below).
As discussed in @ref{Brightness flux magnitude}, the magnitude (@mymath{M}) of an object with brightness @mymath{B} and zero point magnitude @mymath{z} can be written as:
@@ -30415,6 +30424,7 @@ Within an image, pixels have both a position and a value.
In the sections above all the measurements involved position (see @ref{Position measurements in pixels} or @ref{Position measurements in WCS}).
The measurements in this section only deal with pixel values and ignore the pixel positions completely.
In other words, for the options of this section each labeled region within the input is just a group of values (and their associated error values if necessary), and they let you do various types of measurements on the resulting distribution of values.
+For more on the difference between the @option{--*error} and @option{--*std} columns, see @ref{Standard deviation vs Standard error}.
@table @option
@item --sum
@@ -30426,8 +30436,14 @@ So the sum of all the clumpsums in the clump catalog of one object will be smal
If no usable pixels are present over the clump or object (for example, they are all blank), the returned value will be NaN (note that zero is meaningful).
@item --sumerror
-The error in measuring the sum of values of a label (objects or clumps).
+The standard deviation of the sum of values of a label (objects or clumps).
The value is calculated using the values image (for signal above the sky level) and the sky standard deviation image (extension @option{--stdhdu} of the file given to @option{--instd}), which you can derive for any image using @ref{NoiseChisel}.
+This column is internally used to measure the signal-to-noise ratio (@option{--sn}).
+
+For objects, this is calculated by adding the sky variance (second power of the sky standard deviation image) of each pixel in the label, with the value of the pixel if that value is not negative (error only increases).
+This is done to account for brighter pixels, which have higher noise in the Poisson distribution (its side effect is that the error will always be slightly overestimated due to the positive values close to the noise).
+A correction may be applied if the sky-subtracted value is negative; see Section 3.3 of Akhlaghi & Ichikawa @url{https://arxiv.org/abs/1505.01664, 2015}.
+For clumps, the variance of the rivers (which are subtracted from the value of the pixels in calculating the sum) is also added to generate the final standard deviation.
The returned value will be NaN when the label covers only NaN pixels in the values image, or when a pixel is NaN in the @option{--instd} image but non-NaN in the values image.
The latter situation usually happens when there is a bug in the previous steps of your analysis.
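The variance-adding procedure described above can be sketched numerically. The following is a hedged toy model with invented values (not Gnuastro's actual implementation): the sky variance of every pixel is accumulated, and non-negative pixel values are added on top to account for their Poisson contribution.

```python
import math

# Made-up sky-subtracted pixel values and per-pixel sky standard
# deviations (in practice the latter come from NoiseChisel's SKY_STD).
values  = [3.2, 1.5, -0.4, 2.8]
sky_std = [1.0, 1.1, 0.9, 1.0]

# Accumulate the sky variance of each pixel; add the pixel value only
# when it is non-negative (the error can only increase).
variance = sum(s**2 + max(v, 0.0) for v, s in zip(values, sky_std))
sum_error = math.sqrt(variance)

print(f"sum = {sum(values):.2f} +/- {sum_error:.3f}")
```

The negative pixel (@code{-0.4}) contributes only its sky variance, never a negative Poisson term.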
@@ -30450,6 +30466,10 @@ If no usable pixels are present over the clump or object (for example, they are
The mean sky subtracted value of pixels within the object or clump.
For clumps, the average river flux is subtracted from the sky-subtracted mean.
+@item --meanerror
+The error in measuring the mean, using both the values file and the sky standard deviation image.
+In case the given standard deviation or variance image already contains the contributions from the pixel values (i.e., it is not just the sky standard deviation), use @option{--novalinerror}.
+
@item --std
The standard deviation of the pixels within the object or clump.
For clumps, the river pixels are not subtracted because that is a constant (per pixel) value and should not affect the standard deviation.
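As a rough sketch of the mean-error idea (a toy example with invented numbers; the full treatment of pixel values is as described for @option{--sumerror} above): once the error of the sum is known, dividing it by the number of pixels gives the error of the mean.

```python
import math

# Made-up per-pixel sky standard deviations over one label.
sky_std = [1.0, 1.1, 0.9, 1.0]
n = len(sky_std)

# Error of the sum from the sky variances alone (the pixel-value terms
# are omitted here for brevity).
sum_error = math.sqrt(sum(s**2 for s in sky_std))

# Error of the mean: the sum error divided by the number of pixels.
mean_error = sum_error / n

print(f"mean error: {mean_error:.4f}")
```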
@@ -30957,7 +30977,7 @@ For the full list of available measurements, see @ref{MakeCatalog measurements}.
The ``values'' dataset is used for measurements like brightness/magnitude, or flux-weighted positions.
If it is a real image, by default it is assumed to be already Sky-subtracted prior to running MakeCatalog.
-If it is not, you use the @option{--subtractsky} option to, so MakeCatalog reads and subtracts the Sky dataset before any processing.
+If it is not, you should use the @option{--subtractsky} option so that MakeCatalog reads and subtracts the Sky dataset before any processing.
To obtain the Sky value, you can use the @option{--sky} option of @ref{Statistics}, but the recommended method is @ref{NoiseChisel}; see @ref{Sky value}.
MakeCatalog can also do measurements on substructures of detections.
@@ -31044,7 +31064,11 @@ Therefore if the input standard deviation (or variance) image also contains the
The HDU of the Sky value standard deviation image.
@item --variance
-The dataset given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.
+The value/file given to @option{--instd} (and @option{--stdhdu}) has the Sky variance of every pixel, not the Sky standard deviation.
+
+@item --novalinerror
+The value/file given to @option{--instd} is not just due to the sky (noise), but also contains the contribution of the signal to each pixel's standard deviation or variance.
+If this option is given, the pixel values will be ignored when measuring the @option{--*error} columns.
@item --forcereadstd
Read the input STD image even if it is not required by any of the requested columns.