gnuastro-commits
From: Mohammad Akhlaghi
Subject: [gnuastro-commits] master 1536b0d: MakeCatalog: reported surface brightness limit accounting for area
Date: Sun, 21 Mar 2021 22:07:28 -0400 (EDT)

branch: master
commit 1536b0da0c52a8b1aa09f42ebe4bdaa3ec3742f8
Author: Mohammad Akhlaghi <mohammad@akhlaghi.org>
Commit: Mohammad Akhlaghi <mohammad@akhlaghi.org>

    MakeCatalog: reported surface brightness limit accounting for area
    
    Until now, the surface brightness limit reported in MakeCatalog's output
    headers didn't account for the area it was calculated over (the value of
    the '--sfmagarea' option). The equation of the surface brightness limit
    within the book had a similar problem.
    
    With this commit, both these issues have been corrected in the source of
    MakeCatalog and in the book. In particular, a new "Image surface
    brightness limit" section has been added to the third tutorial to discuss
    the concept with real examples on the data. In the process, many other
    necessary corrections were made, as listed below.
    
     - Book: the "Brightness, flux and magnitude" sub-section was brought to
       the MakeCatalog section. Until now, it was under the MakeProfiles
       section.
    
     - MakeCatalog now has an extra '--upperlimitsb' measurement which will
       return the upper-limit surface brightness level for the labeled object.
    
     - MakeCatalog now writes the calculated surface brightness values as
       FITS keywords into the output. Until now, they were in the form of
       'COMMENT' keywords. In order to do this, some major re-structuring of
       relevant functions was necessary.
    
     - In the library, I noticed that the 'txt_write_keys' function didn't have
       a working scenario for full comment lines, so one was added.
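    The corrected relation can be sketched numerically. Below is a minimal
    Python illustration of the fixed surface brightness limit formula (the
    function name and the numbers are hypothetical, not Gnuastro code): over
    N = area/pixarea pixels, the noise of the sum grows as sqrt(N), so the
    surface brightness (the sum divided by the area) reduces to
    nsigma*medstd/sqrt(area*pixarea).

    ```python
    import math

    def sb_limit(nsigma, medstd, zeropoint, pixarea, area):
        """Surface brightness limit (mag/arcsec^2) over 'area' arcsec^2.

        Sketch of the corrected relation in this commit, not MakeCatalog's
        actual implementation: -2.5*log10(nsigma*medstd/sqrt(area*pixarea))
        plus the zeropoint."""
        return (-2.5 * math.log10(nsigma * medstd / math.sqrt(area * pixarea))
                + zeropoint)

    # Hypothetical values: 3-sigma, per-pixel STD of 0.01 counts, zeropoint
    # of 22.5 mag, 0.04 arcsec^2 pixels, over 25 arcsec^2 (so the sqrt term
    # is exactly 1):
    print(round(sb_limit(3, 0.01, 22.5, 0.04, 25), 2))   # 26.31
    ```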
---
 NEWS                       |  24 ++
 bin/mkcatalog/args.h       |  14 +
 bin/mkcatalog/columns.c    |  27 ++
 bin/mkcatalog/mkcatalog.c  | 331 ++++++++---------
 bin/mkcatalog/mkcatalog.h  |  10 +-
 bin/mkcatalog/ui.c         |  18 +-
 bin/mkcatalog/ui.h         |   1 +
 bin/mkcatalog/upperlimit.c | 212 +++++------
 bin/mkcatalog/upperlimit.h |   4 +-
 doc/gnuastro.texi          | 886 ++++++++++++++++++++++++++++++---------------
 lib/fits.c                 |   2 +-
 lib/txt.c                  |   5 +
 12 files changed, 944 insertions(+), 590 deletions(-)

diff --git a/NEWS b/NEWS
index d17b68e..f291113 100644
--- a/NEWS
+++ b/NEWS
@@ -22,6 +22,12 @@ See the end of the file for license conditions.
      with the profile's value in one column and any requested measure in
      the other columns (any MakeCatalog measurement is possible).
 
+  Book:
+   - New "Image surface brightness limit" section added to the third
+     tutorial (on "Detecting large extended targets"). It describes the
+     different ways to measure a dataset's surface brightness limit and
+     upper-limit surface brightness level, using this tutorial's dataset.
+
   Arithmetic:
    - New operators (all also available in Table's column arithmetic):
      - sin: Trigonometric sine (input in degrees).
@@ -40,6 +46,13 @@ See the end of the file for license conditions.
      - counts-to-jy: Convert counts to Janskys through a zero point based
           on AB magnitudes.
 
+  MakeCatalog:
+   - Newly added measurement columns:
+     --upperlimitsb: upper-limit surface brightness for the given label
+       (object or clump). This is useful for measuring a dataset's
+       realistic surface-brightness limit (not extrapolated like the
+       surface brightness limit, but using the actual object footprint).
+
   Table:
    - When given a value of '_all', the '--noblank' option (that will remove
      all rows with a blank value in the given columns) will check all
@@ -111,6 +124,17 @@ See the end of the file for license conditions.
      9:00a.m. But in some cases, calibration images may be taken after
      that. So to be safer in general it was incremented by 2 hours.
 
+  MakeCatalog:
+   - Surface brightness limit calculations are now written as standard FITS
+     keywords in the output catalog/table. Until now, they were simply
+     stored as 'COMMENT' keywords with no name, so they were hard to parse.
+     From this version, the following keywords are also written into
+     the output table(s), see the "MakeCatalog output" section of the book
+     for more: 'SBLSTD', 'SBLNSIG', 'SBLMAGPX', 'SBLAREA', 'SBLMAG'.
+   - Upper-limit settings are also written into the output tables as
+     keywords (like surface brightness limit numbers above): 'UPNSIGMA',
+     'UPNUMBER', 'UPRNGNAM', 'UPRNGSEE', 'UPSCMLTP', 'UPSCTOL'.
+
   Library:
    - gal_fits_key_write_wcsstr: also takes WCS structure as argument.
    - gal_fits_key_read_from_ptr: providing a numerical datatype for the
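The new '--upperlimitsb' measurement announced above amounts to converting the label's upper-limit counts into mag/arcsec^2 over the label's own footprint. A rough sketch of that conversion under assumed values (hypothetical function and numbers, not MakeCatalog's code):

```python
import math

def upperlimit_sb(up_counts, npix, pix_arcsec2, zeropoint):
    """Upper-limit surface brightness (mag/arcsec^2) over a label's own
    footprint of 'npix' pixels: divide the upper-limit counts by the
    footprint area in arcsec^2, then convert to a magnitude."""
    area = npix * pix_arcsec2
    return -2.5 * math.log10(up_counts / area) + zeropoint

# Hypothetical numbers: 0.5 counts upper limit over a 100-pixel footprint
# with 0.04 arcsec^2 pixels and a 22.5 mag zeropoint:
print(round(upperlimit_sb(0.5, 100, 0.04, 22.5), 2))   # 24.76
```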
diff --git a/bin/mkcatalog/args.h b/bin/mkcatalog/args.h
index 6582d38..a1c18c6 100644
--- a/bin/mkcatalog/args.h
+++ b/bin/mkcatalog/args.h
@@ -1229,6 +1229,20 @@ struct argp_option program_options[] =
       ui_column_codes_ll
     },
     {
+      "upperlimitsb",
+      UI_KEY_UPPERLIMITSB,
+      0,
+      0,
+      "Upper-limit surface brightness (mag/arcsec^2).",
+      UI_GROUP_COLUMNS_BRIGHTNESS,
+      0,
+      GAL_TYPE_INVALID,
+      GAL_OPTIONS_RANGE_ANY,
+      GAL_OPTIONS_NOT_MANDATORY,
+      GAL_OPTIONS_NOT_SET,
+      ui_column_codes_ll
+    },
+    {
       "upperlimitonesigma",
       UI_KEY_UPPERLIMITONESIGMA,
       0,
diff --git a/bin/mkcatalog/columns.c b/bin/mkcatalog/columns.c
index 69cc29d..e7340c7 100644
--- a/bin/mkcatalog/columns.c
+++ b/bin/mkcatalog/columns.c
@@ -258,6 +258,7 @@ columns_wcs_preparation(struct mkcatalogparams *p)
       case UI_KEY_HALFMAXSB:
       case UI_KEY_HALFSUMSB:
       case UI_KEY_AREAARCSEC2:
+      case UI_KEY_UPPERLIMITSB:
       case UI_KEY_SURFACEBRIGHTNESS:
         pixscale=gal_wcs_pixel_scale(p->objects->wcs);
         p->pixelarcsecsq=pixscale[0]*pixscale[1]*3600.0f*3600.0f;
@@ -1416,6 +1417,22 @@ columns_define_alloc(struct mkcatalogparams *p)
           oiflag[ OCOL_UPPERLIMIT_B ] = ciflag[ CCOL_UPPERLIMIT_B ] = 1;
           break;
 
+        case UI_KEY_UPPERLIMITSB:
+          name           = "UPPERLIMIT_SB";
+          unit           = "mag/arcsec^2";
+          ocomment       = "Upper limit surface brightness over its footprint.";
+          ccomment       = ocomment;
+          otype          = GAL_TYPE_FLOAT32;
+          ctype          = GAL_TYPE_FLOAT32;
+          disp_fmt       = GAL_TABLE_DISPLAY_FMT_FLOAT;
+          disp_width     = 8;
+          disp_precision = 3;
+          p->hasmag      = 1;
+          p->upperlimit  = 1;
+          oiflag[ OCOL_NUMALL       ] = ciflag[ CCOL_NUMALL       ] = 1;
+          oiflag[ OCOL_UPPERLIMIT_B ] = ciflag[ CCOL_UPPERLIMIT_B ] = 1;
+          break;
+
         case UI_KEY_UPPERLIMITONESIGMA:
           name           = "UPPERLIMIT_ONE_SIGMA";
           unit           = MKCATALOG_NO_UNIT;
@@ -2545,6 +2562,11 @@ columns_fill(struct mkcatalog_passparams *pp)
           ((float *)colarr)[oind] = MKC_MAG(oi[ OCOL_UPPERLIMIT_B ]);
           break;
 
+        case UI_KEY_UPPERLIMITSB:
+          ((float *)colarr)[oind] = MKC_SB( oi[ OCOL_UPPERLIMIT_B ],
+                                            oi[ OCOL_NUMALL ] );
+          break;
+
         case UI_KEY_UPPERLIMITONESIGMA:
           ((float *)colarr)[oind] = oi[ OCOL_UPPERLIMIT_S ];
           break;
@@ -2889,6 +2911,11 @@ columns_fill(struct mkcatalog_passparams *pp)
             ((float *)colarr)[cind] = MKC_MAG(ci[ CCOL_UPPERLIMIT_B ]);
             break;
 
+          case UI_KEY_UPPERLIMITSB:
+            ((float *)colarr)[cind] = MKC_SB( ci[ CCOL_UPPERLIMIT_B ],
+                                              ci[ CCOL_NUMALL ] );
+            break;
+
           case UI_KEY_UPPERLIMITONESIGMA:
             ((float *)colarr)[cind] = ci[ CCOL_UPPERLIMIT_S ];
             break;
diff --git a/bin/mkcatalog/mkcatalog.c b/bin/mkcatalog/mkcatalog.c
index 55fe6e7..25d1f82 100644
--- a/bin/mkcatalog/mkcatalog.c
+++ b/bin/mkcatalog/mkcatalog.c
@@ -338,86 +338,92 @@ mkcatalog_wcs_conversion(struct mkcatalogparams *p)
 
 
 void
-mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
-                                   gal_list_str_t **comments, int withsky,
-                                   int withstd)
+mkcatalog_outputs_keys_numeric(gal_fits_list_key_t **keylist, void *number,
+                               uint8_t type, char *nameliteral,
+                               char *commentliteral, char *unitliteral)
 {
-  char *tmp, *str;
+  void *value;
+  value=gal_pointer_allocate(type, 1, 0, __func__, "value");
+  memcpy(value, number, gal_type_sizeof(type));
+  gal_fits_key_list_add_end(keylist, type, nameliteral, 0,
+                            value, 1, commentliteral, 0,
+                            unitliteral, 0);
+}
+
+
 
-  /* Basic classifiers for plain text outputs. */
-  if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
-    {
-      if( asprintf(&str, "--------- Input files ---------")<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
-    }
+
+
+void
+mkcatalog_outputs_keys_infiles(struct mkcatalogparams *p,
+                               gal_fits_list_key_t **keylist)
+{
+  char *stdname, *stdhdu, *stdvalcom;
+
+  gal_fits_key_list_title_add_end(keylist,
+                                  "Input files and/or configuration", 0);
 
   /* Object labels. */
-  if( asprintf(&str, "Objects: %s (hdu: %s).", p->objectsfile, p->cp.hdu)<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(comments, str, 0);
+  gal_fits_key_write_filename("INLAB", p->objectsfile, keylist, 0);
+  gal_fits_key_write_filename("INLABHDU", p->cp.hdu, keylist, 0);
 
   /* Clump labels. */
   if(p->clumps)
     {
-      if(asprintf(&str, "Clumps:  %s (hdu: %s).", p->usedclumpsfile,
-                  p->clumpshdu)<0)
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
+      gal_fits_key_write_filename("INCLU", p->usedclumpsfile, keylist, 0);
+      gal_fits_key_write_filename("INCLUHDU", p->clumpshdu, keylist, 0);
     }
 
-  /* Values dataset. */
+  /* Values image. */
   if(p->values)
     {
-      if( asprintf(&str, "Values:  %s (hdu: %s).", p->usedvaluesfile,
-                   p->valueshdu)<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
+      gal_fits_key_write_filename("INVAL", p->usedvaluesfile, keylist, 0);
+      gal_fits_key_write_filename("INVALHDU", p->valueshdu, keylist, 0);
     }
 
-  /* Sky dataset. */
-  if(withsky && p->sky)
+  /* Sky image/value. */
+  if(p->sky)
     {
       if(p->sky->size==1)
-        {
-          if( asprintf(&str, "Sky:     %g.", *((float *)(p->sky->array)) )<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-        }
+        mkcatalog_outputs_keys_numeric(keylist, p->sky->array,
+                                       p->sky->type, "INSKYVAL",
+                                       "Value of Sky used (a single number).",
+                                       NULL);
       else
         {
-          if( asprintf(&str, "Sky:     %s (hdu: %s).", p->usedskyfile,
-                       p->skyhdu)<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
+          gal_fits_key_write_filename("INSKY", p->usedskyfile, keylist, 0);
+          gal_fits_key_write_filename("INSKYHDU", p->skyhdu, keylist, 0);
         }
-      gal_list_str_add(comments, str, 0);
     }
 
-  /* Sky standard deviation dataset. */
-  tmp = p->variance ? "VAR" : "STD";
-  if(withstd && p->std)
+  /* Standard deviation (or variance) image. */
+  if(p->variance)
+    {
+      stdname="INVAR"; stdhdu="INVARHDU";
+      stdvalcom="Value of Sky variance (a single number).";
+    }
+  else
+    {
+      stdname="INSTD"; stdhdu="INSTDHDU";
+      stdvalcom="Value of Sky STD (a single number).";
+    }
+  if(p->std)
     {
       if(p->std->size==1)
-        {
-          if( asprintf(&str, "Sky %s: %g.", tmp,
-                       *((float *)(p->std->array)) )<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-        }
+        mkcatalog_outputs_keys_numeric(keylist, p->std->array, p->std->type,
+                                       stdname, stdvalcom, NULL);
       else
         {
-          if( asprintf(&str, "Sky %s: %s (hdu: %s).", tmp, p->usedstdfile,
-                       p->stdhdu)<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
+          gal_fits_key_write_filename(stdname, p->usedstdfile, keylist, 0);
+          gal_fits_key_write_filename(stdhdu, p->stdhdu, keylist, 0);
         }
-      gal_list_str_add(comments, str, 0);
     }
 
   /* Upper limit mask. */
   if(p->upmaskfile)
     {
-      if( asprintf(&str, "Upperlimit mask: %s (hdu: %s).", p->upmaskfile,
-                   p->upmaskhdu)<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
+      gal_fits_key_write_filename("INUPM", p->upmaskfile, keylist, 0);
+      gal_fits_key_write_filename("INUPMHDU", p->upmaskhdu, keylist, 0);
     }
 }
 
@@ -425,165 +431,145 @@ mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
 
 
 
-/* Write the similar information. */
-static gal_list_str_t *
-mkcatalog_outputs_same_start(struct mkcatalogparams *p, int o0c1,
-                             char *ObjClump)
+/* Write the output keywords. */
+static gal_fits_list_key_t *
+mkcatalog_outputs_keys(struct mkcatalogparams *p, int o0c1)
 {
-  char *str, *tstr;
-  double pixarea=NAN;
-  gal_list_str_t *comments=NULL;
-
-  if( asprintf(&str, "%s catalog of %s", o0c1 ? "Object" : "Clump",
-               PROGRAM_STRING)<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(&comments, str, 0);
-
-  /* If in a Git controlled directory and output isn't a FITS file (in
-     FITS, this will be automatically included). */
-  if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT && gal_git_describe())
-    {
-      if(asprintf(&str, "Working directory commit %s", gal_git_describe())<0)
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
-    }
-
-  /* Write the date. However, 'ctime' is going to put a new-line character
-     in the end of its string, so we are going to remove it manually. */
-  if( asprintf(&str, "%s started on %s", PROGRAM_NAME, ctime(&p->rawtime))<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  str[strlen(str)-1]='\0';
-  gal_list_str_add(&comments, str, 0);
+  float pixarea=NAN, fvalue;
+  gal_fits_list_key_t *keylist=NULL;
 
+  /* First, add the file names. */
+  mkcatalog_outputs_keys_infiles(p, &keylist);
 
-  /* Write the basic information. */
-  mkcatalog_write_inputs_in_comments(p, &comments, 1, 1);
+  /* Type of catalog. */
+  gal_fits_key_list_add_end(&keylist, GAL_TYPE_STRING, "CATTYPE", 0,
+                            o0c1 ? "clumps" : "objects", 0,
+                            "Type of catalog ('objects' or 'clumps').", 0,
+                            NULL, 0);
 
-
-  /* Write other supplementary information. */
-  if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
-    {
-      if( asprintf(&str, "--------- Supplementary information ---------")<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
-    }
+  /* Add project commit information when in a Git-controlled directory and
+     the output isn't a FITS file (in FITS, this will be automatically
+     included). */
+  if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT && gal_git_describe())
+    gal_fits_key_list_add_end(&keylist, GAL_TYPE_STRING, "COMMIT", 0,
+                              gal_git_describe(), 1,
+                              "Git commit in running directory.", 0,
+                              NULL, 0);
 
   /* Pixel area. */
   if(p->objects->wcs)
     {
       pixarea=gal_wcs_pixel_area_arcsec2(p->objects->wcs);
       if( isnan(pixarea)==0 )
-        {
-          if( asprintf(&str, "Pixel area (arcsec^2): %g", pixarea)<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-          gal_list_str_add(&comments, str, 0);
-        }
+        mkcatalog_outputs_keys_numeric(&keylist, &pixarea,
+                                       GAL_TYPE_FLOAT32, "PIXAREA",
+                                       "Pixel area of input image.",
+                                       "arcsec^2");
     }
 
-  /* Zeropoint magnitude */
-  if(p->hasmag)
-    {
-      if( asprintf(&str, "Zeropoint magnitude: %.4f", p->zeropoint)<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
-    }
+  /* Zeropoint magnitude. */
+  if( !isnan(p->zeropoint) )
+    mkcatalog_outputs_keys_numeric(&keylist, &p->zeropoint,
+                                   GAL_TYPE_FLOAT32, "ZEROPNT",
+                                   "Zeropoint used for magnitude.",
+                                   "mag");
 
-  /* Print surface brightness limits. */
+  /* Add the title for the keywords. */
+  gal_fits_key_list_title_add_end(&keylist, "Surface brightness limit (SBL)", 0);
+
+  /* Print surface brightness limit. */
   if( !isnan(p->medstd) && !isnan(p->sfmagnsigma) )
     {
+      /* Used noise value (per pixel) and multiple of sigma. */
+      mkcatalog_outputs_keys_numeric(&keylist, &p->medstd,
+                                     GAL_TYPE_FLOAT32, "SBLSTD",
+                                     "Pixel STD for surface brightness limit.",
+                                     NULL);
+      mkcatalog_outputs_keys_numeric(&keylist, &p->sfmagnsigma,
+                                     GAL_TYPE_FLOAT32, "SBLNSIG",
+                                     "Sigma multiple for surface brightness "
+                                     "limit.", NULL);
+
       /* Only print magnitudes if a zeropoint is given. */
       if( !isnan(p->zeropoint) )
         {
-          /* Per pixel. */
-          if( asprintf(&str, "%g sigma surface brightness (magnitude/pixel): "
-                       "%.3f", p->sfmagnsigma,
-                       gal_units_counts_to_mag(p->sfmagnsigma * p->medstd,
-                                               p->zeropoint) )<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-          gal_list_str_add(&comments, str, 0);
-
-          /* Requested projected area: if a pixel area could be measured (a
-             WCS was given), then also estimate the surface brightness over
-             one arcsecond^2. From the pixel area, we know how many pixels
-             are necessary to fill the requested projected area (in
-             arcsecond^2). We also know that as the number of samples
-             (pixels) increases (to N), the noise increases by sqrt(N), see
-             the full discussion in the book. */
-          if(!isnan(pixarea) && !isnan(p->sfmagarea))
+          /* Per pixel, Surface brightness limit magnitude. */
+          fvalue=gal_units_counts_to_mag(p->sfmagnsigma * p->medstd,
+                                         p->zeropoint);
+          mkcatalog_outputs_keys_numeric(&keylist, &fvalue,
+                                         GAL_TYPE_FLOAT32, "SBLMAGPX",
+                                         "Surface brightness limit per pixel.",
+                                         "mag/pix");
+
+          /* Only print the SBL in fixed area if a WCS is present. */
+          if(p->objects->wcs)
             {
-              /* Prepare the comment/information. */
-              if(p->sfmagarea==1.0f)
-                tstr=NULL;
-              else
-                if( asprintf(&tstr, "%g-", p->sfmagarea)<0 )
-                  error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-              if( asprintf(&str, "%g sigma surface brightness "
-                           "(magnitude/%sarcsec^2): %.3f", p->sfmagnsigma,
-                           tstr ? tstr : "",
-                           gal_units_counts_to_mag(p->sfmagnsigma
-                                                   * p->medstd
-                                                   * sqrt( p->sfmagarea / pixarea),
-                                                   p->zeropoint) ) <0 )
-                error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-
-              /* Add the final string/line to the catalog comments. */
-              gal_list_str_add(&comments, str, 0);
-
-              /* Clean up (if necessary). */
-              if (tstr)
-                {
-                  free(tstr);
-                  tstr=NULL;
-                }
+              /* Area used for measuring SBL. */
+              mkcatalog_outputs_keys_numeric(&keylist, &p->sfmagarea,
+                                             GAL_TYPE_FLOAT32, "SBLAREA",
+                                             "Area for surface brightness limit.",
+                                             "arcsec^2");
+
+              /* Per area, Surface brightness limit magnitude. */
+              fvalue=gal_units_counts_to_mag(p->sfmagnsigma
+                                             * p->medstd
+                                             / sqrt( p->sfmagarea
+                                                     * pixarea),
+                                             p->zeropoint);
+              mkcatalog_outputs_keys_numeric(&keylist, &fvalue,
+                                             GAL_TYPE_FLOAT32, "SBLMAG",
+                                             "Surf. bright. limit in SBLAREA.",
+                                             "mag/arcsec^2");
             }
+          else
+            gal_fits_key_list_fullcomment_add_end(&keylist, "Can't "
+                   "write surface brightness limiting magnitude values "
+                   "in fixed area ('SBLAREA' and 'SBLMAG' keywords) "
+                   "because input doesn't have a world coordinate system "
+                   "to identify the pixel scale.", 0);
         }
-
-      /* Notice: */
-      if( asprintf(&str, "Pixel STD for surface brightness calculation%s: %f",
-                   (!isnan(pixarea) && !isnan(p->sfmagarea))?"s":"",
-                   p->medstd)<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
+      else
+        gal_fits_key_list_fullcomment_add_end(&keylist, "Can't write "
+               "surface brightness limiting magnitude values (e.g., "
+               "'SBLMAG' or 'SBLMAGPX' keywords) because no "
+               "'--zeropoint' has been given.", 0);
     }
   else
     {
-      gal_checkset_allocate_copy("No surface brightness calcuations "
-                                 "because no STD image used.", &str);
-      gal_list_str_add(&comments, str, 0);
-      gal_checkset_allocate_copy("Ask for column that uses the STD image, "
-                                 "or '--forcereadstd'.", &str);
-      gal_list_str_add(&comments, str, 0);
+      gal_fits_key_list_fullcomment_add_end(&keylist, "No surface "
+             "brightness calculations (e.g., 'SBLMAG' or 'SBLMAGPX' "
+             "keywords) because STD image didn't have the 'MEDSTD' "
+             "keyword. There are two solutions: 1) Call with "
+             "'--forcereadstd'. 2) Measure the median noise level "
+             "manually (possibly with Gnuastro's Arithmetic program) "
+             "and put the value in the 'MEDSTD' keyword of the STD "
+             "image.", 0);
+      gal_fits_key_list_fullcomment_add_end(&keylist, "", 0);
     }
 
   /* The count-per-second correction. */
   if(p->cpscorr>1.0f)
-    {
-      if( asprintf(&str, "Counts-per-second correction: %.3f", p->cpscorr)<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
-    }
+    mkcatalog_outputs_keys_numeric(&keylist, &p->cpscorr,
+                                   GAL_TYPE_FLOAT32, "CPSCORR",
+                                   "Counts-per-second correction.",
+                                   NULL);
 
   /* Print upper-limit parameters. */
   if(p->upperlimit)
-    upperlimit_write_comments(p, &comments, 1);
+    upperlimit_write_keys(p, &keylist, 1);
 
-  /* Start column metadata. */
+  /* In plain-text outputs, put a title for column metadata. */
   if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
-    {
-      if( asprintf(&str, "--------- Table columns ---------")<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, str, 0);
-    }
+    gal_fits_key_list_title_add_end(&keylist, "Column metadata", 0);
 
-  /* Return the comments. */
-  return comments;
+  /* Return the list of keywords. */
+  return keylist;
 }
 
 
 
 
 
-
 /* Since all the measurements were done in parallel (and we didn't know the
   number of clumps per object a-priori), the clumps information is just
    written in as they are measured. Here, we'll sort the clump columns by
@@ -646,19 +632,20 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
 {
   size_t i, scounter;
   char str[200], *fname;
-  gal_list_str_t *comments;
+  gal_fits_list_key_t *keylist;
+  gal_list_str_t *comments=NULL;
   int outisfits=gal_fits_name_is_fits(p->objectsout);
 
   /* If a catalog is to be generated. */
   if(p->objectcols)
     {
       /* OBJECT catalog */
-      comments=mkcatalog_outputs_same_start(p, 0, "Detection");
+      keylist=mkcatalog_outputs_keys(p, 0);
 
       /* Reverse the comments list (so it is printed in the same order
          here), write the objects catalog and free the comments. */
       gal_list_str_reverse(&comments);
-      gal_table_write(p->objectcols, NULL, comments, p->cp.tableformat,
+      gal_table_write(p->objectcols, &keylist, NULL, p->cp.tableformat,
                       p->objectsout, "OBJECTS", 0);
       gal_list_str_free(comments, 1);
 
@@ -667,7 +654,7 @@ mkcatalog_write_outputs(struct mkcatalogparams *p)
       if(p->clumps)
         {
           /* Make the comments. */
-          comments=mkcatalog_outputs_same_start(p, 1, "Clumps");
+          keylist=mkcatalog_outputs_keys(p, 1);
 
           /* Write objects catalog
              ---------------------
diff --git a/bin/mkcatalog/mkcatalog.h b/bin/mkcatalog/mkcatalog.h
index 9cbdbd8..75e9c7a 100644
--- a/bin/mkcatalog/mkcatalog.h
+++ b/bin/mkcatalog/mkcatalog.h
@@ -45,9 +45,13 @@ struct mkcatalog_passparams
 };
 
 void
-mkcatalog_write_inputs_in_comments(struct mkcatalogparams *p,
-                                   gal_list_str_t **comments, int withsky,
-                                   int withstd);
+mkcatalog_outputs_keys_numeric(gal_fits_list_key_t **keylist, void *number,
+                               uint8_t type, char *nameliteral,
+                               char *commentliteral, char *unitliteral);
+
+void
+mkcatalog_outputs_keys_infiles(struct mkcatalogparams *p,
+                               gal_fits_list_key_t **keylist);
 
 void
 mkcatalog(struct mkcatalogparams *p);
diff --git a/bin/mkcatalog/ui.c b/bin/mkcatalog/ui.c
index 37bef56..d84c9c3 100644
--- a/bin/mkcatalog/ui.c
+++ b/bin/mkcatalog/ui.c
@@ -1423,10 +1423,20 @@ ui_preparations_read_keywords(struct mkcatalogparams *p)
           keys[0].array=&minstd;              keys[1].array=&p->medstd;
           gal_fits_key_read(p->usedstdfile, p->stdhdu, keys, 0, 0);
 
-          /* If the two keywords couldn't be read. We don't want to slow down
-             the user for the median (which needs sorting). So we'll just
-             calculate the minimum which is necessary for the 'p->cpscorr'. */
-          if(keys[1].status) p->medstd=NAN;
+          /* If the two keywords couldn't be read, we don't want to slow
+             down the user for the median (which needs sorting). So we'll
+             just calculate it if '--forcereadstd' is called. However, we
+             need the minimum for 'p->cpscorr'. */
+          if(keys[1].status)
+            {
+              if(p->forcereadstd)
+                {
+                  tmp=gal_statistics_median(p->std, 0);
+                  p->medstd=*((float *)(tmp->array));
+                }
+              else
+                p->medstd=NAN;
+            }
           if(keys[0].status)
             {
               /* Calculate the minimum STD. */
diff --git a/bin/mkcatalog/ui.h b/bin/mkcatalog/ui.h
index 572720e..3d390da 100644
--- a/bin/mkcatalog/ui.h
+++ b/bin/mkcatalog/ui.h
@@ -158,6 +158,7 @@ enum option_keys_enum
   UI_KEY_MAXIMUM,
   UI_KEY_CLUMPSMAGNITUDE,
   UI_KEY_UPPERLIMIT,
+  UI_KEY_UPPERLIMITSB,
   UI_KEY_UPPERLIMITONESIGMA,
   UI_KEY_UPPERLIMITSIGMA,
   UI_KEY_UPPERLIMITQUANTILE,
diff --git a/bin/mkcatalog/upperlimit.c b/bin/mkcatalog/upperlimit.c
index b1b27bc..e108afe 100644
--- a/bin/mkcatalog/upperlimit.c
+++ b/bin/mkcatalog/upperlimit.c
@@ -35,6 +35,8 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 #include <gnuastro/dimension.h>
 #include <gnuastro/statistics.h>
 
+#include <gnuastro-internal/checkset.h>
+
 #include "main.h"
 
 #include "ui.h"
@@ -277,82 +279,64 @@ upperlimit_random_position(struct mkcatalog_passparams *pp, gal_data_t *tile,
    used/necessary, so to avoid confusion, we won't write it.
 */
 void
-upperlimit_write_comments(struct mkcatalogparams *p,
-                          gal_list_str_t **comments, int withsigclip)
+upperlimit_write_keys(struct mkcatalogparams *p,
+                      gal_fits_list_key_t **keylist, int withsigclip)
 {
-  char *str;
-
-  if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
-    {
-      if(asprintf(&str, "--------- Upper-limit measurement ---------")<0)
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
-    }
-
-  if( asprintf(&str, "Number of usable random samples: %zu", p->upnum)<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(comments, str, 0);
-
+  /* Write a title for the upper-limit keywords. */
+  gal_fits_key_list_title_add_end(keylist, "Upper-limit (UP) parameters", 0);
+
+  /* Basic settings. */
+  gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT32, "UPNSIGMA", 0,
+                            &p->upnsigma, 0,
+                            "Multiple of sigma to measure upper-limit.", 0,
+                            NULL, 0);
+  gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPNUMBER", 0,
+                            &p->upnum, 0,
+                            "Number of usable random samples.", 0,
+                            "counter", 0);
+  gal_fits_key_list_add_end(keylist, GAL_TYPE_STRING, "UPRNGNAM", 0,
+                            (void *)(p->rng_name), 0,
+                            "Random number generator name.", 0, NULL, 0);
+  mkcatalog_outputs_keys_numeric(keylist, &p->rng_seed,
+                                 GAL_TYPE_ULONG, "UPRNGSEE",
+                                 "Random number generator seed.", NULL);
+
+  /* Range of upper-limit values. */
   if(p->uprange)
     {
-      switch(p->objects->ndim)
-        {
-        case 2:
-          if( asprintf(&str, "Range of random samples about target: "
-                       "%zu, %zu", p->uprange[1], p->uprange[0])<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-          break;
-        case 3:
-          if( asprintf(&str, "Range of random samples about target: %zu, "
-                       "%zu, %zu", p->uprange[2], p->uprange[1],
-                       p->uprange[0])<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-          break;
-        default:
-          error(EXIT_FAILURE, 0, "%s: a bug! Please contact us at %s to "
-                "address the problem. The value %zu is not recognized for "
-                "'p->input->ndim'", __func__, PACKAGE_BUGREPORT,
-                p->objects->ndim);
-        }
-      gal_list_str_add(comments, str, 0);
+      gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE1", 0,
+                                &p->uprange[p->objects->ndim-1], 0,
+                                "Range about target in axis 1.", 0,
+                                "pixels", 0);
+      gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE2", 0,
+                                &p->uprange[p->objects->ndim==2 ? 0 : 1], 0,
+                                "Range about target in axis 2.", 0,
+                                "pixels", 0);
+      if(p->objects->ndim==3)
+        gal_fits_key_list_add_end(keylist, GAL_TYPE_SIZE_T, "UPRANGE3", 0,
+                                  &p->uprange[0], 0,
+                                  "Range about target in axis 3.", 0,
+                                  "pixels", 0);
     }
 
-  if( asprintf(&str, "Random number generator name: %s", p->rng_name)<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(comments, str, 0);
-
-  if( asprintf(&str, "Random number generator seed: %lu", p->rng_seed)<0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(comments, str, 0);
-
+  /* If the upper-limit measurement included sigma-clipping. */
   if(withsigclip)
     {
-      if( asprintf(&str, "Multiple of STD used for sigma-clipping: %.3f",
-                   p->upsigmaclip[0])<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(comments, str, 0);
-
+      gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCMLTP", 0,
+                                &p->upsigmaclip[0], 0,
+                                "Multiple of STD used for sigma-clipping.", 0,
+                                NULL, 0);
       if(p->upsigmaclip[1]>=1.0f)
-        {
-          if( asprintf(&str, "Number of clips for sigma-clipping: %.0f",
-                       p->upsigmaclip[1])<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-        }
+        gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCNUM", 0,
+                                  &p->upsigmaclip[1], 0,
+                                  "Number of clips for sigma-clipping.", 0,
+                                  NULL, 0);
       else
-        {
-          if( asprintf(&str, "Tolerance level to sigma-clipping: %.3f",
-                       p->upsigmaclip[1])<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-        }
-      gal_list_str_add(comments, str, 0);
+        gal_fits_key_list_add_end(keylist, GAL_TYPE_FLOAT64, "UPSCTOL", 0,
+                                  &p->upsigmaclip[1], 0,
+                                  "Tolerance level to sigma-clipping.", 0,
+                                  NULL, 0);
 
-      if( p->oiflag[ OCOL_UPPERLIMIT_B ] )
-        {
-          if( asprintf(&str, "Multiple of sigma-clipped STD for upper-limit: "
-                       "%.3f", p->upnsigma)<0 )
-            error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-          gal_list_str_add(comments, str, 0);
-        }
     }
 }
 
@@ -367,8 +351,7 @@ upperlimit_write_check(struct mkcatalogparams *p, gal_list_sizet_t *check_x,
                        gal_list_f32_t *check_s)
 {
   float *sarr;
-  char *tmp=NULL, *tmp2=NULL;
-  gal_list_str_t *comments=NULL;
+  gal_fits_list_key_t *keylist=NULL;
   size_t *xarr, *yarr, *zarr=NULL, tnum, ttnum, num;
   gal_data_t *x=NULL, *y=NULL, *z=NULL, *s=NULL; /* To avoid warnings. */
 
@@ -418,35 +401,30 @@ upperlimit_write_check(struct mkcatalogparams *p, gal_list_sizet_t *check_x,
 
 
   /* Write exactly what object/clump this table is for. */
+  gal_fits_key_list_title_add_end(&keylist, "Target for upper-limit check", 0);
+  mkcatalog_outputs_keys_numeric(&keylist, &p->checkuplim[0],
+                                 GAL_TYPE_INT32, "UPCHKOBJ",
+                                 "Object label for upper-limit check target.",
+                                 NULL);
   if( p->checkuplim[1]!=GAL_BLANK_INT32 )
-    if( asprintf(&tmp2, ", Clump %d", p->checkuplim[1]) <0 )
-      error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  if( asprintf(&tmp, "Upperlimit distribution for Object %d%s",
-               p->checkuplim[0],
-               ( p->checkuplim[1]==GAL_BLANK_INT32
-                 ? "" : tmp2) ) <0 )
-    error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-  gal_list_str_add(&comments, tmp, 0);
-  if(tmp2) {free(tmp2); tmp2=NULL;}
-
-
-  /* Write the basic info, and conclude the comments. */
-  mkcatalog_write_inputs_in_comments(p, &comments, 0, 0);
-  upperlimit_write_comments(p, &comments, 0);
+    mkcatalog_outputs_keys_numeric(&keylist, &p->checkuplim[1],
+                                   GAL_TYPE_INT32, "UPCHKCLU",
+                                   "Clump label for upper-limit check target.",
+                                   NULL);
+
+
+  /* Write the basic info, and conclude the keywords. */
+  mkcatalog_outputs_keys_infiles(p, &keylist);
+  upperlimit_write_keys(p, &keylist, 0);
   if(p->cp.tableformat==GAL_TABLE_FORMAT_TXT)
-    {
-      if( asprintf(&tmp, "--------- Table columns ---------")<0 )
-        error(EXIT_FAILURE, 0, "%s: asprintf allocation", __func__);
-      gal_list_str_add(&comments, tmp, 0);
-    }
+    gal_fits_key_list_title_add_end(&keylist, "Column metadata", 0);
 
 
   /* Define a list from the containers and write them into a table. */
   x->next=y;
   if(check_z) { y->next=z; z->next=s; }
   else        { y->next=s;            }
-  gal_list_str_reverse(&comments);
-  gal_table_write(x, NULL, comments, p->cp.tableformat, p->upcheckout,
+  gal_table_write(x, &keylist, NULL, p->cp.tableformat, p->upcheckout,
                   "UPPERLIMIT_CHECK", 0);
 
   /* Inform the user. */
@@ -488,12 +466,33 @@ upperlimit_measure(struct mkcatalog_passparams *pp, int32_t clumplab,
         {
           switch(column->status)
             {
-            /* Columns that depend on the sigma of the distribution. */
-            case UI_KEY_UPPERLIMIT:
-            case UI_KEY_UPPERLIMITMAG:
-            case UI_KEY_UPPERLIMITSIGMA:
-            case UI_KEY_UPPERLIMITONESIGMA:
+            /* Quantile column. */
+            case UI_KEY_UPPERLIMITQUANTILE:
+
+              /* Also only necessary once (if requested multiple times). */
+              if(qfunc==NULL)
+                {
+                  /* Similar to the case for sigma-clipping, we'll need to
+                     keep the size here also. */
+                  init_size=pp->up_vals->size;
+                  sum=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &one, NULL, 0,
+                                     -1, 1, NULL, NULL, NULL);
+                  ((float *)(sum->array))[0]=o[clumplab?CCOL_SUM:OCOL_SUM];
+                  qfunc=gal_statistics_quantile_function(pp->up_vals, sum, 1);
+
+                  /* Fill in the column. */
+                  col = clumplab ? CCOL_UPPERLIMIT_Q : OCOL_UPPERLIMIT_Q;
+                  pp->up_vals->size=pp->up_vals->dsize[0]=init_size;
+                  o[col] = ((double *)(qfunc->array))[0];
+
+                  /* Clean up. */
+                  gal_data_free(sum);
+                  gal_data_free(qfunc);
+                }
+              break;
 
+            /* Columns that depend on the sigma of the distribution. */
+            default:
               /* We only need to do this once, but the columns can be
                  requested in any order. */
               if(sigclip==NULL)
@@ -525,31 +524,6 @@ upperlimit_measure(struct mkcatalog_passparams *pp, int32_t clumplab,
                   gal_data_free(sigclip);
                 }
               break;
-
-            /* Quantile column. */
-            case UI_KEY_UPPERLIMITQUANTILE:
-
-              /* Also only necessary once (if requested multiple times). */
-              if(qfunc==NULL)
-                {
-                  /* Similar to the case for sigma-clipping, we'll need to
-                     keep the size here also. */
-                  init_size=pp->up_vals->size;
-                  sum=gal_data_alloc(NULL, GAL_TYPE_FLOAT32, 1, &one, NULL, 0,
-                                     -1, 1, NULL, NULL, NULL);
-                  ((float *)(sum->array))[0]=o[clumplab?CCOL_SUM:OCOL_SUM];
-                  qfunc=gal_statistics_quantile_function(pp->up_vals, sum, 1);
-
-                  /* Fill in the column. */
-                  col = clumplab ? CCOL_UPPERLIMIT_Q : OCOL_UPPERLIMIT_Q;
-                  pp->up_vals->size=pp->up_vals->dsize[0]=init_size;
-                  o[col] = ((double *)(qfunc->array))[0];
-
-                  /* Clean up. */
-                  gal_data_free(sum);
-                  gal_data_free(qfunc);
-                }
-              break;
             }
         }
     }
diff --git a/bin/mkcatalog/upperlimit.h b/bin/mkcatalog/upperlimit.h
index 666c3a0..3ddb5ca 100644
--- a/bin/mkcatalog/upperlimit.h
+++ b/bin/mkcatalog/upperlimit.h
@@ -24,8 +24,8 @@ along with Gnuastro. If not, see <http://www.gnu.org/licenses/>.
 #define UPPERLIMIT_H
 
 void
-upperlimit_write_comments(struct mkcatalogparams *p,
-                          gal_list_str_t **comments, int withsigclip);
+upperlimit_write_keys(struct mkcatalogparams *p,
+                      gal_fits_list_key_t **keylist, int withsigclip);
 
 void
 upperlimit_calculate(struct mkcatalog_passparams *pp);
diff --git a/doc/gnuastro.texi b/doc/gnuastro.texi
index de00c64..45ad0c6 100644
--- a/doc/gnuastro.texi
+++ b/doc/gnuastro.texi
@@ -273,12 +273,9 @@ Detecting large extended targets
 
 * Downloading and validating input data::  How to get and check the input data.
 * NoiseChisel optimization::    Detect the extended and diffuse wings.
+* Image surface brightness limit::  Standards to quantify the noise level.
 * Achieved surface brightness level::  Calculate the outer surface brightness.
-
-Downloading and validating input data
-
-* NoiseChisel optimization::    Optimize NoiseChisel to dig very deep.
-* Achieved surface brightness level::  Measure how much you detected.
+* Extract clumps and objects::  Find sub-structure over the detections.
 
 Installation
 
@@ -525,6 +522,7 @@ Invoking Segment
 MakeCatalog
 
 * Detection and catalog production::  Discussing why/how to treat these separately
+* Brightness flux magnitude::   More on magnitudes, surface brightness, etc.
 * Quantifying measurement limits::  For comparing different catalogs.
 * Measuring elliptical parameters::  Estimating elliptical parameters.
 * Adding new columns to MakeCatalog::  How to add new columns.
@@ -550,7 +548,6 @@ MakeProfiles
 
 * Modeling basics::             Astronomical modeling basics.
 * If convolving afterwards::    Considerations for convolving later.
-* Brightness flux magnitude::   About these measures of energy.
 * Profile magnitude::           Definition of total profile magnitude.
 * Invoking astmkprof::          Inputs and Options for MakeProfiles.
 
@@ -601,21 +598,21 @@ Invoking CosmicCalculator
 Installed scripts
 
 * Sort FITS files by night::    Sort many files by date.
-* SAO DS9 region files from table::  Create ds9 region file from a table.
 * Generate radial profile::     Radial profile of an object in an image.
+* SAO DS9 region files from table::  Create ds9 region file from a table.
 
 Sort FITS files by night
 
 * Invoking astscript-sort-by-night::  Inputs and outputs to this script.
 
-SAO DS9 region files from table
-
-* Invoking astscript-make-ds9-reg::  How to call astscript-make-ds9-reg
-
 Generate radial profile
 
 * Invoking astscript-radial-profile::  How to call astscript-radial-profile
 
+SAO DS9 region files from table
+
+* Invoking astscript-make-ds9-reg::  How to call astscript-make-ds9-reg
+
 Library
 
 * Review of library fundamentals::  Guide on libraries and linking.
@@ -4244,7 +4241,9 @@ Due to its more peculiar low surface brightness structure/features, we'll focus
 @menu
 * Downloading and validating input data::  How to get and check the input data.
 * NoiseChisel optimization::    Detect the extended and diffuse wings.
+* Image surface brightness limit::  Standards to quantify the noise level.
 * Achieved surface brightness level::  Calculate the outer surface brightness.
+* Extract clumps and objects::  Find sub-structure over the detections.
 @end menu
 
 @node Downloading and validating input data, NoiseChisel optimization, Detecting large extended targets, Detecting large extended targets
@@ -4329,13 +4328,7 @@ Here, we don't need the compressed file any more, so we'll just let @command{bun
 $ bunzip2 r.fits.bz2
 @end example
 
-
-@menu
-* NoiseChisel optimization::    Optimize NoiseChisel to dig very deep.
-* Achieved surface brightness level::  Measure how much you detected.
-@end menu
-
-@node NoiseChisel optimization, Achieved surface brightness level, Downloading and validating input data, Detecting large extended targets
+@node NoiseChisel optimization, Image surface brightness limit, Downloading and validating input data, Detecting large extended targets
 @subsection NoiseChisel optimization
 In @ref{Detecting large extended targets} we downloaded the single exposure SDSS image.
 Let's see how NoiseChisel operates on it with its default parameters:
@@ -4579,19 +4572,251 @@ However, given the many problems in existing ``smart'' solutions, such automatic
 So even when they are implemented, we would strongly recommend quality checks for a robust analysis.
 @end cartouche
 
-@node Achieved surface brightness level,  , NoiseChisel optimization, Detecting large extended targets
-@subsection Achieved surface brightness level
+@node Image surface brightness limit, Achieved surface brightness level, NoiseChisel optimization, Detecting large extended targets
+@subsection Image surface brightness limit
+@cindex Surface brightness limit
+@cindex Limit, surface brightness
 In @ref{NoiseChisel optimization} we showed how to customize NoiseChisel for a single-exposure SDSS image of the M51 group.
-Let's measure how deep we carved the signal out of noise.
-For this measurement, we'll need to estimate the average flux on the outer edges of the detection.
-Fortunately all this can be done with a few simple commands (and no higher-level language mini-environments like Python or IRAF) using @ref{Arithmetic} and @ref{MakeCatalog}.
+When presenting your detection results in a paper or scientific conference, usually the first thing that someone will ask (if you don't explicitly say it!) is the dataset's @emph{surface brightness limit} (a standard measure of the noise level), and your target's surface brightness (a measure of the signal, either in the center or outskirts, depending on context).
+For more on the basics of these important concepts, please see @ref{Quantifying measurement limits}.
+Here, we'll measure these values for this image.
+
+Let's start by masking all the detected pixels and having a look at the noise distribution with the @command{astarithmetic} and @command{aststatistics} commands below.
+
+@example
+$ astarithmetic r_detected.fits -hINPUT-NO-SKY set-in \
+                r_detected.fits -hDETECTIONS set-det \
+                in det nan where -odet-masked.fits
+$ ds9 det-masked.fits
+$ aststatistics det-masked.fits
+@end example
+
+@noindent
+From the ASCII histogram, we see that the distribution is roughly symmetric.
+We can also quantify this by measuring the skewness (difference between mean and median, divided by the standard deviation):
+
+@example
+$ aststatistics det-masked.fits --mean --median --std \
+                | awk '@{print ($1-$2)/$3@}'
+@end example
+
+@noindent
+This shows that the mean is larger than the median by @mymath{0.08\sigma}: in other words, as we saw in @ref{NoiseChisel optimization}, a very small residual signal still remains in the undetected regions (improving it was left to you as an exercise).
+So let's continue with this value.
+Now, we will use the masked image and the surface brightness limit equation in @ref{Quantifying measurement limits} to measure the @mymath{3\sigma} surface brightness limit over an area of @mymath{25 \rm{arcsec}^2}:
+
+@example
+$ nsigma=3
+$ zeropoint=22.5
+$ areaarcsec2=25
+$ std=$(aststatistics det-masked.fits --sigclip-std)
+$ pixarcsec2=$(astfits det-masked.fits --pixelscale --quiet \
+                       | awk '@{print $3*3600*3600@}')
+$ astarithmetic --quiet $nsigma $std x \
+                $areaarcsec2 $pixarcsec2 x \
+                sqrt / $zeropoint counts-to-mag
+26.0241
+@end example
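The calculation above can be condensed into a small, reusable shell function; this is just an editorial sketch (the @code{sblimit} name and the numbers in the demonstration call are illustrative assumptions, not outputs of the commands above):

```shell
# Sketch: n-sigma surface brightness limit from a per-pixel noise level,
#   SB = zeropoint - 2.5*log10( nsigma*std / sqrt(area*pixarcsec2) )
# The function name and all numbers below are only for illustration.
sblimit () {
  # $1=nsigma  $2=std  $3=area (arcsec^2)  $4=pixel area (arcsec^2)  $5=zeropoint
  awk -v n="$1" -v s="$2" -v a="$3" -v p="$4" -v zp="$5" \
      'BEGIN { print zp - 2.5*log(n*s/sqrt(a*p))/log(10) }'
}

# Example call with made-up std and pixel-area values:
sblimit 3 0.0204 25 0.155 22.5
```

Replacing the made-up @code{std} and pixel-area values with the ones measured above reproduces the @code{counts-to-mag} result of the Arithmetic call.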
+
+The customizable steps above are good for any type of mask.
+For example, your field of view may contain a very deep part, so you need to mask all the shallow parts @emph{as well as} the detections before these steps.
+But when your image is flat (like this one), there is a much simpler method to obtain the same value through MakeCatalog (when the standard deviation image is made by NoiseChisel).
+NoiseChisel has already calculated the minimum (@code{MINSTD}), maximum (@code{MAXSTD}) and median (@code{MEDSTD}) standard deviation within the tiles during its processing and has stored them as FITS keywords within the @code{SKY_STD} HDU.
+You can see them by piping all the keywords in this HDU into @command{grep}.
+In Grep, each @samp{.} represents one character that can be anything, so @code{M..STD} will match all three keywords mentioned above.
+
+@example
+$ astfits r_detected.fits --hdu=SKY_STD | grep 'M..STD'
+@end example
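As a tiny, self-contained illustration of that regular expression (an editorial sketch, independent of any FITS file):

```shell
# Each '.' matches exactly one character, so 'M..STD' matches
# MINSTD, MAXSTD and MEDSTD, but not OTHER.
printf 'MINSTD\nMAXSTD\nMEDSTD\nOTHER\n' | grep 'M..STD'
```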
+
+The @code{MEDSTD} value is very similar to the standard deviation derived above, so we can safely use it instead of having to mask and run Statistics.
+In fact, MakeCatalog also uses this keyword and will report the dataset's @mymath{n\sigma} surface brightness limit as keywords in the output (not as measurement columns, since it is related to the noise, not labeled signal):
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --output=sbl.fits \
+               --forcereadstd --ids
+@end example
+
+@noindent
+Before looking into the measured surface brightness limits, let's first review some important points about this call to MakeCatalog:
+@itemize
+@item
+We are only concerned with the noise (not the signal), so we don't ask for any further measurements, because they can unnecessarily slow it down.
+However, MakeCatalog requires at least one column, so we'll only ask for the @option{--ids} column (which doesn't need any measurement!).
+The output catalog will therefore have a single row and a single column, with 1 as its value@footnote{Recall that NoiseChisel's output is a binary image: 0-valued pixels are noise and 1-valued pixels are signal.
+NoiseChisel doesn't identify sub-structure over the signal; this is the job of Segment, see @ref{Extract clumps and objects}.}.
+@item
+If we don't ask for any noise-related column (for example the signal-to-noise ratio column with @option{--sn}, among other noise-related columns), MakeCatalog is not going to read the noise standard deviation image (again, to speed up its operation when it is redundant).
+We are thus using the @option{--forcereadstd} option (short for ``force read standard deviation image'') here so it is ready for the surface brightness limit measurements that are written as keywords.
+@end itemize
+
+With the command below you can see all the keywords that were measured with the table.
+Notice the group of keywords that are under the ``Surface brightness limit (SBL)'' title.
+
+@example
+$ astfits sbl.fits -h1
+@end example
+
+@noindent
+Since all the keywords of interest here start with @code{SBL}, we can get a cleaner view with this command.
+
+@example
+$ astfits sbl.fits -h1 | grep ^SBL
+@end example
+
+Notice how @code{SBLSTD} has the same value as NoiseChisel's @code{MEDSTD} above.
+Using @code{SBLSTD}, MakeCatalog has determined the @mymath{n\sigma} surface brightness limiting magnitude in these header keywords.
+The multiple of @mymath{\sigma}, or @mymath{n}, is the value of the @code{SBLNSIG} keyword, which you can change with the @option{--sfmagnsigma} option.
+The surface brightness limiting magnitudes within a pixel and within a pixel-agnostic area of @code{SBLAREA} arcsec@mymath{^2} are stored in the @code{SBLMAGPX} and @code{SBLMAG} keywords respectively.
+
+@cindex SDSS
+@cindex Nanomaggy
+@cindex Zero point magnitude
+You will notice that the two surface brightness limiting magnitudes above have values around 3 and 4 (which is not correct!).
+This is because we haven't given a zero point magnitude to MakeCatalog, so it uses the default value of @code{0}.
+SDSS image pixel values are calibrated in units of ``nanomaggy'', which are defined to have a zero point magnitude of 22.5@footnote{From @url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
+So with the first command below we give the zero point value, and with the second we can see the surface brightness limiting magnitudes with the correct values (around 25 and 26).
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --zeropoint=22.5 \
+               --output=sbl.fits --forcereadstd --ids
+$ astfits sbl.fits -h1 | grep ^SBL
+@end example
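As a quick sanity check of that zero point definition (an editorial aside, not one of the tutorial's commands): since the magnitude of a flux @mymath{f} is @mymath{zp - 2.5\log_{10}(f)}, a source of exactly 1 nanomaggy must have a magnitude equal to the 22.5 zero point.

```shell
# Magnitude of a 1 nanomaggy source with the SDSS zero point:
# log10(1)=0, so the result is exactly the zero point, 22.5.
awk -v f=1 -v zp=22.5 'BEGIN { printf "%.1f\n", zp - 2.5*log(f)/log(10) }'
```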
+
+As you see from @code{SBLNSIG} and @code{SBLAREA}, the default multiple of sigma is 1 and the default area is 1 arcsec@mymath{^2}.
+Usually higher values are used for these two parameters.
+Following the manual example we did above, you can ask for the multiple of sigma to be 3 and the area to be 25 arcsec@mymath{^2}:
+
+@example
+$ astmkcatalog r_detected.fits -hDETECTIONS --zeropoint=22.5 \
+               --output=sbl.fits --sfmagarea=25 --sfmagnsigma=3 \
+               --forcereadstd --ids
+$ astfits sbl.fits -h1 | awk '/^SBLMAG /@{print $3@}'
+26.02296
+@end example
+
+You see that the value is identical to the custom surface brightness limiting magnitude we measured above (a difference of @mymath{0.00114} magnitudes is negligible, and hundreds of times smaller than the typical errors in the zero point magnitude or magnitude measurements).
+But it is much easier to have MakeCatalog do this measurement, because these values will be appended (as keywords) into your final catalog of objects within that image.
+
+@cartouche
+@noindent
+@strong{Custom STD for MakeCatalog's surface brightness limit:} You can manually change/set the value of the @code{MEDSTD} keyword in your input STD image with @ref{Fits}:
+
+@example
+$ std=$(aststatistics masked.fits --sigclip-std)
+$ astfits noisechisel.fits -hSKY_STD --update=MEDSTD,$std
+@end example
+
+With this change, MakeCatalog will use your custom standard deviation for the surface brightness limit.
+This is necessary in scenarios where your image has multiple depths and, during your masking, you also mask the shallow regions (as well as the detections, of course).
+@end cartouche
+
+We have successfully measured the image's @mymath{3\sigma} surface brightness limiting magnitude over 25 arcsec@mymath{^2}.
+However, as discussed in @ref{Quantifying measurement limits}, this value is just an extrapolation of the per-pixel standard deviation.
+Issues like correlated noise will cause the real noise over a large area to be different.
+So for a more robust measurement, let's use the upper-limit magnitude of a similarly sized region.
+For more on the upper-limit magnitude, see the respective item in @ref{Quantifying measurement limits}.
+
+In summary, the upper-limit measurements involve randomly placing the footprint of an object in undetected parts of the image many times.
+This results in a random distribution of brightness measurements; the standard deviation of that distribution is then converted into magnitudes.
+To be comparable with the results above, let's make a circular aperture that has an area of 25 arcsec@mymath{^2} (thus with a radius of @mymath{2.82095} arcsec).
+
+@example
+zeropoint=22.5
+r_arcsec=2.82095
+
+## Convert the radius (in arcseconds) to pixels.
+r_pixel=$(astfits r_detected.fits --pixelscale -q \
+                  | awk '@{print '$r_arcsec'/($1*3600)@}')
+
+## Make a circular aperture at pixel (100,100); the position is irrelevant.
+echo "1 100 100 5 $r_pixel 0 0 1 1 1" \
+     | astmkprof --background=r_detected.fits \
+                 --clearcanvas --mforflatpix --type=uint8 \
+                 --output=lab.fits
+
+## Do the upper-limit measurement, ignoring all NoiseChisel's
+## detections as a mask for the upper-limit measurements.
+astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
+             --sfmagarea=25 --sfmagnsigma=3 --forcereadstd \
+             --valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
+             --upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
+             --upnsigma=3 --checkuplim=1 --upnum=1000 \
+             --ids --upperlimitsb
+@end example
+
+The @file{sbl.fits} catalog now contains the upper-limit surface brightness for a circle with an area of 25 arcsec@mymath{^2}.
+You can check the value with the command below, but the great thing is that now you have both the surface brightness limiting magnitude in the headers discussed above, and the upper-limit surface brightness within the table.
+You can also add more profiles with different shapes and sizes if necessary.
+Of course, you can also use @option{--upperlimitsb} on your actual science objects and clumps to get an object-specific or clump-specific value.
+
+@example
+$ asttable sbl.fits -cUPPERLIMIT_SB
+25.9119
+@end example
+
+@cindex Random number generation
+@cindex Seed, random number generator
+@noindent
+You will get a slightly different value from the command above.
+In fact, if you run the MakeCatalog command again and look at the measured upper-limit surface brightness, it will be slightly different from your first trial!
+Please try exactly the same MakeCatalog command above a few times to see how it changes.
+
+This is because of the @emph{random} factor in the upper-limit measurements: every time you run it, different random points will be checked, resulting in a slightly different distribution.
+You can decrease the random scatter by increasing the number of random checks (for example setting @option{--upnum=100000}, compared to 1000 in the command above).
+But this will be slower and the results still won't be exactly reproducible.
+The only way to ensure you get an identical result later is to fix the random number generator function and seed as in the command below@footnote{You can use any integer for the seed. One recommendation is to run MakeCatalog without @option{--envseed} once and use the randomly generated seed that is printed on the terminal.}.
+This is a very important point regarding any statistical process involving random numbers; please see @ref{Generating random numbers}.
+
+@example
+export GSL_RNG_TYPE=ranlxs1
+export GSL_RNG_SEED=1616493518
+astmkcatalog lab.fits -h1 --zeropoint=$zeropoint -osbl.fits \
+             --sfmagarea=25 --sfmagnsigma=3 --forcereadstd \
+             --valuesfile=r_detected.fits --valueshdu=INPUT-NO-SKY \
+             --upmaskfile=r_detected.fits --upmaskhdu=DETECTIONS \
+             --upnsigma=3 --checkuplim=1 --upnum=1000 \
+             --ids --upperlimitsb --envseed
+@end example
+
+But where do all the random apertures of the upper-limit measurement fall on the image?
+It is good to actually inspect their location to get a better understanding of the process and also to detect possible bugs/biases.
+When MakeCatalog is run with the @option{--checkuplim} option, it will print all the random locations and their measured brightness as a table in a file with the suffix @file{_upcheck.fits}.
+With the first command below you can use Gnuastro's @command{asttable} and @command{astscript-make-ds9-reg} to convert the successful aperture locations into a DS9 region file, and with the second you can load the region file onto the detections and sky-subtracted image to visually see where they are.
+
+@example
+## Create a DS9 region file from the check table (activated
+## with '--checkuplim')
+asttable lab_upcheck.fits --noblank=RANDOM_SUM \
+         | astscript-make-ds9-reg -c1,2 --mode=img \
+                                  --radius=$r_pixel
+
+## Have a look at the regions in relation with NoiseChisel's
+## detections.
+ds9 r_detected.fits[INPUT-NO-SKY] -regions load ds9.reg
+ds9 r_detected.fits[DETECTIONS] -regions load ds9.reg
+@end example
+
+In this example, we were looking at a single-exposure image that has no correlated noise.
+Because of this, the surface brightness limit and the upper-limit surface brightness are very close.
+They will have a bigger difference on deep datasets with stronger correlated noise (which are the result of stacking many individual exposures).
+As an exercise, please try measuring the upper-limit surface brightness level and surface brightness limit for the deep HST data that we used in the previous tutorial (@ref{General program usage tutorial}).
+
+@node Achieved surface brightness level, Extract clumps and objects, Image surface brightness limit, Detecting large extended targets
+@subsection Achieved surface brightness level
+
+In @ref{NoiseChisel optimization} we customized NoiseChisel for a single-exposure SDSS image of the M51 group, and in @ref{Image surface brightness limit} we measured the surface brightness limit and the upper-limit surface brightness level (which are both measures of the noise level).
+In this section, let's do some measurements on the outer-most edges of the M51 group to see how they relate to the noise measurements found in the previous section.
 
 @cindex Opening
-First, let's separate each detected region, or give a unique label/counter to all the connected pixels of NoiseChisel's detection map:
+For this measurement, we'll need to estimate the average flux on the outer edges of the detection.
+Fortunately all this can be done with a few simple commands using @ref{Arithmetic} and @ref{MakeCatalog}.
+First, let's separate each detected region, or give a unique label/counter to all the connected pixels of NoiseChisel's detection map with the command below.
+Recall that with the @code{set-} operator, the popped operand will be given a name (@code{det} in this case) for easy usage later.
 
 @example
-$ det="r_detected.fits -hDETECTIONS"
-$ astarithmetic $det 2 connected-components -olabeled.fits
+$ astarithmetic r_detected.fits -hDETECTIONS set-det \
+                det 2 connected-components -olabeled.fits
 @end example
 
 You can find the label of the main galaxy visually (by opening the image and hovering your mouse over the M51 group's label).
@@ -4619,8 +4844,16 @@ $ id=$(asttable cat.fits --sort=AREA_FULL --tail=1 --column=OBJ_ID)
 $ echo $id
 @end example
 
+@noindent
+We can now use the @code{id} variable to reject all other detections:
+
+@example
+$ astarithmetic labeled.fits $id eq -oonly-m51.fits
+@end example
+
+Open the image and have a look.
 To separate the outer edges of the detections, we'll need to ``erode'' the M51 group detection.
-We'll erode three times (to have more pixels and thus less scatter), using a maximum connectivity of 2 (8-connected neighbors).
+So in the same Arithmetic command as above, we'll erode three times (to have more pixels and thus less scatter), using a maximum connectivity of 2 (8-connected neighbors).
 We'll then save the output in @file{eroded.fits}.
 
 @example
@@ -4631,8 +4864,7 @@ $ astarithmetic labeled.fits $id eq 2 erode 2 erode 2 erode \
 @noindent
 In @file{labeled.fits}, we can now set all the 1-valued pixels of 
@file{eroded.fits} to 0 using Arithmetic's @code{where} operator added to the 
previous command.
 We'll need the pixels of the M51 group in @code{labeled.fits} two times: once 
to do the erosion, another time to find the outer pixel layer.
-To do this (and be efficient and more readable) we'll use the @code{set-i} 
operator.
-In the command below, it will save/set/name the pixels of the M51 group as the 
`@code{i}'.
+To do this (and be efficient and more readable) we'll use the @code{set-i} 
operator (to give this image the name `@code{i}').
 In this way we can use it any number of times afterwards, while only reading 
it from disk and finding M51's pixels once.
 
 @example
@@ -4645,58 +4877,74 @@ You'll see that the detected edge of the M51 group is 
now clearly visible.
 You can use @file{edge.fits} to mark (set to blank) this boundary on the input 
image and get a visual feeling of how far it extends:
 
 @example
-$ astarithmetic r.fits edge.fits nan where -oedge-masked.fits -h0
+$ astarithmetic r.fits -h0 edge.fits nan where -oedge-masked.fits
 @end example
 
To quantify how deep we have detected the low-surface brightness regions (in 
units of signal-to-noise ratio), we'll use the command below.
In short, for every non-zero pixel of @file{edge.fits}, it divides the 
Sky-subtracted input (first extension of NoiseChisel's output) by the Sky 
standard deviation of that same pixel.
 This will give us a signal-to-noise ratio image.
 The mean value of this image shows the level of surface brightness that we 
have achieved.
-
 You can also break the command below into multiple calls to Arithmetic and 
create temporary files to understand it better.
 However, if you have a look at @ref{Reverse polish notation} and 
@ref{Arithmetic operators}, you should be able to easily understand what your 
computer does when you run this command@footnote{@file{edge.fits} (extension 
@code{1}) is a binary (0 or 1 valued) image.
-Applying the @code{not} operator on it, just flips all its pixels.
-Through the @code{where} operator, we are setting all the newly 1-valued 
pixels in @file{r_detected.fits} (extension @code{INPUT-NO-SKY}) to NaN/blank.
-In the second line, we are dividing all the non-blank values by 
@file{r_detected.fits} (extension @code{SKY_STD}).
-This gives the signal-to-noise ratio for each of the pixels on the boundary.
+Applying the @code{not} operator on it just flips all its pixels (from 
@code{0} to @code{1} and vice-versa).
+Using the @code{where} operator, we are then setting all the newly 1-valued 
pixels (pixels that aren't on the edge) to NaN/blank in the sky-subtracted 
input image (@file{r_detected.fits}, extension @code{INPUT-NO-SKY}, which we 
call @code{skysub}).
+We are then dividing all the non-blank pixels (only those on the edge) by the 
sky standard deviation (@file{r_detected.fits}, extension @code{SKY_STD}, which 
we called @code{skystd}).
+This gives the signal-to-noise ratio (S/N) for each of the pixels on the 
boundary.
 Finally, with the @code{meanvalue} operator, we are taking the mean value of 
all the non-blank pixels and reporting that as a single number.}.
 
 @example
-$ edge="edge.fits -h1"
-$ skystd="r_detected.fits -hSKY_STD"
-$ skysub="r_detected.fits -hINPUT-NO-SKY"
-$ astarithmetic $skysub $skystd / $edge not nan where \
-                meanvalue --quiet
+$ astarithmetic edge.fits -h1                  set-edge \
+                r_detected.fits -hSKY_STD      set-skystd \
+                r_detected.fits -hINPUT-NO-SKY set-skysub \
+                skysub skystd / edge not nan where meanvalue --quiet
 @end example
 
 @cindex Surface brightness
-We have thus detected the wings of the M51 group down to roughly 1/3rd of the 
noise level in this image! But the signal-to-noise ratio is a relative 
measurement.
+We have thus detected the wings of the M51 group down to roughly 1/3rd of the 
noise level in this image, which is a very good achievement!
+But the per-pixel S/N is a relative measurement.
 Let's also measure the depth of our detection in absolute surface brightness 
units; or magnitudes per square arc-seconds (see @ref{Brightness flux 
magnitude}).
-Fortunately Gnuastro's MakeCatalog does this operation easily.
-SDSS image pixel values are calibrated in units of ``nanomaggy'', so the zero 
point magnitude is 22.5@footnote{From 
@url{https://www.sdss.org/dr12/algorithms/magnitudes}}.
+We'll also ask for the S/N and magnitude of the full edge we have defined.
+Fortunately doing this is very easy with Gnuastro's MakeCatalog:
 
 @example
-astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
-             --zeropoint=22.5 --ids --surfacebrightness
-asttable edge_cat.fits
+$ astmkcatalog edge.fits -h1 --valuesfile=r_detected.fits \
+               --zeropoint=22.5 --ids --surfacebrightness --sn \
+               --magnitude
+$ asttable edge_cat.fits
+1      25.6971       55.2406       15.8994
 @end example
 
-We have thus reached an outer surface brightness of @mymath{25.69} 
magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this 
single exposure SDSS image!
+We have thus reached an outer surface brightness of @mymath{25.70} 
magnitudes/arcsec@mymath{^2} (second column in @file{edge_cat.fits}) on this 
single exposure SDSS image!
+This is very similar to the surface brightness limit measured in @ref{Image 
surface brightness limit} (which is a big achievement!).
+But another point in the result above is very interesting: the total S/N of 
the edge is @mymath{55.24} with a total edge magnitude@footnote{You can run 
MakeCatalog on @file{only-m51.fits} instead of @file{edge.fits} to see the 
full magnitude of the M51 group in this image.} of 15.90!
+This is very large for such a faint signal (recall that the mean S/N per 
pixel was 0.32) and shows a very important point in the study of galaxies: 
while the per-pixel signal in their outer edges may be very faint (and 
invisible to the eye in the noise), a lot of signal lies deeply buried in the 
noise.
+
In interpreting this value, you should just keep in mind that NoiseChisel 
works based on the contiguity of signal in the pixels.
-Therefore the larger the object (with a similarly diffuse emission), the 
deeper NoiseChisel can carve it out of the noise.
+Therefore the larger the object, the deeper NoiseChisel can carve it out of 
the noise (for the same outer surface brightness).
In other words, this reported depth is the depth we have reached for this 
object in this dataset, processed with this particular NoiseChisel 
configuration.
 If the M51 group in this image was larger/smaller than this (the field of view 
was smaller/larger), or if the image was from a different instrument, or if we 
had used a different configuration, we would go deeper/shallower.
 
-To continue your analysis of such datasets with extended emission, you can use 
@ref{Segment} to identify all the ``clumps'' over the diffuse regions: 
background galaxies and foreground stars.
+
+@node Extract clumps and objects,  , Achieved surface brightness level, 
Detecting large extended targets
+@subsection Extract clumps and objects (Segmentation)
+In @ref{NoiseChisel optimization} we found a good detection map over the 
image, so pixels harboring signal have been differentiated from those that 
don't.
+For noise-related measurements like the surface brightness limit, this is fine.
+However, after finding the pixels with signal, you are most likely interested 
in knowing the sub-structure within them.
+For example, how many star-forming regions (those bright dots along the 
spiral arms) of M51 are within this image?
+What are the colors of each of these star-forming regions?
+In the outermost wings of M51, which pixels belong to background galaxies and 
foreground stars?
+And many other similar questions.
+To address these questions, you can use @ref{Segment} to identify all the 
``clumps'' and ``objects'' over the detection.
 
 @example
 $ astsegment r_detected.fits --output=r_segmented.fits
-$ ds9 -mecube r_segmented.fits -zscale -cmap sls -zoom to fit
+$ ds9 -mecube r_segmented.fits -cmap sls -zoom to fit -scale limits 0 2
 @end example
 
 @cindex DS9
 @cindex SAO DS9
-Open the output @file{r_segmented.fits} as a multi-extension data cube like 
before and flip through the first and second extensions to see the detected 
clumps (all pixels with a value larger than 1).
+Open the output @file{r_segmented.fits} as a multi-extension data cube with 
the second command above and flip through the first and second extensions; 
zoom in to the spiral arms of M51 and see the detected clumps (all pixels with 
a value larger than 1 in the second extension).
To optimize the parameters and make sure you have detected what you wanted, 
we recommend visually inspecting the detected clumps on the input image.
 
 For visual inspection, you can make a simple shell script like below.
@@ -4717,14 +4965,11 @@ set -u     # Stop execution when a variable is not 
initialized.
 # Default output is `$1_cat.fits'.
 astmkcatalog $1.fits --clumpscat --ids --ra --dec
 
-# Use Gnuastro's Table program to read the RA and Dec columns of the
-# clumps catalog (in the `CLUMPS' extension). Then pipe the columns
-# to AWK for saving as a DS9 region file.
-asttable $1"_cat.fits" -hCLUMPS -cRA,DEC                               \
-         | awk 'BEGIN @{ print "# Region file format: DS9 version 4.1"; \
-                        print "global color=green width=1";            \
-                        print "fk5" @}                                  \
-                @{ printf "circle(%s,%s,1\")\n", $1, $2 @}' > $1.reg
+# Use Gnuastro's Table and astscript-make-ds9-reg to build the DS9
+# region file (a circle of radius 1 arcsecond on each point).
+asttable $1"_cat.fits" -hCLUMPS -cRA,DEC \
+         | astscript-make-ds9-reg -c1,2 --mode=wcs --radius=1 \
+                                  --output=$1.reg
 
 # Show the image (with the requested color scale) and the region file.
 ds9 -geometry 1800x3000 -mecube $1.fits -zoom to fit                   \
@@ -16483,13 +16728,14 @@ For those who feel MakeCatalog's existing 
measurements/columns aren't enough and
 
 @menu
 * Detection and catalog production::  Discussing why/how to treat these 
separately
+* Brightness flux magnitude::   More on magnitudes, surface brightness, etc.
 * Quantifying measurement limits::  For comparing different catalogs.
 * Measuring elliptical parameters::  Estimating elliptical parameters.
 * Adding new columns to MakeCatalog::  How to add new columns.
 * Invoking astmkcatalog::       Options and arguments to MakeCatalog.
 @end menu
 
-@node Detection and catalog production, Quantifying measurement limits, 
MakeCatalog, MakeCatalog
+@node Detection and catalog production, Brightness flux magnitude, 
MakeCatalog, MakeCatalog
 @subsection Detection and catalog production
 
 Most existing common tools in low-level astronomical data-analysis (for 
example 
SExtractor@footnote{@url{https://www.astromatic.net/software/sextractor}}) 
merge the two processes of detection and measurement (catalog production) in 
one program.
@@ -16537,124 +16783,155 @@ It might even be so intertwined with its 
processing, that adding new columns mig
 
 
 
-@node Quantifying measurement limits, Measuring elliptical parameters, 
Detection and catalog production, MakeCatalog
-@subsection Quantifying measurement limits
 
-@cindex Depth
-@cindex Clump magnitude limit
-@cindex Object magnitude limit
-@cindex Limit, object/clump magnitude
-@cindex Magnitude, object/clump detection limit
-No measurement on a real dataset can be perfect: you can only reach a certain 
level/limit of accuracy.
-Therefore, a meaningful (scientific) analysis requires an understanding of 
these limits for the dataset and your analysis tools: different datasets have 
different noise properties and different detection methods (one 
method/algorithm/software that is run with a different set of parameters is 
considered as a different detection method) will have different abilities to 
detect or measure certain kinds of signal (astronomical objects) and their 
properties in the dataset.
-Hence, quantifying the detection and measurement limitations with a particular 
dataset and analysis tool is the most crucial/critical aspect of any high-level 
analysis.
 
-Here, we'll review some of the most general limits that are important in any 
astronomical data analysis and how MakeCatalog makes it easy to find them.
-Depending on the higher-level analysis, there are more tests that must be 
done, but these are relatively low-level and usually necessary in most cases.
-In astronomy, it is common to use the magnitude (a unit-less scale) and 
physical units, see @ref{Brightness flux magnitude}.
-Therefore the measurements discussed here are commonly used in units of 
magnitudes.
+
+@node Brightness flux magnitude, Quantifying measurement limits, Detection and 
catalog production, MakeCatalog
+@subsection Brightness, Flux, Magnitude and Surface brightness
+
+@cindex ADU
+@cindex Gain
+@cindex Counts
+Astronomical data pixels are usually in units of counts@footnote{Counts are 
also known as analog to digital units (ADU).} or electrons, or either one 
divided by seconds.
+To convert from the counts to electrons, you will need to know the instrument 
gain.
+In any case, they can be directly converted to energy or energy/time using the 
basic hardware (telescope, camera and filter) information.
+We will continue the discussion assuming the pixels are in units of 
energy/time.
 
 @table @asis
+@cindex Flux
+@cindex Luminosity
+@cindex Brightness
+@item Brightness
+The @emph{brightness} of an object is defined as its total detected energy per 
time.
+In the case of an imaged source, this is simply the sum of the pixels that are 
associated with that detection by our detection tool (for example 
@ref{NoiseChisel}@footnote{If further processing is done, for example the Kron 
or Petrosian radii are calculated, then the detected area is not sufficient and 
the total area that was within the respective radius must be used.}).
+The @emph{flux} of an object is defined in units of 
energy/time/collecting-area.
+For an astronomical target, the flux is therefore defined as its brightness 
divided by the area used to collect the light from the source: or the telescope 
aperture (for example in units of @mymath{cm^2}).
+Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can 
define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
 
-@item Surface brightness limit (of whole dataset)
-@cindex Surface brightness
-As we make more observations on one region of the sky, and add the 
observations into one dataset, the signal and noise both increase.
-However, the signal increase much faster than the noise: assuming you add 
@mymath{N} datasets with equal exposure times, the signal will increases as a 
multiple of @mymath{N}, while noise increases as @mymath{\sqrt{N}}.
-Thus this increases the signal-to-noise ratio.
-Qualitatively, fainter (per pixel) parts of the objects/signal in the image 
will become more visible/detectable.
-The noise-level is known as the dataset's surface brightness limit.
+Therefore, while flux and luminosity are intrinsic properties of the object, 
brightness depends on our detecting tools (hardware and software).
+In low-level observational astronomy data analysis, we are usually more 
concerned with measuring the brightness, because it is what we directly 
measure from the image pixels and report in catalogs.
+On the other hand, luminosity is used in higher-level analysis (after image 
contents have been measured as catalogs, to deduce physical interpretations).
+It is just important to avoid possible confusion between luminosity and 
brightness because both have the same units of energy per time.
 
-You can think of the noise as muddy water that is completely covering a flat 
ground@footnote{The ground is the sky value in this analogy, see @ref{Sky 
value}.
-Note that this analogy only holds for a flat sky value across the surface of 
the image or ground.}.
-The signal (or astronomical objects in this analogy) will be summits/hills 
that start from the flat sky level (under the muddy water) and can sometimes 
reach outside of the muddy water.
-Let's assume that in your first observation the muddy water has just been 
stirred and you can't see anything through it.
-As you wait and make more observations/exposures, the mud settles down and the 
@emph{depth} of the transparent water increases, making the summits visible.
-As the depth of clear water increases, the parts of the hills with lower 
heights (parts with lower surface brightness) can be seen more clearly.
-In this analogy, height (from the ground) is @emph{surface 
brightness}@footnote{Note that this muddy water analogy is not perfect, because 
while the water-level remains the same all over a peak, in data analysis, the 
Poisson noise increases with the level of data.} and the height of the muddy 
water is your surface brightness limit.
+@item Magnitude
+@cindex Magnitudes from flux
+@cindex Flux to magnitude conversion
+@cindex Astronomical Magnitude system
+Images of astronomical objects span a very large range of brightness: the Sun 
(as the brightest object) is roughly @mymath{2.5^{60}=10^{24}} times brighter 
than the faintest galaxies we can currently detect in the deepest images.
+Therefore discussing brightness directly will involve a large range of 
values, which is inconvenient.
+So astronomers have chosen to use a logarithmic scale to talk about the 
brightness of astronomical objects.
 
-@cindex Data's depth
-The outputs of NoiseChisel include the Sky standard deviation 
(@mymath{\sigma}) on every group of pixels (a mesh) that were calculated from 
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel 
output}.
-Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful 
meshes in the image (prior to interpolation or smoothing).
+@cindex Hipparchus of Nicaea
+But the logarithm is only usable with a dimensionless value that is always 
positive.
+Fortunately brightness is always positive (at least in theory@footnote{In 
practice, for very faint objects, if the background brightness is 
over-subtracted, we may end up with a negative brightness in a real object.}).
+To remove the dimensions, we divide the brightness of the object (@mymath{B}) 
by a reference brightness (@mymath{B_r}).
+We then define a logarithmic scale as @emph{magnitude} through the relation 
below.
+The @mymath{-2.5} factor in the definition of magnitudes is a legacy of our 
ancient colleagues, in particular Hipparchus of Nicaea (190-120 BC).
 
-On different instruments, pixels have different physical sizes (for example in 
micro-meters, or spatial angle over the sky).
-Nevertheless, a pixel is our unit of data collection.
-In other words, while quantifying the noise, the physical or projected size of 
the pixels is irrelevant.
-We thus define the Surface brightness limit or @emph{depth}, in units of 
magnitude/pixel, of a data-set, with zeropoint magnitude @mymath{z}, with the 
@mymath{n}th multiple of @mymath{\sigma_m} as (see @ref{Brightness flux 
magnitude}):
+@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
 
-@dispmath{SB_{\rm Pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z}
+@noindent
+@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the 
pre-defined magnitude of the reference brightness.
+For estimating the error in measuring a magnitude, see @ref{Quantifying 
measurement limits}.
 
-@cindex XDF survey
-@cindex CANDELS survey
-@cindex eXtreme Deep Field (XDF) survey
-As an example, the XDF survey covers part of the sky that the Hubble space 
telescope has observed the most (for 85 orbits) and is consequently very small 
(@mymath{\sim4} arcmin@mymath{^2}).
-On the other hand, the CANDELS survey, is one of the widest multi-color 
surveys covering several fields (about 720 arcmin@mymath{^2}) but its deepest 
fields have only 9 orbits observation.
-The depth of the XDF and CANDELS-deep surveys in the near infrared WFC3/F160W 
filter are respectively 34.40 and 32.45 magnitudes/pixel.
-In a single orbit image, this same field has a depth of 31.32.
-Recall that a larger magnitude corresponds to less brightness.
-
-The low-level magnitude/pixel measurement above is only useful when all the 
datasets you want to use belong to one instrument (telescope and camera).
-However, you will often find yourself using datasets from various instruments 
with different pixel scales (projected pixel sizes).
-If we know the pixel scale, we can obtain a more easily comparable surface 
brightness limit in units of: magnitude/arcsec@mymath{^2}.
-Let's assume that the dataset has a zeropoint value of @mymath{z}, and every 
pixel is @mymath{p} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels 
that cover an area of @mymath{A} arcsec@mymath{^2}).
-If the surface brightness is desired at the @mymath{n}th multiple of 
@mymath{\sigma_m}, the following equation (in units of magnitudes per 
@mymath{A} arcsec@mymath{^2}) can be used:
+@item Zero point
+@cindex Zero point magnitude
+@cindex Magnitude zero point
+A unique situation in the magnitude equation above occurs when the reference 
brightness is unity (@mymath{B_r=1}).
+This brightness will thus summarize all the hardware-specific parameters 
discussed above (like the conversion of pixel values to physical units) into 
one number.
+That reference magnitude is commonly known as the @emph{Zero point} magnitude 
because when @mymath{B=B_r=1}, the right side of the magnitude definition above 
will be zero.
+Using the zero point magnitude (@mymath{Z}), we can write the magnitude 
relation above in a simpler format:
+
+@dispmath{m = -2.5\log_{10}(B) + Z}
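As a quick numerical sanity check of this relation (the values here are 
hypothetical: a summed brightness of 100 counts and a zero point of 22.5, the 
SDSS value used elsewhere in this book), AWK can evaluate it directly:

```shell
# m = -2.5*log10(B) + Z; AWK's log() is the natural logarithm,
# so divide by log(10) to get log10.
echo "100 22.5" | awk '{printf "%.2f\n", -2.5*log($1)/log(10) + $2}'
# Prints: 17.50
```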
 
-@dispmath{SB_{\rm Projected}=-2.5\times\log_{10}{\left(n\sigma_m\sqrt{A\over 
p}\right)+z}}
+@cindex Janskys (Jy)
+@cindex AB magnitude
+@cindex Magnitude, AB
+Having the zero point of an image, you can convert its pixel values to 
physical units of microJanskys (or @mymath{\mu{}Jy}) to enable direct 
pixel-based comparisons with images from other instruments@footnote{Comparing 
data from different instruments assumes the instrument and observation 
signatures have been properly corrected, for example the flat-field or the 
Sky.}.
+Jansky is a commonly used unit for measuring spectral flux density and one 
Jansky is equivalent to @mymath{10^{-26} W/m^2/Hz} (watts per square meter per 
hertz).
+
+This conversion can be done with the fact that in the AB magnitude 
standard@footnote{@url{https://en.wikipedia.org/wiki/AB_magnitude}}, 
@mymath{3631Jy} corresponds to the zero-th magnitude, therefore 
@mymath{B\equiv3631\times10^{6}\mu{Jy}} and @mymath{m\equiv0}.
+We can therefore estimate the brightness (@mymath{B_z}, in @mymath{\mu{Jy}}) 
corresponding to the image zero point (@mymath{Z}) using this equation:
 
-The @mymath{\sqrt{A/p}} term comes from the fact that noise is added in RMS: 
if you add three datasets with noise @mymath{\sigma_1}, @mymath{\sigma_2} and 
@mymath{\sigma_3}, the resulting noise level is 
@mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when 
@mymath{\sigma_1=\sigma_2=\sigma_3=\sigma}, then 
@mymath{\sigma_t=\sqrt{3}\sigma}.
-As mentioned above, there are @mymath{A/p} pixels in the area @mymath{A}.
-Therefore, as @mymath{A/p} increases, the surface brightness limiting 
magnitude will become brighter.
+@dispmath{m - Z = -2.5\log_{10}(B/B_z)}
+@dispmath{0 - Z = -2.5\log_{10}({3631\times10^{6}\over B_z})}
+@dispmath{B_z = 3631\times10^{\left(6 - {Z \over 2.5} \right)} \mu{Jy}}
 
-It is just important to understand that the surface brightness limit is the 
raw noise level, @emph{not} the signal-to-noise.
-To get a feeling for it you can try these commands on any FITS image (let's 
assume its called @file{image.fits}), the output of the first command 
(@file{zero.fits}) will be the same size as the input, but all pixels will have 
a value of zero.
-We then add an ideal noise to this image and warp it to a new pixel size (such 
that the area of the new pixels is @code{area_per_pixel} times the input's), 
then we print the standard deviation of the raw noise and warped noise.
-Please open the output images an compare them (their sizes, or their pixel 
values) to get a good feeling of what is going on.
-Just note that this demo only works when @code{area_per_pixel} is larger than 
one.
+@cindex SDSS
+Because the image zero point corresponds to a pixel value of @mymath{1}, the 
@mymath{B_z} value calculated above also corresponds to a pixel value of 
@mymath{1}.
+Therefore you simply have to multiply your image by @mymath{B_z} to convert it 
to @mymath{\mu{Jy}}.
+Don't forget that this only applies when your zero point was also estimated in 
the AB magnitude system.
+On the command-line, you can estimate this value for a certain zero point with 
AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
+For example let's assume you are using an SDSS image with a zero point of 22.5:
 
 @example
-area_per_pixel=25
-scale=$(echo $area_per_pixel | awk '@{print sqrt($1)@}')
-astarithmetic image.fits -h0 nan + isblank not -ozero.fits
-astmknoise zero.fits -onoise.fits
-astwarp --scale=1/$scale,1/$scale noise.fits -onoise-w.fits
-std_raw=$(aststatistics noise.fits --std)
-std_warped=$(aststatistics noise-w.fits --std)
-echo;
-echo "(warped pixel area) = $area_per_pixel x (pixel area)"
-echo "Raw STD:    $std_raw"
-echo "Warped STD: $std_warped"
+bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
+astarithmetic sdss.fits $bz x --output=sdss-in-muJy.fits
 @end example
 
-As you see in this example, this is thus just an extrapolation of the 
per-pixel measurement @mymath{\sigma_m}.
-So it should be used with extreme care: for example the dataset must have an 
approximately flat depth or noise properties overall.
-A more accurate measure for each detection is known as the @emph{upper-limit 
magnitude} which actually uses random positioning of each detection's 
area/footprint, see the respective item below.
-The upper-limit magnitude doesn't extrapolate and even accounts for correlated 
noise patterns in relation to that detection.
-Therefore, the upper-limit magnitude is a much better measure of your 
dataset's surface brightness limit for each particular object.
+@noindent
+But in Gnuastro, it gets even easier: Arithmetic has an operator called 
@code{counts-to-jy}.
+This will directly convert your image pixels (in units of counts) to Janskys 
through a provided AB magnitude-based zero point like below.
+See @ref{Arithmetic operators} for more.
 
-MakeCatalog will calculate the input dataset's @mymath{SB_{\rm Pixel}} and 
@mymath{SB_{\rm Projected}} and write them as comments/meta-data in the output 
catalog(s).
-Just note that @mymath{SB_{\rm Projected}} is only calculated if the input has 
World Coordinate System (WCS).
+@example
+$ astarithmetic sdss.fits 22.5 counts-to-jy
+@end example
 
-@item Completeness limit (of each detection)
-@cindex Completeness
-As the surface brightness of the objects decreases, the ability to detect them 
will also decrease.
-An important statistic is thus the fraction of objects of similar morphology 
and brightness that will be identified with our detection algorithm/parameters 
in the given image.
-This fraction is known as completeness.
-For brighter objects, completeness is 1: all bright objects that might exist 
over the image will be detected.
-However, as we go to objects of lower overall surface brightness, we will fail 
to detect some, and gradually we are not able to detect anything any more.
-For a given profile, the magnitude where the completeness drops below a 
certain level (usually above @mymath{90\%}) is known as the completeness limit.
+@item Surface brightness
+@cindex Steradian
+@cindex Angular coverage
+@cindex Celestial sphere
+@cindex Surface brightness
+@cindex SI (International System of Units)
+Another important concept is the distribution of an object's brightness over 
its area.
+For this, we define the @emph{surface brightness} to be the magnitude of an 
object's brightness divided by its solid angle over the celestial sphere (or 
coverage in the sky, commonly in units of arcsec@mymath{^2}).
+The solid angle is expressed in units of arcsec@mymath{^2} because 
astronomical targets are usually much smaller than one steradian.
+Recall that the steradian is the dimension-less SI unit of a solid angle and 1 
steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial 
sphere.
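As a small check of that fraction (the full celestial sphere covers 
@mymath{4\pi} steradians):

```shell
# Fraction of the celestial sphere (4*pi steradians) covered by 1 sr.
awk 'BEGIN{pi=atan2(0,-1); printf "%.4f\n", 1/(4*pi)}'
# Prints: 0.0796
```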
 
-@cindex Purity
-@cindex False detections
-@cindex Detections false
-Another important parameter in measuring completeness is purity: the fraction 
of true detections to all true detections.
-In effect purity is the measure of contamination by false detections: the 
higher the purity, the lower the contamination.
-Completeness and purity are anti-correlated: if we can allow a large number of 
false detections (that we might be able to remove by other means), we can 
significantly increase the completeness limit.
+Surface brightness is therefore most commonly expressed in units of 
mag/arcsec@mymath{^2}.
+For example when the brightness is measured over an area of @mymath{A} 
arcsec@mymath{^2}, then the surface brightness becomes:
 
-One traditional way to measure the completeness and purity of a given sample 
is by embedding mock profiles in regions of the image with no detection.
-However in such a study we must be really careful to choose model profiles as 
similar to the target of interest as possible.
+@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
+
+@noindent
+In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is 
related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units 
of arcsec@mymath{^2}) through this equation:
+
+@dispmath{S = m + 2.5\log_{10}(A)}
+
+A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and 
divide the object's magnitude by its area.
+But this is wrong because magnitude is a logarithmic scale while area is 
linear.
+It is the brightness that should be divided by the solid angle because both 
have linear scales.
+The magnitude of that ratio is then defined to be the surface brightness.
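The relation above can also be evaluated on the command-line with AWK; the 
numbers below are hypothetical (a magnitude of 20 spread over 100 
arcsec@mymath{^2}):

```shell
# S = m + 2.5*log10(A), with m=20 and A=100 arcsec^2.
echo "20 100" | awk '{printf "%.1f\n", $1 + 2.5*log($2)/log(10)}'
# Prints: 25.0
```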
+@end table
+
+
+
+
+
+
+@node Quantifying measurement limits, Measuring elliptical parameters, 
Brightness flux magnitude, MakeCatalog
+@subsection Quantifying measurement limits
+
+@cindex Depth
+@cindex Clump magnitude limit
+@cindex Object magnitude limit
+@cindex Limit, object/clump magnitude
+@cindex Magnitude, object/clump detection limit
+No measurement on a real dataset can be perfect: you can only reach a certain 
level/limit of accuracy and a meaningful (scientific) analysis requires an 
understanding of these limits.
+Different datasets have different noise properties, and different detection 
methods (one method/algorithm/software that is run with a different set of 
parameters is considered a different detection method) have different 
abilities to detect or measure certain kinds of signal (astronomical objects) 
and their properties in the dataset.
+Hence, quantifying the detection and measurement limitations of a particular 
dataset and analysis tool is the most critical aspect of any high-level 
analysis.
+
+Here, we'll review some of the most commonly used methods to quantify the 
limits in astronomical data analysis and how MakeCatalog makes it easy to 
measure them.
+Depending on the higher-level analysis, there are more tests that must be 
done, but these are relatively low-level and usually necessary in most cases.
+In astronomy, it is common to use the magnitude (a unit-less, logarithmic scale) instead of raw physical units, see @ref{Brightness flux magnitude}.
+Therefore the measurements discussed here are commonly expressed in units of magnitudes.
+
+@table @asis
 
 @item Magnitude measurement error (of each detection)
-Any measurement has an error and this includes the derived magnitude for an 
object.
-Note that this value is only meaningful when the object's magnitude is 
brighter than the upper-limit magnitude (see the next items in this list).
+The raw error in measuring the magnitude is only meaningful when the object's 
magnitude is brighter than the upper-limit magnitude (see below).
 As discussed in @ref{Brightness flux magnitude}, the magnitude (@mymath{M}) of 
an object with brightness @mymath{B} and zero point magnitude @mymath{z} can be 
written as:
 
 @dispmath{M=-2.5\log_{10}(B)+z}
@@ -16677,38 +16954,135 @@ But, @mymath{\Delta{B}/B} is just the inverse of the 
Signal-to-noise ratio (@mym
 MakeCatalog uses this relation to estimate the magnitude errors.
 The signal-to-noise ratio is calculated in different ways for clumps and 
objects (see @url{https://arxiv.org/abs/1505.01664, Akhlaghi and Ichikawa 
[2015]}), but this single equation can be used to estimate the measured 
magnitude error afterwards for any type of target.
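For example, since @mymath{\Delta{B}/B} is the inverse of the signal-to-noise ratio, to first order the magnitude error is @mymath{2.5/(\ln{10}\times S/N)\approx1.0857/(S/N)}. For a hypothetical detection with @mymath{S/N=10}, you can evaluate this on the command-line:

```shell
# First-order error propagation on M = -2.5*log10(B) + z gives
# dM = (2.5/ln 10)*(dB/B) = 1.0857/(S/N).  Hypothetical S/N of 10:
echo 10 | awk '{printf "%.3f\n", 2.5/(log(10)*$1)}'
```

So a 10-sigma detection has a magnitude error of roughly a tenth of a magnitude.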
 
+@item Completeness limit (of each detection)
+@cindex Completeness
+As the surface brightness of the objects decreases, the ability to detect them 
will also decrease.
+An important statistic is thus the fraction of objects of similar morphology 
and brightness that will be detected with our detection algorithm/parameters in 
a given image.
+This fraction is known as @emph{completeness}.
+For brighter objects, completeness is 1: all bright objects that might exist 
over the image will be detected.
+However, as we go to objects of lower overall surface brightness, we will fail 
to detect a fraction of them, and fainter than a certain surface brightness 
level (for each morphology), nothing will be detectable in the image: you will 
need more data to construct a ``deeper'' image.
+For a given profile and dataset, the magnitude where the completeness drops 
below a certain level (usually above @mymath{90\%}) is known as the 
completeness limit.
+
+@cindex Purity
+@cindex False detections
+@cindex Detections false
+Another important parameter in measuring completeness is purity: the fraction of true detections to all detections.
+In effect purity is the measure of contamination by false detections: the 
higher the purity, the lower the contamination.
+Completeness and purity are anti-correlated: if we can allow a large number of 
false detections (that we might be able to remove by other means), we can 
significantly increase the completeness limit.
+
+One traditional way to measure the completeness and purity of a given sample 
is by embedding mock profiles in regions of the image with no detection.
+However, in such a study we must be very careful to choose model profiles that are as similar to the target of interest as possible.
+
+
+
 @item Upper limit magnitude (of each detection)
 Due to the noisy nature of data, it is possible to get arbitrarily low values 
for a faint object's brightness (or arbitrarily high @emph{magnitudes}).
 Given the scatter caused by the dataset's noise, values fainter than a certain 
level are meaningless: another similar depth observation will give a radically 
different value.
 
-For example, while the depth of the image is 32 magnitudes/pixel, a 
measurement that gives a magnitude of 36 for a @mymath{\sim100} pixel object is 
clearly unreliable.
-In another similar depth image, we might measure a magnitude of 30 for it, and 
yet another might give 33.
-Furthermore, due to the noise scatter so close to the depth of the data-set, 
the total brightness might actually get measured as a negative value, so no 
magnitude can be defined (recall that a magnitude is a base-10 logarithm).
-This problem usually becomes relevant when the detection labels were not 
derived from the values being measured (for example when you are estimating 
colors, see @ref{MakeCatalog}).
+For example, assume that you have done your detection and segmentation on one 
filter and now you do measurements over the same labeled regions, but on other 
filters to measure colors (as we did in the tutorial @ref{Segmentation and 
making a catalog}).
+Some objects are not going to have any significant signal in the other filters, but for example, you measure a magnitude of 36 for one of them!
+This is clearly unreliable (no dataset in current astronomy is able to detect 
such a faint signal).
+In another image with the same depth, using the same filter, you might measure 
a magnitude of 30 for it, and yet another might give you 33.
+Furthermore, the total brightness might actually be negative in some images of 
the same depth (due to noise).
+In these cases, no magnitude can be defined and MakeCatalog will place a NaN 
there (recall that a magnitude is a base-10 logarithm).
 
 @cindex Upper limit magnitude
 @cindex Magnitude, upper limit
 Using such unreliable measurements will directly affect our analysis, so we 
must not use the raw measurements.
-But how can we know how reliable a measurement on a given dataset is?
+When approaching the limits of your detection method, it is therefore 
important to be able to identify such cases.
+But how can we know how reliable a measurement of one object on a given 
dataset is?
 
-When we confront such unreasonably faint magnitudes, there is one thing we can 
deduce: that if something actually exists here (possibly buried deep under the 
noise), it's inherent magnitude is fainter than an @emph{upper limit magnitude}.
-To find this upper limit magnitude, we place the object's footprint 
(segmentation map) over random parts of the image where there are no 
detections, so we only have pure (possibly correlated) noise, along with 
undetected objects.
+When we confront such unreasonably faint magnitudes, there is one thing we can deduce: that if something actually exists under our labeled pixels (possibly buried deep under the noise), its inherent magnitude is fainter than an @emph{upper limit magnitude}.
+To find this upper limit magnitude, we place the object's footprint 
(segmentation map) over a random part of the image where there are no 
detections, and measure the total brightness within the footprint.
 Doing this a large number of times will give us a distribution of brightness 
values.
-The standard deviation (@mymath{\sigma}) of that distribution can be used to 
quantify the upper limit magnitude.
+The standard deviation (@mymath{\sigma_m}) of that distribution can be used to quantify the upper limit magnitude for that particular object (given its particular shape and area):
+
+@dispmath{M_{up,n\sigma}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad 
[mag/target]}
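The logic of the random placement can be sketched with a toy AWK simulation (all numbers are hypothetical, and uniform random values stand in for real sky noise): "place" a 100-pixel footprint over pure noise many times and measure the scatter of the summed values:

```shell
# Toy model of the upper-limit measurement: a 100-pixel footprint is
# placed 5000 times over pure uniform noise in [-0.5,0.5] (per-pixel
# sigma = 1/sqrt(12) ~ 0.289), and the standard deviation of the summed
# values is measured.  Theory predicts sqrt(100/12) ~ 2.89.
awk 'BEGIN{srand(1); n=5000
  for(i=0;i<n;i++){s=0; for(j=0;j<100;j++) s+=rand()-0.5; sum+=s; sq+=s*s}
  m=sum/n; printf "%.2f\n", sqrt(sq/n - m*m)}'
```

Multiplying such a measured scatter by the requested @mymath{n} and converting to magnitudes with the zero point gives the upper-limit magnitude of the equation above.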
 
 @cindex Correlated noise
-Traditionally, faint/small object photometry was done using fixed circular 
apertures (for example with a diameter of @mymath{N} arc-seconds).
-Hence, the upper limit was like the depth discussed above: one value for the 
whole image.
+Traditionally, faint/small object photometry was done using fixed circular apertures (for example with a diameter of @mymath{N} arc-seconds) and there wasn't much processing involved (to make a deep stack).
+Hence, the upper limit was synonymous with the surface brightness limit discussed below: one value for the whole image.
 The problem with this simplified approach is that the number of pixels in the 
aperture directly affects the final distribution and thus magnitude.
-Also the image correlated noise might actually create certain patters, so the 
shape of the object can also affect the final result result.
-Fortunately, with the much more advanced hardware and software of today, we 
can make customized segmentation maps for each object.
+Also the image correlated noise might actually create certain patterns, so the 
shape of the object can also affect the final result.
+Fortunately, with the much more advanced hardware and software of today, we 
can make customized segmentation maps (footprint) for each object and have 
enough computing power to actually place that footprint over many random places.
+As a result, the per-target upper-limit magnitude and general surface 
brightness limit have diverged.
 
-When requested, MakeCatalog will randomly place each target's footprint over 
the dataset as described above and estimate the resulting distribution's 
properties (like the upper limit magnitude).
+When any of the upper-limit-related columns are requested, MakeCatalog will randomly place each target's footprint over the undetected parts of the dataset as described above, and estimate the required properties.
 The procedure is fully configurable with the options in @ref{Upper-limit 
settings}.
-If one value for the whole image is required, you can either use the surface 
brightness limit above or make a circular aperture and feed it into MakeCatalog 
to request an upper-limit magnitude for it@footnote{If you intend to make 
apertures manually and not use a detection map (for example from 
@ref{Segment}), don't forget to use the @option{--upmaskfile} to give 
NoiseChisel's output (or any a binary map, marking detected pixels, see 
@ref{NoiseChisel output}) as a mask.
-Otherwise, the footprints may randomly fall over detections, giving highly 
skewed distributions, with wrong upper-limit distributions.
-See The description of @option{--upmaskfile} in @ref{Upper-limit settings} for 
more.}.
+You can get the full list of upper-limit related columns of MakeCatalog with 
this command (the extra @code{--} before @code{--upperlimit} is 
necessary@footnote{Without the extra @code{--}, grep will assume that 
@option{--upperlimit} is one of its own options, and will thus abort, 
complaining that it has no option with this name.}):
+
+@example
+$ astmkcatalog --help | grep -- --upperlimit
+@end example
+
+@item Surface brightness limit (of whole dataset)
+@cindex Surface brightness
+As we make more observations on one region of the sky, and add/combine the 
observations into one dataset, the signal increases much faster than the noise:
+Assuming you add @mymath{N} datasets with equal exposure times, the signal will increase as a multiple of @mymath{N}, while the noise will only increase as @mymath{\sqrt{N}}.
+Therefore the signal-to-noise ratio increases by a factor of @mymath{\sqrt{N}}.
+Qualitatively, fainter (per pixel) parts of the objects/signal in the image 
will become more visible/detectable.
+The noise-level is known as the dataset's surface brightness limit.
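For example, the gain from stacking @mymath{N=4} equal exposures can be computed directly: the signal-to-noise ratio doubles, so the limit deepens by @mymath{2.5\log_{10}\sqrt{4}} magnitudes:

```shell
# Deepening of the surface brightness limit after stacking N=4 equal
# exposures: 2.5*log10(sqrt(N)) magnitudes.
echo 4 | awk '{printf "%.2f\n", 2.5*log(sqrt($1))/log(10)}'
```

In other words, quadrupling the exposure time only buys about three quarters of a magnitude in depth.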
+
+@cindex Data's depth
+The outputs of NoiseChisel include the Sky standard deviation 
(@mymath{\sigma}) on every group of pixels (a tile) that were calculated from 
the undetected pixels in each tile, see @ref{Tessellation} and @ref{NoiseChisel 
output}.
+Let's take @mymath{\sigma_m} as the median @mymath{\sigma} over the successful tiles in the image (prior to interpolation or smoothing).
+It is recorded in the @code{MEDSTD} keyword of the @code{SKY_STD} extension of 
NoiseChisel's output.
+
+@cindex ACS camera
+@cindex Surface brightness limit
+@cindex Limit, surface brightness
+On different instruments, pixels cover different spatial angles over the sky.
+For example, the width of each pixel on the ACS camera on the Hubble Space 
Telescope (HST) is roughly 0.05 seconds of arc, while the pixels of SDSS are 
each 0.396 seconds of arc (almost eight times wider@footnote{Ground-based 
instruments like the SDSS suffer from strong smoothing due to the atmosphere.
+Therefore, increasing the pixel resolution (or decreasing the width of a pixel) won't increase the received information.}).
+Nevertheless, irrespective of its sky coverage, a pixel is our unit of data 
collection.
+
+To start with, we define the low-level surface brightness limit or 
@emph{depth}, in units of magnitude/pixel with the equation below (assuming the 
image has zero point magnitude @mymath{z} and we want the @mymath{n}th multiple 
of @mymath{\sigma_m}).
 
+@dispmath{SB_{n\sigma,\rm pixel}=-2.5\times\log_{10}{(n\sigma_m)}+z \quad\quad 
[mag/pixel]}
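For example, with a hypothetical @mymath{\sigma_m=0.01} (in image units), a zero point of @mymath{z=22.5} and @mymath{n=1}, this equation can be evaluated on the command-line:

```shell
# SB(n sigma, pixel) = -2.5*log10(n*sigma_m) + z
# Hypothetical values: sigma_m=0.01, z=22.5, n=1.
echo 0.01 22.5 1 | awk '{printf "%.2f\n", -2.5*log($3*$1)/log(10) + $2}'
```

This gives a @mymath{1\sigma} depth of 27.50 magnitudes/pixel for this hypothetical image.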
+
+@cindex XDF survey
+@cindex CANDELS survey
+@cindex eXtreme Deep Field (XDF) survey
+As an example, the XDF survey covers part of the sky that the HST has observed 
the most (for 85 orbits) and is consequently very small (@mymath{\sim4} minutes 
of arc, squared).
+On the other hand, the CANDELS survey is one of the widest multi-color surveys done by the HST, covering several fields (about 720 arcmin@mymath{^2}), but its deepest fields have only 9 orbits of observation.
+The @mymath{1\sigma} depth of the XDF and CANDELS-deep surveys in the near 
infrared WFC3/F160W filter are respectively 34.40 and 32.45 magnitudes/pixel.
+In a single-orbit image, this same field has a @mymath{1\sigma} depth of 31.32 magnitudes/pixel.
+Recall that a larger magnitude corresponds to less brightness, see 
@ref{Brightness flux magnitude}.
+
+@cindex Pixel scale
+The low-level magnitude/pixel measurement above is only useful when all the 
datasets you want to use, or compare, have the same pixel size.
+However, you will often find yourself using, or comparing, datasets from 
various instruments with different pixel scales (projected pixel width, in 
arc-seconds).
+If we know the pixel scale, we can obtain a more easily comparable surface 
brightness limit in units of: magnitude/arcsec@mymath{^2}.
+But another complication is that astronomical objects are usually larger than 1 arcsec@mymath{^2}, so it's common to measure the surface brightness depth over a larger (but fixed, depending on context) area.
+
+Let's assume that every pixel is @mymath{p} arcsec@mymath{^2} and we want the surface brightness limit for an object covering @mymath{A} arcsec@mymath{^2} (so @mymath{A/p} is the number of pixels that cover an area of @mymath{A} arcsec@mymath{^2}).
+On the other hand, noise adds in quadrature@footnote{If you add three datasets 
with noise @mymath{\sigma_1}, @mymath{\sigma_2} and @mymath{\sigma_3}, the 
resulting noise level is 
@mymath{\sigma_t=\sqrt{\sigma_1^2+\sigma_2^2+\sigma_3^2}}, so when 
@mymath{\sigma_1=\sigma_2=\sigma_3\equiv\sigma}, then 
@mymath{\sigma_t=\sigma\sqrt{3}}.
+In this case, the area @mymath{A} is covered by @mymath{A/p} pixels, so the 
noise level is @mymath{\sigma_t=\sigma\sqrt{A/p}}.}, hence the noise level in 
@mymath{A} arcsec@mymath{^2} is @mymath{n\sigma_m\sqrt{A/p}}.
+But we want the result in units of arcsec@mymath{^2}, so we should divide this 
by @mymath{A} arcsec@mymath{^2}:
+@mymath{n\sigma_m\sqrt{A/p}/A=n\sigma_m\sqrt{A/(pA^2)}=n\sigma_m/\sqrt{pA}}.
+Plugging this into the magnitude equation, we get the @mymath{n\sigma} surface brightness limit, over an area of @mymath{A} arcsec@mymath{^2}, in units of magnitudes/arcsec@mymath{^2}:
+
+@dispmath{SB_{{n\sigma,\rm A arcsec}^2}=-2.5\times\log_{10}{\left({n\sigma_m\over \sqrt{pA}}\right)}+z \quad\quad [mag/arcsec^2]}
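With hypothetical values of @mymath{\sigma_m=0.01} (image units), @mymath{z=22.5}, a pixel area of @mymath{p=0.0025} arcsec@mymath{^2} (0.05 arcsec-wide pixels), @mymath{n=3} and @mymath{A=100} arcsec@mymath{^2}, this equation can be evaluated on the command-line:

```shell
# SB(n sigma, A arcsec^2) = -2.5*log10( n*sigma_m/sqrt(p*A) ) + z
# Hypothetical values: sigma_m=0.01, z=22.5, p=0.0025, A=100, n=3.
echo 0.01 22.5 | awk '{n=3; p=0.0025; A=100
  printf "%.2f\n", -2.5*log(n*$1/sqrt(p*A))/log(10) + $2}'
```

Note how @mymath{\sqrt{pA}} in the denominator accounts for both the pixel scale and the area the limit is defined over.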
+
+@cindex World Coordinate System (WCS)
+MakeCatalog will calculate the input dataset's @mymath{SB_{n\sigma,\rm pixel}} and @mymath{SB_{{n\sigma,\rm A arcsec}^2}} and will write them as keywords/meta-data in the output catalog(s)@footnote{In a FITS table, these are stored as keywords starting with @code{SBL} (see @ref{MakeCatalog output}); when the output is plain-text, they are written as lines starting with a @code{#}.}, see @ref{Image surface brightness limit}.
+You can set your desired @mymath{n}-th multiple of @mymath{\sigma} and the 
@mymath{A} arcsec@mymath{^2} area using the following two options respectively: 
@option{--sfmagnsigma} and @option{--sfmagarea} (see @ref{MakeCatalog output}).
+Just note that @mymath{SB_{{n\sigma,\rm A arcsec}^2}} is only calculated if 
the input has World Coordinate System (WCS).
+Without WCS, the pixel scale cannot be derived.
+
+@cindex Correlated noise
+@cindex Noise, correlated
+As you saw in its derivation, the calculation above extrapolates the noise in 
one pixel over all the input's pixels!
+It therefore implicitly assumes that the noise is the same in all of the 
pixels.
+But this only happens in individual exposures: reduced data will have 
correlated noise because they are a stack of many individual exposures that 
have been warped (thus mixing the pixel values).
+A more accurate measure, which provides a realistic value for every labeled region, is the @emph{upper-limit magnitude} discussed below.
+
+
+@item Upper limit magnitude (of full dataset)
+As mentioned above, the upper-limit magnitude will depend on the shape of each 
object's footprint.
+Therefore, to characterize a whole dataset, we can measure its upper-limit magnitude over standard shapes.
+Traditionally a circular aperture of a fixed size (in arcseconds) has been 
used.
+For a full example of implementing this, see @ref{Image surface brightness 
limit}.
 @end table
 
 
@@ -17075,6 +17449,8 @@ The dataset given to @option{--stdfile} (and 
@option{--stdhdu} has the Sky varia
 Read the input STD image even if it is not required by any of the requested 
columns.
 This is because some of the output catalog's metadata may need it, for example 
to calculate the dataset's surface brightness limit (see @ref{Quantifying 
measurement limits}, configured with @option{--sfmagarea} and 
@option{--sfmagnsigma} in @ref{MakeCatalog output}).
 
+Furthermore, if the input STD image doesn't have the @code{MEDSTD} keyword 
(that is meant to contain the representative standard deviation of the full 
image), with this option, the median will be calculated and used for the 
surface brightness limit.
+
 @item -z FLT
 @itemx --zeropoint=FLT
 The zero point magnitude for the input image, see @ref{Brightness flux 
magnitude}.
@@ -17442,6 +17818,7 @@ For now these factors have to be found by other means.
 
 @item --upperlimit
 The upper limit value (in units of the input image) for this object or clump.
+This is the sigma-clipped standard deviation of the random distribution, multiplied by the value of @option{--upnsigma}.
 See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
 This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
@@ -17450,6 +17827,10 @@ The upper limit magnitude for this object or clump.
 See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
 This is very important for the fainter and smaller objects in the image where 
the measured magnitudes are not reliable.
 
+@item --upperlimitsb
+The upper-limit surface brightness (in units of mag/arcsec@mymath{^2}) of this 
labeled region (object or clump).
+This is a simple wrapper over lower-level columns: taking @mymath{B} and @mymath{A} to be the values of the @option{--upperlimit} and @option{--areaarcsec2} columns respectively, this column is filled using the surface brightness equation of @ref{Brightness flux magnitude}.
+
 @item --upperlimitonesigma
 The @mymath{1\sigma} upper limit value (in units of the input image) for this 
object or clump.
 See @ref{Quantifying measurement limits} and @ref{Upper-limit settings} for a 
complete explanation.
@@ -17723,13 +18104,48 @@ If an output filename is given (see @option{--output} 
in @ref{Input output optio
 When it isn't given, the input name will be appended with a @file{_cat} suffix 
(see @ref{Automatic output}) and its format will be determined from the 
@option{--tableformat} option, which is also discussed in @ref{Input output 
options}.
 @option{--tableformat} is also necessary when the requested output name is a 
FITS table (recall that FITS can accept ASCII and binary tables, see 
@ref{Table}).
 
-By default (when @option{--spectrum} isn't called) only a single catalog/table 
will be created for ``objects'', however, if @option{--clumpscat} is called, a 
secondary catalog/table will also be created.
-For more on ``objects'' and ``clumps'', see @ref{Segment}.
-In short, if you only have one set of labeled images, you don't have to worry 
about clumps (they are deactivated by default).
+By default (when @option{--spectrum} or @option{--clumpscat} aren't called) 
only a single catalog/table will be created for the labeled ``objects''.
+
+@itemize
+@item
+If @option{--clumpscat} is called, a secondary catalog/table will also be created for ``clumps'' (one of the outputs of the Segment program; for more on ``objects'' and ``clumps'', see @ref{Segment}).
+In short, if you only have one labeled image, you don't have to worry about clumps and can simply ignore this.
+@item
+When @option{--spectrum} is called, it is not mandatory to specify any 
single-valued measurement columns. In this case, the output will only be the 
spectra of each labeled region within a 3D datacube.
+For more, see the description of @option{--spectrum} in @ref{MakeCatalog 
measurements}.
+@end itemize
+
+@cindex Surface brightness limit
+@cindex Limit, Surface brightness
+When possible, MakeCatalog will also measure the full input's noise level 
(also known as surface brightness limit, see @ref{Quantifying measurement 
limits}).
+Since these measurements are related to the noise and not any particular 
labeled object, they are stored as keywords in the output table.
+Furthermore, they are only possible when a standard deviation image has been 
loaded (done automatically for any column measurement that involves noise, for 
example @option{--sn} or @option{--std}).
+But if you just want the surface brightness limit and no noise-related column, 
you can use @option{--forcereadstd}.
+All these keywords start with @code{SBL} (for ``surface brightness limit'') 
and are described below:
+
+@table @code
+@item SBLSTD
+Per-pixel standard deviation.
+If a @code{MEDSTD} keyword exists in the standard deviation dataset, then that 
value is directly used.
+
+@item SBLNSIG
+Sigma multiple for surface brightness limit (value you gave to 
@option{--sfmagnsigma}), used for @code{SBLMAGPX} and @code{SBLMAG}.
+
+@item SBLMAGPX
+Per-pixel surface brightness limit (in units of magnitudes/pixel).
 
-When @option{--spectrum} is called, it is not mandatory to specify any 
single-valued measurement columns. In this case, the output will only be the 
spectra of each labeled region. See the description of @option{--spectrum} in 
@ref{MakeCatalog measurements}.
+@item SBLAREA
+Area (in units of arcsec@mymath{^2}) used in @code{SBLMAG} (value you gave to 
@option{--sfmagarea}).
 
-The full list of MakeCatalog's output options are elaborated below.
+@item SBLMAG
+Surface brightness limit of data calculated over @code{SBLAREA} (in units of 
mag/arcsec@mymath{^2}).
+@end table
+
+When any of the upper-limit measurements are requested, the input parameters 
for the upper-limit measurement are stored in the keywords starting with 
@code{UP}: @code{UPNSIGMA}, @code{UPNUMBER}, @code{UPRNGNAM}, @code{UPRNGSEE}, 
@code{UPSCMLTP}, @code{UPSCTOL}.
+These are primarily input arguments, so they correspond to the options with a 
similar name.
+
+The full list of MakeCatalog's options relating to the output file format and keywords is given below.
+See @ref{MakeCatalog measurements} for specifying which columns you want in 
the final catalog.
 
 @table @option
 @item -C
@@ -18112,7 +18528,6 @@ After all the transformations are applied, using the 
WCS information in each ind
 @menu
 * Modeling basics::             Astronomical modeling basics.
 * If convolving afterwards::    Considerations for convolving later.
-* Brightness flux magnitude::   About these measures of energy.
 * Profile magnitude::           Definition of total profile magnitude.
 * Invoking astmkprof::          Inputs and Options for MakeProfiles.
 @end menu
@@ -18429,7 +18844,7 @@ This is because unlike the PSF, the noise occurs in 
each output pixel, not on a
 
 
 
-@node If convolving afterwards, Brightness flux magnitude, Modeling basics, 
MakeProfiles
+@node If convolving afterwards, Profile magnitude, Modeling basics, 
MakeProfiles
 @subsection If convolving afterwards
 
 In case you want to convolve the image later with a given point spread 
function, make sure to use a larger image size.
@@ -18445,120 +18860,13 @@ To facilitate this shift, MakeProfiles has the 
options @option{--xshift}, @optio
 
 
 
-@node Brightness flux magnitude, Profile magnitude, If convolving afterwards, 
MakeProfiles
-@subsection Brightness, Flux, Magnitude and Surface brightness
-
-@cindex ADU
-@cindex Gain
-@cindex Counts
-Astronomical data pixels are usually in units of counts@footnote{Counts are 
also known as analog to digital units (ADU).} or electrons or either one 
divided by seconds.
-To convert from the counts to electrons, you will need to know the instrument 
gain.
-In any case, they can be directly converted to energy or energy/time using the 
basic hardware (telescope, camera and filter) information.
-We will continue the discussion assuming the pixels are in units of 
energy/time.
-
-@cindex Flux
-@cindex Luminosity
-@cindex Brightness
-The @emph{brightness} of an object is defined as its total detected energy per 
time.
-In the case of an imaged source, this is simply the sum of the pixels that are 
associated with that detection by our detection tool (for example 
@ref{NoiseChisel}@footnote{If further processing is done, for example the Kron 
or Petrosian radii are calculated, then the detected area is not sufficient and 
the total area that was within the respective radius must be used.}).
-The @emph{flux} of an object is defined in units of 
energy/time/collecting-area.
-For an astronomical target, the flux is therefore defined as its brightness 
divided by the area used to collect the light from the source: or the telescope 
aperture (for example in units of @mymath{cm^2}).
-Knowing the flux (@mymath{f}) and distance to the object (@mymath{r}), we can 
define its @emph{luminosity}: @mymath{L=4{\pi}r^2f}.
-
-Therefore, while flux and luminosity are intrinsic properties of the object, 
brightness depends on our detecting tools (hardware and software).
-In low-level observational astronomy data analysis, we are usually more 
concerned with measuring the brightness, because it is the thing we directly 
measure from the image pixels and create in catalogs.
-On the other hand, luminosity is used in higher-level analysis (after image 
contents are measured as catalogs to deduce physical interpretations).
-It is just important avoid possible confusion between luminosity and 
brightness because both have the same units of energy per seconds.
-
-@cindex Magnitudes from flux
-@cindex Flux to magnitude conversion
-@cindex Astronomical Magnitude system
-Images of astronomical objects span over a very large range of brightness.
-With the Sun (as the brightest object) being roughly @mymath{2.5^{60}=10^{24}} 
times brighter than the fainter galaxies we can currently detect in the deepest 
images.
-Therefore discussing brightness directly will involve a large range of values 
which is inconvenient.
-So astronomers have chosen to use a logarithmic scale to talk about the 
brightness of astronomical objects.
-
-@cindex Hipparchus of Nicaea
-But the logarithm can only be usable with a dimensionless value that is always 
positive.
-Fortunately brightness is always positive (at least in theory@footnote{In 
practice, for very faint objects, if the background brightness is 
over-subtracted, we may end up with a negative brightness in a real object.}).
-To remove the dimensions, we divide the brightness of the object (@mymath{B}) 
by a reference brightness (@mymath{B_r}).
-We then define a logarithmic scale as @mymath{magnitude} through the relation 
below.
-The @mymath{-2.5} factor in the definition of magnitudes is a legacy of the 
our ancient colleagues and in particular Hipparchus of Nicaea (190-120 BC).
-
-@dispmath{m-m_r=-2.5\log_{10} \left( B \over B_r \right)}
-
-@cindex Zero point magnitude
-@cindex Magnitude zero point
-@noindent
-@mymath{m} is defined as the magnitude of the object and @mymath{m_r} is the 
pre-defined magnitude of the reference brightness.
-One particularly easy condition is when the reference brightness is unity 
(@mymath{B_r=1}).
-This brightness will thus summarize all the hardware-specific parameters 
discussed above (like the conversion of pixel values to physical units) into 
one number.
-That reference magnitude which is commonly known as the @emph{Zero point} 
magnitude (because when @mymath{B=Br=1}, the right side of the magnitude 
definition above will be zero).
-Using the zero point magnitude (@mymath{Z}), we can write the magnitude 
relation above in a more simpler format:
-
-@dispmath{m = -2.5\log_{10}(B) + Z}
-
-@cindex Janskys (Jy)
-@cindex AB magnitude
-@cindex Magnitude, AB
-Having the zero point of an image, you can convert its pixel values to 
physical units of microJanskys (or @mymath{\mu{}Jy}) to enable direct 
pixel-based comparisons with images from other instruments (just note that this 
assumes instrument and observation signatures are corrected, things like the 
flat-field or the Sky).
-Jansky is used for measuring spectral flux density.
-One Jansky is equivalent to @mymath{10^{-26} W/m^2/Hz} (watts per square meter 
per hertz).
-
-This conversion can be done with the fact that in the AB magnitude 
standard@footnote{@url{https://en.wikipedia.org/wiki/AB_magnitude}}, 
@mymath{3631Jy} corresponds to the zero-th magnitude, therefore 
@mymath{B\equiv3631\times10^{6}\mu{Jy}} and @mymath{m\equiv0}.
-We can therefore estimate the brightness (@mymath{B_z}, in @mymath{\mu{Jy}}) 
corresponding to the image zero point (@mymath{Z}) using this equation:
-
-@dispmath{m - Z = -2.5\log_{10}(B/B_z)}
-@dispmath{0 - Z = -2.5\log_{10}({3631\times10^{6}\over B_z})}
-@dispmath{B_z = 3631\times10^{\left(6 - {Z \over 2.5} \right)} \mu{Jy}}
-
-@cindex SDSS
-Because the image zero point corresponds to a pixel value of @mymath{1}, the 
@mymath{B_z} value calculated above also corresponds to a pixel value of 
@mymath{1}.
-Therefore you simply have to multiply your image by @mymath{B_z} to convert it 
to @mymath{\mu{Jy}}.
-Don't forget that this only applies when your zero point was also estimated in 
the AB magnitude system.
-On the command-line, you can estimate this value for a certain zero point with 
AWK, then multiply it to all the pixels in the image with @ref{Arithmetic}.
-For example let's assume you are using an SDSS image with a zero point of 22.5:
-
-@example
-bz=$(echo 22.5 | awk '@{print 3631 * 10^(6-$1/2.5)@}')
-astarithmetic sdss.fits $bz x --output=sdss-in-muJy.fits
-@end example
-
-@noindent
-But in Gnuastro, it gets even easier: Arithmetic has an operator called 
@code{counts-to-jy}.
-This will directly convert your image pixels (in units of counts) to Janskys 
though a provided AB Magnitude-based zero point like below.
-See @ref{Arithmetic operators} for more.
-
-@example
-$ astarithmetic sdss.fits 22.5 counts-to-jy
-@end example
-
-@cindex Steradian
-@cindex Angular coverage
-@cindex Celestial sphere
-@cindex Surface brightness
-@cindex SI (International System of Units)
-Another important concept is the distribution of an object's brightness over 
its area.
-For this, we define the @emph{surface brightness} to be the magnitude of an 
object's brightness divided by its solid angle over the celestial sphere (or 
coverage in the sky, commonly in units of arcsec@mymath{^2}).
-The solid angle is expressed in units of arcsec@mymath{^2} because 
astronomical targets are usually much smaller than one steradian.
-Recall that the steradian is the dimension-less SI unit of a solid angle and 1 
steradian covers @mymath{1/4\pi} (almost @mymath{8\%}) of the full celestial 
sphere.
 
-Surface brightness is therefore most commonly expressed in units of 
mag/arcsec@mymath{2}.
-For example when the brightness is measured over an area of A 
arcsec@mymath{^2}, then the surface brightness becomes:
 
-@dispmath{S = -2.5\log_{10}(B/A) + Z = -2.5\log_{10}(B) + 2.5\log_{10}(A) + Z}
 
-@noindent
-In other words, the surface brightness (in units of mag/arcsec@mymath{^2}) is related to the object's magnitude (@mymath{m}) and area (@mymath{A}, in units of arcsec@mymath{^2}) through this equation:
 
-@dispmath{S = m + 2.5\log_{10}(A)}
 
-A common mistake is to follow the mag/arcsec@mymath{^2} unit literally, and divide the object's magnitude by its area.
-But this is wrong because magnitude is a logarithmic scale while area is linear.
-It is the brightness that should be divided by the solid angle because both have linear scales.
-The magnitude of that ratio is then defined to be the surface brightness.
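To make the relation concrete, here is a small shell sketch (an editor's illustration with made-up numbers, not from the manual) evaluating @mymath{S = m + 2.5\log_{10}(A)} for a hypothetical object of magnitude 20 spread over 25 arcsec@mymath{^2}:

```shell
# Surface brightness S = m + 2.5*log10(A).
# m (magnitude) and A (area in arcsec^2) are hypothetical values.
# AWK's log() is the natural log, so divide by log(10) to get log10.
m=20
A=25
awk -v m="$m" -v A="$A" \
    'BEGIN {printf "%.2f\n", m + 2.5*log(A)/log(10)}'
# -> 23.49 mag/arcsec^2
```

As expected, spreading the same brightness over a larger area makes the surface brightness numerically larger (fainter), since the @mymath{+2.5\log_{10}(A)} term grows with @mymath{A}.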
 
-@node Profile magnitude, Invoking astmkprof, Brightness flux magnitude, MakeProfiles
+@node Profile magnitude, Invoking astmkprof, If convolving afterwards, MakeProfiles
 @subsection Profile magnitude
 
 @cindex Brightness
@@ -20222,7 +20530,7 @@ If you do confront such strange errors, please submit a bug report so we fix it
 * SAO DS9 region files from table::  Create ds9 region file from a table.
 @end menu
 
-@node Sort FITS files by night, SAO DS9 region files from table, Installed scripts, Installed scripts
+@node Sort FITS files by night, Generate radial profile, Installed scripts, Installed scripts
 @section Sort FITS files by night
 
 @cindex Calendar
@@ -20423,7 +20731,7 @@ For example @code{--prefix=/path/to/processing/img-} will put all the links/copi
 
 
 
-@node Generate radial profile,  , SAO DS9 region files from table, Installed scripts
+@node Generate radial profile, SAO DS9 region files from table, Sort FITS files by night, Installed scripts
 @section Generate radial profile
 
 @cindex Radial profile
@@ -20630,7 +20938,7 @@ For example, to check that the profiles generated for obtaining the radial profi
 
 
 
-@node SAO DS9 region files from table, Generate radial profile, Sort FITS files by night, Installed scripts
+@node SAO DS9 region files from table,  , Generate radial profile, Installed scripts
 @section SAO DS9 region files from table
 
Once your desired catalog (containing the positions of some objects) is created (for example with @ref{MakeCatalog}, @ref{Match}, or @ref{Table}) it often happens that you want to see your selected objects on an image for a feeling of the spatial properties of your objects.
diff --git a/lib/fits.c b/lib/fits.c
index 10a0bac..646027a 100644
--- a/lib/fits.c
+++ b/lib/fits.c
@@ -2052,7 +2052,7 @@ gal_fits_key_write_version_in_ptr(gal_fits_list_key_t **keylist, char *title,
   if(gitdescribe)
     {
       fits_update_key(fptr, TSTRING, "COMMIT", gitdescribe,
-                      "Git's commit description in running dir.", &status);
+                      "Git commit in running directory.", &status);
       free(gitdescribe);
     }
 
diff --git a/lib/txt.c b/lib/txt.c
index 7f2c942..5ed3978 100644
--- a/lib/txt.c
+++ b/lib/txt.c
@@ -1513,6 +1513,11 @@ txt_write_keys(FILE *fp, struct gal_fits_list_key_t **keylist)
                   tmp->title);
           if(tmp->tfree) free(tmp->title);
         }
+      else if (tmp->fullcomment)
+        {
+          fprintf(fp, "# %s\n", tmp->fullcomment);
+          if(tmp->fcfree) free(tmp->fullcomment);
+        }
       else
         {
           /* For a string type, we need to return a pointer to the


