grub-devel

Re: [PATCH] lvm: add lvm cache logical volume handling


From: Daniel Kiper
Subject: Re: [PATCH] lvm: add lvm cache logical volume handling
Date: Wed, 23 Oct 2019 12:48:23 +0200
User-agent: NeoMutt/20170113 (1.7.2)

On Wed, Oct 16, 2019 at 06:15:30AM +0000, Michael Chang wrote:
> The LVM cache logical volume is a logical volume consisting of the
> original logical volume and the cache pool logical volume. The original is
> usually on a larger but slower storage device, while the cache pool is on a
> smaller and faster one. The performance of the original volume can be
> improved by storing frequently used data on the cache pool, taking
> advantage of the faster device.
>
> The default cache mode, "writethrough", ensures that any data written is
> stored both in the cache and on the origin LV, therefore grub can go
> straight to reading the origin LV, as it is guaranteed that no data is lost.
>
> The second cache mode is "writeback", which delays writing data from the
> cache pool back to the origin LV for increased performance. The drawback is
> potential data loss if the associated cache device is lost.
>
> At boot time grub reads the LVM metadata offline, i.e. the LVM volumes are
> not activated and mounted, so IMHO it should be fine to read directly from
> the origin LV, since all cached data should have been flushed back in the
> process of taking it offline.

Is it possible to enforce that all GRUB writes go to the original device
instead of the cache during the installation process?

> Signed-off-by: Michael Chang <address@hidden>
> ---
>  grub-core/disk/lvm.c | 125 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 125 insertions(+)
>
> diff --git a/grub-core/disk/lvm.c b/grub-core/disk/lvm.c
> index 32b9e289f..a1eeaa737 100644
> --- a/grub-core/disk/lvm.c
> +++ b/grub-core/disk/lvm.c
> @@ -272,6 +272,13 @@ grub_lvm_detect (grub_disk_t disk,
>
>    if (! vg)
>      {
> +      struct cache_lv {
> +     struct grub_diskfilter_lv *lv;
> +     char *cache_pool;
> +     char *origin;
> +     struct cache_lv *next;
> +      } *cache_lvs = NULL;

I would move the struct cache_lv definition to the beginning of the file,
though you can leave the variable declaration here.
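Something along these lines at file scope, keeping the fields from your patch
(sketch only):

    /* Near the other declarations at the top of grub-core/disk/lvm.c.  */
    struct cache_lv
    {
      struct grub_diskfilter_lv *lv;
      char *cache_pool;
      char *origin;
      struct cache_lv *next;
    };

The "struct cache_lv *cache_lvs = NULL;" declaration itself can then stay
where it is in grub_lvm_detect().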

> +
>        /* First time we see this volume group. We've to create the
>        whole volume group structure. */
>        vg = grub_malloc (sizeof (*vg));
> @@ -701,6 +708,72 @@ grub_lvm_detect (grub_disk_t disk,
>                         seg->nodes[seg->node_count - 1].name = tmp;
>                       }
>                   }
> +               else if (grub_memcmp (p, "cache\"",
> +                                sizeof ("cache\"") - 1) == 0)
> +                 {
> +                   struct cache_lv *cache;
> +
> +                   char *p2, *p3;
> +                   grub_ssize_t sz;
> +
> +                   cache = grub_zalloc (sizeof (*cache));
> +                   cache->lv = grub_zalloc (sizeof (*cache->lv));

You blindly assume that grub_zalloc() does not fail. This is dangerous.
Please fix it.
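For example (just a sketch; it reuses the cache_lv_fail label from your patch
and assumes that error path copes with a partially initialized or NULL cache):

    cache = grub_zalloc (sizeof (*cache));
    if (cache == NULL)
      goto cache_lv_fail;
    cache->lv = grub_zalloc (sizeof (*cache->lv));
    if (cache->lv == NULL)
      goto cache_lv_fail;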

> +                   grub_memcpy (cache->lv, lv, sizeof (*cache->lv));
> +
> +                   if (lv->fullname)
> +                     cache->lv->fullname = grub_strdup (lv->fullname);
> +                   if (lv->idname)
> +                     cache->lv->idname = grub_strdup (lv->idname);
> +                   if (lv->name)
> +                     cache->lv->name = grub_strdup (lv->name);

Same for grub_strdup()...

> +
> +                   skip_lv = 1;
> +
> +                   if (!(p2 = grub_strstr (p, "cache_pool = \"")))
> +                     goto cache_lv_fail;
> +
> +                   if (!(p2 = grub_strchr (p2, '"')))
> +                     goto cache_lv_fail;
> +
> +                   p3 = ++p2;
> +                   if (!(p3 = grub_strchr (p3, '"')))
> +                     goto cache_lv_fail;
> +
> +                   sz = p3 - p2;
> +
> +                   cache->cache_pool = grub_malloc (sz + 1);

Same for grub_malloc() and below. Please fix all of them.
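E.g. (sketch; the copy and NUL termination that presumably follow are not
visible in the quoted hunk, so they are an assumption here):

    cache->cache_pool = grub_malloc (sz + 1);
    if (cache->cache_pool == NULL)
      goto cache_lv_fail;
    grub_memcpy (cache->cache_pool, p2, sz);
    cache->cache_pool[sz] = '\0';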

Daniel


