qemu-devel

Re: [PATCH] x86: add SEV hashing to fw_cfg for kernel/initrd/cmdline


From: Eduardo Habkost
Subject: Re: [PATCH] x86: add SEV hashing to fw_cfg for kernel/initrd/cmdline
Date: Thu, 17 Jun 2021 16:35:18 -0400

On Thu, Jun 17, 2021 at 3:17 PM Dov Murik <dovmurik@linux.ibm.com> wrote:
>
>
>
> On 17/06/2021 20:22, Eduardo Habkost wrote:
> > On Thu, Jun 17, 2021 at 03:48:52PM +0300, Dov Murik wrote:
> >>
> >>
> >> On 15/06/2021 22:53, Philippe Mathieu-Daudé wrote:
> >>> Hi Dov, James,
> >>>
> >>> +Connor who asked to be reviewer.
> >>>
> >>> On 6/15/21 5:20 PM, Eduardo Habkost wrote:
> >>>> On Tue, May 25, 2021 at 06:59:31AM +0000, Dov Murik wrote:
> >>>>> From: James Bottomley <jejb@linux.ibm.com>
> >>>>>
> >>>>> If the VM is using memory encryption and also specifies a kernel/initrd
> >>>>> or appended command line, calculate the hashes and add them to the
> >>>>> encrypted data.  For this to work, OVMF must support an encrypted area
> >>>>> to place the data which is advertised via a special GUID in the OVMF
> >>>>> reset table (if the GUID doesn't exist, the user isn't allowed to pass
> >>>>> in the kernel/initrd/cmdline via the fw_cfg interface).
> >>>>>
> >>>>> The hash of each of the files is calculated (or of the cmdline string,
> >>>>> with its trailing '\0' included).  Each entry in the hashes table is
> >>>>> identified by a GUID, and since the entries are passed through the
> >>>>> memcrypt interface, the hash of the encrypted data will be accumulated
> >>>>> by the PSP.
> >>>>>
> >>>>> Signed-off-by: James Bottomley <jejb@linux.ibm.com>
> >>>>> Signed-off-by: Dov Murik <dovmurik@linux.ibm.com>
> >>>>> [dovmurik@linux.ibm.com: use machine->cgs, remove parsing of GUID
> >>>>> strings, remove GCC pragma, fix checkpatch errors]
> >>>>> ---
> >>>>>
> >>>>> OVMF support for handling the table of hashes (verifying that the
> >>>>> kernel/initrd/cmdline passed via the fw_cfg interface indeed correspond
> >>>>> to the measured hashes in the table) will be posted soon to edk2-devel.
> >>>>>
> >>>>> ---
> >>>>>  hw/i386/x86.c | 120 +++++++++++++++++++++++++++++++++++++++++++++++++-
> >>>>>  1 file changed, 119 insertions(+), 1 deletion(-)
> >>>>>
> >>>>
> >>>> This is not an objection to the patch itself, but: can we do
> >>>> something to move all sev-related code to sev.c?  It would make
> >>>> the process of assigning a maintainer and reviewing/merging
> >>>> future patches much simpler.
> >>>>
> >>>> I am not familiar with SEV internals, so my only question is
> >>>> about configurations where SEV is disabled:
> >>>>
> >>>> [...]
> >>>>> diff --git a/hw/i386/x86.c b/hw/i386/x86.c
> >>>>> @@ -778,6 +818,11 @@ void x86_load_linux(X86MachineState *x86ms,
> >>>>>      const char *initrd_filename = machine->initrd_filename;
> >>>>>      const char *dtb_filename = machine->dtb;
> >>>>>      const char *kernel_cmdline = machine->kernel_cmdline;
> >>>>> +    uint8_t buf[HASH_SIZE];
> >>>>> +    uint8_t *hash = buf;
> >>>>> +    size_t hash_len = sizeof(buf);
> >>>>> +    struct sev_hash_table *sev_ht = NULL;
> >>>>> +    int sev_ht_index = 0;
> >>>
> >>> Can you move all these variable into a structure, and use it as a
> >>> SEV loader context?
> >>>
> >>> Then each block of code you added can be moved to its own function,
> >>> self-described, working with the previous context.
> >>>
> >>> The functions can be declared in sev_i386.h and defined in sev.c as
> >>> Eduardo suggested.
> >>>
> >>
> >> Thanks Philippe, I'll try this approach.
> >>
> >>
> >> I assume you mean that an addition like this:
> >>
> >> +    if (sev_ht) {
> >> +        struct sev_hash_table_entry *e = &sev_ht->entries[sev_ht_index++];
> >> +
> >> +        qcrypto_hash_bytes(QCRYPTO_HASH_ALG_SHA256, (char *)kernel_cmdline,
> >> +                           strlen(kernel_cmdline) + 1,
> >> +                           &hash, &hash_len, &error_fatal);
> >> +        memcpy(e->hash, hash, hash_len);
> >> +        e->len = sizeof(*e);
> >> +        memcpy(e->guid, sev_cmdline_entry_guid, sizeof(e->guid));
> >> +    }
> >>
> >> will be extracted to a function, and here (in x86_load_linux()) replaced
> >> with a single call:
> >>
> >>     sev_kernel_loader_calc_cmdline_hash(&sev_loader_context, kernel_cmdline);
> >>
> >> and that function will have an empty stub in non-SEV builds, an early
> >> do-nothing return if it's an SEV-disabled VM, and will do the actual
> >> work in SEV VMs.
> >>
> >> Right?
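[The extraction Dov describes above might look roughly like the sketch below. It is self-contained and illustrative only: the entry layout is inferred from the quoted code's use of e->guid, e->len, and e->hash (the real field order may differ), the GUID value is a zeroed placeholder, and placeholder_digest stands in for QEMU's qcrypto_hash_bytes(QCRYPTO_HASH_ALG_SHA256, ...).]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_SIZE 32 /* SHA-256 digest size */

/* Entry layout implied by the quoted code; exact field order is a guess. */
struct sev_hash_table_entry {
    uint8_t guid[16];
    uint16_t len;
    uint8_t hash[HASH_SIZE];
};

struct sev_hash_table {
    struct sev_hash_table_entry entries[3]; /* cmdline, initrd, kernel */
};

/* Hypothetical loader context along the lines Philippe suggested. */
struct sev_kernel_loader_context {
    struct sev_hash_table *table; /* NULL when SEV hashing is unavailable */
    int index;
};

/* Placeholder value; the real patch uses a well-known GUID. */
static const uint8_t sev_cmdline_entry_guid[16] = { 0 };

/* Stand-in digest so the sketch compiles on its own; QEMU would call
 * qcrypto_hash_bytes() here instead. */
static void placeholder_digest(const uint8_t *data, size_t len, uint8_t *out)
{
    memset(out, 0, HASH_SIZE);
    for (size_t i = 0; i < len; i++) {
        out[i % HASH_SIZE] ^= data[i];
    }
}

/* Early do-nothing return when SEV is disabled (table == NULL);
 * otherwise fill one GUID-identified entry in the hashes table. */
static void sev_kernel_loader_calc_cmdline_hash(struct sev_kernel_loader_context *ctx,
                                                const char *cmdline)
{
    struct sev_hash_table_entry *e;

    if (!ctx->table) {
        return; /* SEV-disabled VM: nothing to do */
    }
    e = &ctx->table->entries[ctx->index++];
    placeholder_digest((const uint8_t *)cmdline, strlen(cmdline) + 1, e->hash);
    e->len = sizeof(*e);
    memcpy(e->guid, sev_cmdline_entry_guid, sizeof(e->guid));
}
```

[In a non-SEV build the whole function would additionally be replaced by an empty stub, so x86_load_linux() can call it unconditionally.]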
> >
> > I would suggest a generic notification mechanism instead, where
> > SEV code could register to be notified after the kernel/initrd is
> > loaded.
> >
> > Maybe the existing qemu_add_machine_init_done_notifier()
> > mechanism would be enough for this?  Is there a reason the hash
> > calculation needs to be done inside x86_load_linux(),
> > specifically?
> >
>
> SEV already registers that callback - sev_machine_done_notify, which
> calls sev_launch_get_measure.  We want the hashes page filled and
> encrypted *before* that measurement is taken by the PSP.  We can squeeze
> in the hashes and page encryption before the measurement *if* we can
> calculate the hashes at this point.
>
> x86_load_linux already deals with kernel/initrd/cmdline, so that was the
> easiest place to add the hash calculation.
>
> It's also a bit weird (semantically) to modify the guest RAM after
> pc_memory_init has finished.

Understood.

>
>
> We can add a new notifier towards the end of x86_load_linux, but can we
> avoid passing all the different pointers+sizes of kernel/initrd/cmdline
> to that notifier?  Do you envision other uses for registering on that event?

I expect other confidential guest technologies to do something very
similar, but I don't want to make you overengineer this. The generic
mechanism was just a suggestion.

Extracting the code to a single SEV-specific function would already be
an improvement. That would make the code easier to refactor if we
decide we need a generic mechanism in the future.
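[For readers unfamiliar with the mechanism Eduardo refers to: qemu_add_machine_init_done_notifier() is built on QEMU's generic Notifier list. The stripped-down sketch below shows only the pattern; the definitions are illustrative, not the real ones from QEMU's include/qemu/notify.h, which uses QLIST macros for linkage.]

```c
#include <stddef.h>

/* Minimal notifier-list pattern: parties register a Notifier, and the
 * owner fires every registered callback once the event occurs (e.g.
 * after the kernel/initrd have been loaded). */
typedef struct Notifier Notifier;

struct Notifier {
    void (*notify)(Notifier *notifier, void *data);
    Notifier *next; /* QEMU uses QLIST linkage instead of a bare pointer */
};

typedef struct NotifierList {
    Notifier *head;
} NotifierList;

static void notifier_list_add(NotifierList *list, Notifier *notifier)
{
    notifier->next = list->head;
    list->head = notifier;
}

static void notifier_list_notify(NotifierList *list, void *data)
{
    Notifier *n;

    for (n = list->head; n; n = n->next) {
        n->notify(n, data);
    }
}

/* Example callback: counts notifications via the opaque data pointer. */
static void count_notify(Notifier *notifier, void *data)
{
    (void)notifier;
    (*(int *)data)++;
}
```

[SEV's existing sev_machine_done_notify hooks into exactly this kind of list at machine-init-done time, which is why the ordering constraint Dov describes matters: the hashes must already be in place when that callback triggers the PSP measurement.]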

-- 
Eduardo



