> However, I am not sure if the in-guest migration helper vCPUs should use the
> existing KVM support code. For example, they probably can just always work
> with host CPUID (copied directly from KVM_GET_SUPPORTED_CPUID),
Doesn't at least one form of SEV have some masking of CPUID that's
visible to the guest, so perhaps we have to match the main vCPUs'
idea of CPUID?
I don't think we do. Whatever startup code runs on the migration helper can look at CPUID itself for purposes such as enabling AES instructions. It's a separate VM, and one that will never be migrated (it's started separately on the source and the destination).
> The migration helper can then also use its own address space, for example
> operating directly on ram_addr_t values with the helper running at very high
> virtual addresses. Migration code can use a RAMBlockNotifier to invoke
> KVM_SET_USER_MEMORY_REGION on the mirror VM (and never enable dirty memory
> logging on the mirror VM, too, which has better performance).
How does the use of a very high virtual address help?
Sorry, that should have said physical addresses: the code and any dedicated migration-helper RAM (including communication structures) would sit outside the range used by ram_addr_ts. (The virtual addresses, instead, can be chosen freely by the helper, since QEMU knows nothing about them.)
Paolo
> With this implementation, the number of mirror vCPUs does not even have to
> be indicated on the command line. The VM and its vCPUs can simply be
> created when migration starts. In the SEV-ES case, the guest can even
> provide the VMSA that starts the migration helper.
>
> The disadvantage is that, as you point out, in the future some of the
> infrastructure you introduce might be useful for VMPL0 operation on SEV-SNP.
> My proposal above might require some code duplication. However, it might
> even be that VMPL0 operation works best with a model more similar to my
> sketch of the migration helper; it's really too early to say.
>
Dave
> Paolo
>
--
Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK