Re: [PATCH v2 0/3] exclude hyperv synic sections from vhost
From: Dr. David Alan Gilbert
Subject: Re: [PATCH v2 0/3] exclude hyperv synic sections from vhost
Date: Mon, 13 Jan 2020 18:58:30 +0000
User-agent: Mutt/1.13.0 (2019-11-30)
* Paolo Bonzini (address@hidden) wrote:
> On 13/01/20 18:36, Dr. David Alan Gilbert (git) wrote:
> >
> > Hyperv's synic (that we emulate) is a feature that allows the guest
> > to place some magic (4k) pages of RAM anywhere it likes in GPA.
> > This confuses vhost's RAM section merging when these pages
> > land over the top of hugepages.
>
> Can you explain what the confusion is?  The memory API should just
> tell vhost to treat it as three sections (RAM before synIC, synIC
> region, RAM after synIC) and it's not clear to me why postcopy breaks
> either.
There are two separate problems:
a) For vhost-user there's a limited size for the 'mem table' message
   that carries the regions to the client; it's small, so an
   attempt is made to coalesce regions that all refer to the same
   underlying RAMBlock. If something splits a region up, you use more
   slots. (That's why the coalescing code was originally there.)
b) With postcopy + vhost-user, life gets more complex because of
   userfault. We require that the vhost-user client can mmap the
   memory areas at host-page granularity (i.e. hugepage granularity
   if it's hugepage-backed). To do that we tweak the aggregation code
   to align the blocks to page-size boundaries and then perform
   aggregation; as long as nothing else important gets in the way,
   we're OK.
In this case the guest is programming synic to land at the 512k
boundary (as 16 separate 4k pages next to each other). So we end
up with RAM at 0-512k (stretched to 0-2MB alignment); then we see
synic (512k, 512k+4k, ...); then we see RAM at 640k. When we try
to align that, we error out because we realise the synic mapping is
in the way and we can't merge the 640k RAM chunk with the base,
aligned 0-512k chunk.
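The coalescing in (a) can be sketched roughly like this. This is a
simplified, hypothetical model (the `Region` struct and `try_coalesce`
helper are illustrative stand-ins, not QEMU's actual code, which works
on MemoryRegionSections and RAMBlocks): two regions can share one
mem-table slot only if they sit in the same RAMBlock and are contiguous
in both GPA space and backing offset.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical, simplified view of a vhost memory region. */
typedef struct {
    uint64_t gpa;      /* guest physical start */
    uint64_t size;
    uint64_t offset;   /* offset within the backing RAMBlock */
    int      block_id; /* stand-in for "same underlying RAMBlock" */
} Region;

/* Merge b into a when both sit in the same RAMBlock and are
 * contiguous in both GPA space and backing offset, so a single
 * mem-table slot can describe the pair. */
static bool try_coalesce(Region *a, const Region *b)
{
    if (a->block_id != b->block_id) {
        return false;                 /* different backing block */
    }
    if (a->gpa + a->size != b->gpa ||
        a->offset + a->size != b->offset) {
        return false;                 /* not contiguous */
    }
    a->size += b->size;               /* one slot now covers both */
    return true;
}
```

Anything that splits a RAMBlock into non-contiguous pieces (such as an
overlay landing in the middle) defeats this merge and burns extra slots.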
Note the reported failure here is with kernel vhost, not vhost-user;
so it probably doesn't actually need the alignment, and vhost-user
would probably filter out the synic mappings anyway because they
haven't got an fd (vhost_user_mem_section_filter). But the alignment
code always runs.
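The alignment conflict in (b) can be sketched with the numbers from the
synic example above. Again this is an illustrative model, not QEMU's
code: each region's boundaries are stretched out to 2MB hugepage
boundaries, and two stretched regions that overlap can only coexist if
both are plain, mergeable RAM.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HPAGE (2ULL * 1024 * 1024)   /* 2MB hugepage, as in the example */

static uint64_t align_down(uint64_t x, uint64_t a) { return x & ~(a - 1); }
static uint64_t align_up(uint64_t x, uint64_t a)   { return (x + a - 1) & ~(a - 1); }

/* A region after its boundaries are stretched to hugepage alignment
 * so the vhost-user client can mmap it at host-page granularity. */
typedef struct {
    uint64_t start, end;
    bool     mergeable;   /* plain RAM, as opposed to a synic overlay */
} Stretched;

static Stretched stretch(uint64_t start, uint64_t size, bool mergeable)
{
    Stretched s = { align_down(start, HPAGE),
                    align_up(start + size, HPAGE),
                    mergeable };
    return s;
}

/* Overlapping stretched regions are fine only if both are mergeable
 * RAM; a non-mergeable overlay in the way is the error case. */
static bool conflicts(const Stretched *a, const Stretched *b)
{
    bool overlap = a->start < b->end && b->start < a->end;
    return overlap && !(a->mergeable && b->mergeable);
}
```

With RAM at 0-512k, synic pages from 512k, and RAM again at 640k, all
three stretch to the same 0-2MB hugepage, so the synic overlay conflicts
with both RAM chunks; excluding synic from the vhost set removes the
obstacle.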
Dave
> Paolo
>
> > Since they're not normal RAM, and they shouldn't have vhost DMAing
> > into them, exclude them from the vhost set.
>
--
Dr. David Alan Gilbert / address@hidden / Manchester, UK