From: David Hildenbrand
Subject: Re: [PATCH v1 1/3] softmmu/physmem: fallback to opening guest RAM file as readonly in a MAP_PRIVATE mapping
Date: Thu, 17 Aug 2023 17:43:43 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Thunderbird/102.13.0
On 17.08.23 17:31, Peter Xu wrote:
> On Thu, Aug 17, 2023 at 05:15:52PM +0200, David Hildenbrand wrote:
>> I don't know how important that requirement was (that commit was a request from Kata Containers).
>>
>> Let me take a look at whether Kata passes "share=on,readonly=on" or "share=off,readonly=off".
>
> The question is whether it's good enough if we change the semantics, as long as we guarantee the original purposes for which those flags were introduced (nvdimm, kata, etc.) still hold, since anything introduced in QEMU can potentially be used elsewhere too.
Right. And, apparently, we have to keep the R/O NVDIMM use case working as-is.
> David, could you share your concern about simply "having a new flag, while keeping all existing flags unchanged in behavior"? You mentioned it's not wanted, but I haven't yet seen the reason behind that.
I'm really having a hard time coming up with something reasonable to configure this. And apparently, we only want to configure "share=off,readonly=on".
The best I could imagine was "readonly=file-only", but I'm not too happy about that either; it doesn't make any sense for "share=on".
So if we could just let the memory backend do something reasonable and have the single existing consumer (R/O NVDIMM) handle the changed case explicitly internally, that turns out much cleaner.
IMHO, the user shouldn't have to worry about "how is it mmaped". "share" and "readonly" express the memory semantics and the file semantics.
An R/O NVDIMM, on the other hand (unarmed=on), knows that it is R/O, and the user configured exactly that. So maybe it can simply expose itself to the system as read-only by marking the memory region container as a ROM.
I have not given up yet, but this case is starting to get annoying.

--
Cheers,

David / dhildenb