From: Max Reitz
Subject: Re: [PATCH v2 0/2] block: Use 'read-zeroes=true' mode by default with 'null-co' driver
Date: Tue, 23 Feb 2021 09:44:51 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Thunderbird/78.7.0

On 22.02.21 19:15, Daniel P. Berrangé wrote:
On Fri, Feb 19, 2021 at 03:09:43PM +0100, Philippe Mathieu-Daudé wrote:
On 2/19/21 12:07 PM, Max Reitz wrote:
On 13.02.21 22:54, Fam Zheng wrote:
On 2021-02-11 15:26, Philippe Mathieu-Daudé wrote:
The null-co driver doesn't zeroize its buffer in the default config,
because it is designed for testing and tests want to run fast.
However, this confuses security researchers (reads return
uninitialized buffers).
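
For context, the read path in question follows roughly this pattern. This is a minimal sketch loosely based on block/null.c; treat the exact names (null_co_preadv, BDRVNullState, read_zeroes) as approximations of the code at the time:

    /* Sketch of the null-co read path: with read-zeroes off, the request
     * completes without ever writing to the guest's buffer. */
    static coroutine_fn int null_co_preadv(BlockDriverState *bs,
                                           uint64_t offset, uint64_t bytes,
                                           QEMUIOVector *qiov, int flags)
    {
        BDRVNullState *s = bs->opaque;

        if (s->read_zeroes) {
            /* Only in this mode is the destination buffer filled. */
            qemu_iovec_memset(qiov, 0, 0, bytes);
        }

        return 0;   /* the real driver also honours a configurable latency */
    }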

I'm a little surprised.

Is changing the default the only way to fix this? I'm not opposed to
changing the default, but I'm not convinced this is the easiest way.
block/nvme.c also doesn't touch the memory, but defers to the device's
DMA; why doesn't that confuse the security checker?

Generally speaking, there is a balance between security and performance.
We try to provide both, but when we can't, my understanding is that
security is more important.

Customers expect a secure product. If they prefer performance at the
price of security, that is also possible by enabling an option that is
not the default.

I'm not sure why you mention block/nvme here. My understanding is that
the null-co driver is only useful for testing. Are there production
cases where null-co is used?
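
To make the opt-in concrete, the performance-oriented configuration would then have to be requested explicitly, along these lines (a hypothetical invocation; the null-co driver does take a read-zeroes boolean, but treat the exact spelling as an assumption):

    -blockdev driver=null-co,node-name=null0,size=1073741824,read-zeroes=off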

Do we have any real-world figures for the performance of null-co
with and without zeroing? Before worrying about a trade-off of
security vs. performance, it'd be good to know if there is actually
a real-world performance problem in the first place. Personally I'd
go for zeroing by default unless the performance hit was really
bad.
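
One way to get such figures would be to time the two modes back to back with qemu-img bench. A sketch, assuming --image-opts is accepted here and that the option spelling matches the driver:

    # read-heavy run against the null driver, zeroing disabled
    qemu-img bench -c 1000000 --image-opts driver=null-co,size=1G,read-zeroes=off

    # same run with zeroing enabled
    qemu-img bench -c 1000000 --image-opts driver=null-co,size=1G,read-zeroes=on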

AFAIU, null-co is only used for testing, be it to just create some block nodes in the iotests, or perhaps for performance testing where you want to get the minimal roundtrip time through the block layer. So there is no "real world performance problem", because there is no real world use of null-co or null-aio. At least there shouldn’t be.

That raises the question of whether read-zeroes=off even makes sense, and I think it absolutely does.

In cases where we have a test that just wants a simple block node that doesn’t use disk space, the memset() can’t be noticeable. But it’s just a test, so do we even need the memset()? Strictly speaking, perhaps not, but if someone runs it under Valgrind or a similar tool, they may get false positives, so just doing the memset() is the right thing to do.
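
To make the false-positive concern concrete, here is a hypothetical stand-alone C illustration (not QEMU code) of the kind of report this produces: a branch that depends on a buffer the "device" never wrote to.

    /* valgrind_demo.c -- hypothetical illustration, not QEMU code.
     * Build with `cc valgrind_demo.c` and run under `valgrind ./a.out`. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        unsigned char *buf = malloc(512); /* stands in for a guest read buffer */

        /* With read-zeroes=off, the "read" returns without touching buf,
         * so this branch depends on uninitialized memory; memcheck reports
         * "Conditional jump or move depends on uninitialised value(s)". */
        if (buf && buf[0] == 0) {
            puts("first byte is zero");
        }

        free(buf);
        return 0;
    }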

For performance tests, it must be possible to set read-zeroes=off, because even though “that memset() isn’t noticeable in a functional test”, in a hard-core performance test, it will be.

So we need a switch. It should default to memset(), because (1) making tools like Valgrind happy seems like a reasonable objective to me, and (2) in the majority of cases, the memset() cannot have a noticeable impact.

Max



