
Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support


From: Ladi Prosek
Subject: Re: [Qemu-devel] [PATCH v4 0/8] KASLR kernel dump support
Date: Wed, 19 Jul 2017 08:06:58 +0200

On Tue, Jul 18, 2017 at 6:18 PM, Marc-André Lureau
<address@hidden> wrote:
> Hi
>
> On Tue, Jul 18, 2017 at 6:05 PM Ladi Prosek <address@hidden> wrote:
>>
>>
>> > I would like to hear from Ladi how he intended to use the device in
>> > the future, and if he would also prefer ACPI methods and what those
>> > methods should be.
>>
>> We should be able to drive pretty much anything from Windows. I wrote
>> a dummy driver for your earlier prototype just to be sure that ACPI
>> methods are fine, as I had not done or seen that before.
>>
>> There are constraints which may be unique to Windows, though. If the
>> dump-support data is kept in guest-allocated memory, the guest must be
>> able to revoke it (set / call the method with NULL PA?) because
>> Windows drivers must free everything on unload. Unlike Linux, I don't
>
>
> Well, the currently proposed vmcoreinfo device has a 4k memory region where
> you can put anything you want; Windows shouldn't be allowed to touch it
> directly (it's marked reserved in the e820 map, iirc).

Got it. And we would likely put just an addr/size pair there to make
updates atomic. I think we're supposed to update our dump-support data
on memory hot-plug, for example.
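Purely as a sketch of what I have in mind, assuming the device keeps its 4k
region and the guest only stores a pair in it; the struct layout, names and
write ordering below are my own assumptions, not anything the proposed
device defines:

#include <stdint.h>

/* The descriptor kept inside the device's 4k region: just a pair. */
struct vmci_desc {
    volatile uint64_t paddr;   /* guest-physical address of the dump-support data */
    volatile uint64_t size;    /* size in bytes; 0 means "nothing published" */
};

/*
 * Re-publish the pair, e.g. after memory hot-plug forces the dump-support
 * data to be rebuilt.  Clearing size first and writing it back last means
 * the host never sees a paddr/size combination that was not valid at some
 * point (a real implementation would add the appropriate write barriers).
 */
static void vmci_publish(struct vmci_desc *d, uint64_t paddr, uint64_t size)
{
    d->size  = 0;
    d->paddr = paddr;
    d->size  = size;
}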

>> think we can get a piece of memory at startup and keep it for as long
>> as the OS is running; it would be flagged as a memory leak. But that
>> should be easy to have. Can't really think of anything else.
>
>
> The question is what kind of data you want to give to the host to help with
> debugging. Is this something that can be as simple as addr/size, or would you
> rather have a 4k page to put various things in?

The size of the dump header as currently provided by Windows (that's
the dump-support data we want to pass to the host) is 4k, so I
wouldn't put it directly in the page anyway. That, plus the desire to
update the data at least somewhat atomically, makes me prefer a
simple addr/size pair, I think.
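To make the lifecycle concrete, here is roughly the load/unload flow I would
expect, including the revoke-on-unload constraint mentioned above. Again just
a sketch: the helper names are made up, and a real Windows driver would use a
nonpaged, physically contiguous allocation plus MmGetPhysicalAddress rather
than calloc() and a stand-in virt_to_phys():

#include <stdint.h>
#include <stdlib.h>

#define DUMP_HDR_SIZE 4096             /* size of the Windows-provided dump header */

struct vmci_desc {                     /* same hypothetical pair as in the earlier sketch */
    volatile uint64_t paddr;
    volatile uint64_t size;
};

extern uint64_t virt_to_phys(void *va);   /* stand-in for the platform's VA->PA lookup */

static void *dump_hdr;

/* Build the 4k dump header in guest memory and publish its location. */
static int dump_info_load(struct vmci_desc *desc)
{
    dump_hdr = calloc(1, DUMP_HDR_SIZE);   /* fill in the real header here */
    if (!dump_hdr)
        return -1;
    desc->paddr = virt_to_phys(dump_hdr);
    desc->size  = DUMP_HDR_SIZE;           /* publish last */
    return 0;
}

/* Driver unload: revoke the pair first, only then free the memory. */
static void dump_info_unload(struct vmci_desc *desc)
{
    desc->size  = 0;
    desc->paddr = 0;
    free(dump_hdr);
    dump_hdr = NULL;
}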

>>
>>
>> >>>
>> >>> >> > instead of exporting a physical address and storing the address there.
>> >>> >> > This way you can add more methods as you add functionality.
>> >>> >>
>> >>> >> I'm not saying this is a bad idea (especially because I don't fully
>> >>> >> understand your point), but I will say that I'm quite sad that you are
>> >>> >> sending Marc-André back to the drawing board after he posted v4 --
>> >>> >> also invalidating my review efforts. :/
>> >>> >>
>> >>> >> Laszlo
>> >>> >
>> >>> > You are right, I should have looked at this sooner. The early RFC
>> >>> > suggested writing into fw_cfg directly. I couldn't find any
>> >>> > discussion around this - why was it abandoned?
>> >>>
>> >>> Violation (or rather abuse) of layers iirc
>> >>
>> >> Hmm. I tried to find discussion about that and failed. It is available
>> >> through QEMU0002 in ACPI - would it be OK if guests went through that?
>> >
>> > I wouldn't mind, although merging functionality into a single device
>> > isn't what I would think of first. I guess Ladi would be happier with
>> > a single device. I suppose it shouldn't break drivers if we added
>> > memory, I/O, methods, etc. to the device?
>>
>> Yeah, it would be nice if this were part of pvpanic. Even something
>> really simple like:
>>
>>  /* The bit of supported pv event */
>>  #define PVPANIC_F_PANICKED      0
>> +#define PVPANIC_F_DUMP_INFO_SET      1
>
>
> QEMU0002 is fw_cfg

Ah, sorry, I got confused.

>>
>>
>> -    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 1);
>> +    memory_region_init_io(&s->io, OBJECT(s), &pvpanic_ops, s, "pvpanic", 32); // or whatever
>>
>> The guest writes to two or three registers (PA, size, maybe type?), then
>> sets the PVPANIC_F_DUMP_INFO_SET bit.
>>
>> Although I'm not sure if extending the I/O region is OK. And of course I
>> only need this on x86 :p
>>
>
> I would rather have a solution that works on various archs. It's a shame
> pvpanic was designed with only x86 in mind, imho.

Understood. Thanks!
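For reference, this is roughly the register protocol I had pictured in the
snippet quoted above. Everything beyond the existing PVPANIC_F_PANICKED bit
and the 0x505 port is hypothetical; it only shows the shape of the guest-side
writes, not a proposal for an actual layout:

#include <stdint.h>

/* Existing pvpanic bit, plus the proposed one. */
#define PVPANIC_F_PANICKED        0
#define PVPANIC_F_DUMP_INFO_SET   1

/* Hypothetical register offsets inside the enlarged pvpanic I/O region. */
#define PVPANIC_IOPORT            0x505
#define PVPANIC_REG_EVENT         0x00
#define PVPANIC_REG_DUMP_PA       0x08
#define PVPANIC_REG_DUMP_SIZE     0x10
#define PVPANIC_REG_DUMP_TYPE     0x18

/* Port I/O helpers, declared here only for illustration; a real driver
 * would use its platform's own outb()/outl() equivalents. */
extern void outb(uint16_t port, uint8_t val);
extern void outl(uint16_t port, uint32_t val);

static void out64(uint16_t port, uint64_t val)
{
    /* x86 port I/O has no 64-bit access, so split into two 32-bit writes. */
    outl(port, (uint32_t)val);
    outl(port + 4, (uint32_t)(val >> 32));
}

static void pvpanic_set_dump_info(uint64_t pa, uint64_t size, uint8_t type)
{
    out64(PVPANIC_IOPORT + PVPANIC_REG_DUMP_PA, pa);
    out64(PVPANIC_IOPORT + PVPANIC_REG_DUMP_SIZE, size);
    outb(PVPANIC_IOPORT + PVPANIC_REG_DUMP_TYPE, type);
    /* Commit by setting the "dump info set" event bit. */
    outb(PVPANIC_IOPORT + PVPANIC_REG_EVENT, 1u << PVPANIC_F_DUMP_INFO_SET);
}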

>
> --
> Marc-André Lureau


