
From: Christian Borntraeger
Subject: Re: [Qemu-devel] [PATCH 3/3] i386/kvm: initialize struct at full before ioctl call
Date: Wed, 31 Jul 2019 14:28:17 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.7.2


On 31.07.19 14:04, Andrey Shinkevich wrote:
> On 31/07/2019 10:24, Christian Borntraeger wrote:
>>
>>
>> On 30.07.19 21:20, Paolo Bonzini wrote:
>>> On 30/07/19 18:01, Andrey Shinkevich wrote:
>>>> The structure is not fully initialized before being passed to the KVM
>>>> ioctl. Zero it out to reduce the number of Valgrind reports.
>>>>
>>>> Signed-off-by: Andrey Shinkevich <address@hidden>
>>>
>>> Christian, is this the right fix?  It's not expensive so it wouldn't be
>>> an issue, just checking if there's any better alternative.
>>
>> I think all of these variants are valid, with pros and cons:
>> 1. teach valgrind about this:
>> Add to coregrind/m_syswrap/syswrap-linux.c (and the relevant header files)
>> knowledge about which parts of the structure the ioctl actually touches.
>> 2. use designated initializers
>> 3. use memset
>> 4. use a valgrind client request, VG_USERREQ__MAKE_MEM_DEFINED, to tell
>> Memcheck that this memory is defined
>>
> 
> Thank you all very much for taking part in the discussion.
> Also, one may use Valgrind's suppression mechanism to silence the 
> unwanted reports by adding a suppression file, valgrind.supp, in 
> Valgrind's specific format to the QEMU project. The file can be 
> extended as future needs arise.
> Every case we want to suppress is listed in that file.
> Each entry names a stack fragment. For instance, from the QEMU block layer:
> 
> {
>     hw/block/hd-geometry.c
>     Memcheck:Cond
>     fun:guess_disk_lchs
>     fun:hd_geometry_guess
>     fun:blkconf_geometry
>     ...
>     fun:device_set_realized
>     fun:property_set_bool
>     fun:object_property_set
>     fun:object_property_set_qobject
>     fun:object_property_set_bool
> }
> 
> The number of suppressed cases is reported by Valgrind with every 
> run: "ERROR SUMMARY: 5 errors from 3 contexts (suppressed: 0 from 0)"
> 
> Andrey
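For reference, such a suppression file takes effect only when passed on the Valgrind command line. The invocation below is an illustrative sketch; the path `tests/valgrind.supp` is an assumption for where QEMU might keep the file, not an established location in the tree.

```shell
# Run a QEMU binary under Memcheck with the project suppression file.
# tests/valgrind.supp is a hypothetical path; adjust to wherever the
# file ends up in the tree.
valgrind --tool=memcheck \
         --suppressions=tests/valgrind.supp \
         qemu-system-x86_64 -machine accel=kvm ...
```

Passing `--gen-suppressions=all` to Valgrind prints ready-made suppression entries for any remaining reports, which is the easiest way to extend the file.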

Yes, indeed that would be another variant. How performance critical are
the fixed locations? That might have an impact on what the best solution is.
From a cleanliness standpoint, doing 1 (adding the ioctl definition to valgrind)
is certainly the most beautiful way. I did that in the past; see for example

https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=c2baee9b7bf043702c130de0771a4df439fcf403
or 
https://sourceware.org/git/?p=valgrind.git;a=commitdiff;h=00a31dd3d1e7101b331c2c83fca6c666ba35d910


> 
>>>
>>> Paolo
>>>
>>>> ---
>>>>   target/i386/kvm.c | 3 +++
>>>>   1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/target/i386/kvm.c b/target/i386/kvm.c
>>>> index dbbb137..ed57e31 100644
>>>> --- a/target/i386/kvm.c
>>>> +++ b/target/i386/kvm.c
>>>> @@ -190,6 +190,7 @@ static int kvm_get_tsc(CPUState *cs)
>>>>           return 0;
>>>>       }
>>>>   
>>>> +    memset(&msr_data, 0, sizeof(msr_data));
>>>>       msr_data.info.nmsrs = 1;
>>>>       msr_data.entries[0].index = MSR_IA32_TSC;
>>>>       env->tsc_valid = !runstate_is_running();
>>>> @@ -1706,6 +1707,7 @@ int kvm_arch_init_vcpu(CPUState *cs)
>>>>   
>>>>       if (has_xsave) {
>>>>           env->xsave_buf = qemu_memalign(4096, sizeof(struct kvm_xsave));
>>>> +        memset(env->xsave_buf, 0, sizeof(struct kvm_xsave));
>>>>       }
>>>>   
>>>>       max_nested_state_len = kvm_max_nested_state_length();
>>>> @@ -3477,6 +3479,7 @@ static int kvm_put_debugregs(X86CPU *cpu)
>>>>           return 0;
>>>>       }
>>>>   
>>>> +    memset(&dbgregs, 0, sizeof(dbgregs));
>>>>       for (i = 0; i < 4; i++) {
>>>>           dbgregs.db[i] = env->dr[i];
>>>>       }
>>>> -- 
>>>> 1.8.3.1
>>>>
>>>
>>>
>>
> 



