qemu-ppc
Re: [PULL 00/30] ppc-for-5.2 queue 20200904


From: Laurent Vivier
Subject: Re: [PULL 00/30] ppc-for-5.2 queue 20200904
Date: Mon, 7 Sep 2020 15:29:36 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.11.0

On 07/09/2020 04:38, David Gibson wrote:
> On Sun, Sep 06, 2020 at 04:20:10PM +0100, Peter Maydell wrote:
>> On Fri, 4 Sep 2020 at 04:47, David Gibson <david@gibson.dropbear.id.au> wrote:
>>>
>>> The following changes since commit 67a7bfe560a1bba59efab085cb3430f45176d382:
>>>
>>>   Merge remote-tracking branch 'remotes/huth-gitlab/tags/pull-request-2020-09-03' into staging (2020-09-03 16:58:25 +0100)
>>>
>>> are available in the Git repository at:
>>>
>>>   git://github.com/dgibson/qemu.git tags/ppc-for-5.2-20200904
>>>
>>> for you to fetch changes up to b172606ecf29a140073f7787251a9d70ecb53b6e:
>>>
>>>   spapr_numa: move NVLink2 associativity handling to spapr_numa.c (2020-09-04 13:40:09 +1000)
>>>
>>> ----------------------------------------------------------------
>>> ppc patch queue 2020-09-04
>>>
>>> Next pull request for qemu-5.2.  The biggest thing here is the
>>> generalization of ARM's start-powered-off machine property to all
>>> targets.  This can fix a number of odd little edge cases where KVM
>>> could run vcpus before they were properly initialized.  This does
>>> include changes to a number of files that aren't normally in my
>>> purview.  There are suitable Acked-by lines and Peter requested this
>>> come in via my tree, since the most pressing requirement for it is in
>>> pseries machines with the POWER secure virtual machine facility.
>>>
>>> In addition we have:
>>>  * The start of Daniel Barboza's rework and clean up of pseries
>>>    machine NUMA handling
>>>  * Correction to behaviour of the nvdimm= generic machine property on
>>>    pseries
>>>  * An optimization to the allocation of XIVE interrupts on KVM
>>>  * Some fixes for confused behaviour with kernel_irqchip when both
>>>    XICS and XIVE are in play
>>>  * Add HIOMAP command to pnv flash
>>>  * Properly advertise the fact that spapr_vscsi doesn't handle
>>>    hotplugged disks
>>>  * Some assorted minor enhancements
>>
>> Hi -- this fails to build for Windows:
>>
>> ../../hw/ppc/spapr_numa.c: In function 'spapr_numa_fixup_cpu_dt':
>> ../../hw/ppc/spapr_numa.c:77:5: error: unknown type name 'uint'
>>      uint vcpu_assoc_size = NUMA_ASSOC_SIZE + 1;
>>      ^
> 
> Huh, that's weird.  My testing run was less thorough than I'd usually
> do, because so many tests were broken on the master branch, but I was
> pretty sure I did do successful mingw builds.
> 
>> That should probably be using one of the standard C types.
> 
> Done.
> 
>> The 'check-tcg' tests for the linux-user static build also
>> failed on an s390x test:
>>
>>   CHECK   debian-s390x-cross
>>   BUILD   s390x-linux-user guest-tests with docker qemu/debian-s390x-cross
>>   RUN     tests for s390x
>>   TEST    threadcount on s390x
>> Unhandled trap: 0x10003

This is EXCP_HALTED (include/exec/cpu-all.h).

The error message comes from cpu_loop() in linux-user/s390x/cpu_loop.c.

The trap can only come from accel/tcg/cpu-exec.c:

    679 int cpu_exec(CPUState *cpu)
    680 {
...
    688     if (cpu_handle_halt(cpu)) {
    689         return EXCP_HALTED;
    690     }

and

    428 static inline bool cpu_handle_halt(CPUState *cpu)
    429 {
    430     if (cpu->halted) {
...
    441         if (!cpu_has_work(cpu)) {
    442             return true;
    443         }

and

     58 static bool s390_cpu_has_work(CPUState *cs)
     59 {
     60     S390CPU *cpu = S390_CPU(cs);
     61
     62     /* STOPPED cpus can never wake up */
     63     if (s390_cpu_get_state(cpu) != S390_CPU_STATE_LOAD &&
     64         s390_cpu_get_state(cpu) != S390_CPU_STATE_OPERATING) {
     65         return false;
     66     }
     67
     68     if (!(cs->interrupt_request & CPU_INTERRUPT_HARD)) {
     69         return false;
     70     }
     71
     72     return s390_cpu_has_int(cpu);
     73 }

and in target/s390x/cpu.h:

    772 #ifndef CONFIG_USER_ONLY
    773 unsigned int s390_cpu_set_state(uint8_t cpu_state, S390CPU *cpu);
    774 #else
    775 static inline unsigned int s390_cpu_set_state(uint8_t cpu_state,
                                                      S390CPU *cpu)
    776 {
    777     return 0;
    778 }
    779 #endif /* CONFIG_USER_ONLY */
    780 static inline uint8_t s390_cpu_get_state(S390CPU *cpu)
    781 {
    782     return cpu->env.cpu_state;
    783 }

As cpu_state is never set in linux-user mode, perhaps s390_cpu_get_state()
should always return S390_CPU_STATE_OPERATING there?

Something like:

diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index 035427521cec..8a8628fcdcc6 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -771,16 +771,20 @@ int s390_assign_subch_ioeventfd(EventNotifier *notifier, uint32_t sch_id,
                                 int vq, bool assign);
 #ifndef CONFIG_USER_ONLY
 unsigned int s390_cpu_set_state(uint8_t cpu_state, S390CPU *cpu);
+static inline uint8_t s390_cpu_get_state(S390CPU *cpu)
+{
+    return cpu->env.cpu_state;
+}
 #else
 static inline unsigned int s390_cpu_set_state(uint8_t cpu_state,
                                               S390CPU *cpu)
 {
     return 0;
 }
-#endif /* CONFIG_USER_ONLY */
 static inline uint8_t s390_cpu_get_state(S390CPU *cpu)
 {
-    return cpu->env.cpu_state;
+    return S390_CPU_STATE_OPERATING;
 }
+#endif /* CONFIG_USER_ONLY */

Thanks,
Laurent

>> PSW=mask 0000000180000000 addr 00000000010004f0 cc 00
>> R00=0000000000000000 R01=0000000000000000 R02=0000000000000000 R03=0000000000000000
>> R04=0000000000000000 R05=0000000000000000 R06=0000000000000000 R07=0000000000000000
>> R08=0000000000000000 R09=0000000000000000 R10=0000000000000000 R11=0000000000000000
>> R12=0000000000000000 R13=0000000000000000 R14=0000000000000000 R15=00000040008006c0
>>
>> ../Makefile.target:153: recipe for target 'run-threadcount' failed
>> make[2]: *** [run-threadcount] Error 1
> 
> Bother.  I did see that failure on Travis, but assumed it was a false
> positive because there were so many failures on master there.
> 



