From: Cédric Le Goater
Subject: Re: [PATCH for-5.0 v5 14/23] ppc/spapr: Implement the XiveFabric interface
Date: Thu, 21 Nov 2019 07:56:32 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.1.1

On 20/11/2019 18:53, Greg Kurz wrote:
> On Fri, 15 Nov 2019 17:24:27 +0100
> Cédric Le Goater <address@hidden> wrote:
> 
>> The CAM line matching sequence in the pseries machine does not change
>> much apart from the use of the new QOM interfaces. There is an extra
>> indirection because of the sPAPR IRQ backend of the machine. Only the
>> XIVE backend implements the new 'match_nvt' handler.
>>
> 
> The changelog needs an update since you dropped the indirection you had
> in v4.

Indeed.

> 
>> Signed-off-by: Cédric Le Goater <address@hidden>
>> ---
>>  hw/ppc/spapr.c | 36 ++++++++++++++++++++++++++++++++++++
>>  1 file changed, 36 insertions(+)
>>
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index 94f9d27096af..a8f5850f65bb 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -4270,6 +4270,39 @@ static void spapr_pic_print_info(InterruptStatsProvider *obj,
>>                     kvm_irqchip_in_kernel() ? "in-kernel" : "emulated");
>>  }
>>  
>> +static int spapr_xive_match_nvt(XiveFabric *xfb, uint8_t format,
>> +                                uint8_t nvt_blk, uint32_t nvt_idx,
>> +                                bool cam_ignore, uint8_t priority,
>> +                                uint32_t logic_serv, XiveTCTXMatch *match)
>> +{
>> +    SpaprMachineState *spapr = SPAPR_MACHINE(xfb);
>> +    XivePresenter *xptr = XIVE_PRESENTER(spapr->xive);
>> +    XivePresenterClass *xpc = XIVE_PRESENTER_GET_CLASS(xptr);
>> +    int count;
>> +
> 
> As suggested by David, you should probably assert() that XIVE is in use
> for extra paran^Wsafety.

I don't see the need. The call stack is clear enough IMO: this handler can
only be reached from the XiveRouter.
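
For reference, the guard suggested above would amount to a single assert at
the top of the handler. A minimal sketch, not part of the posted patch, with
hypothetical placement:

    static int spapr_xive_match_nvt(XiveFabric *xfb, uint8_t format,
                                    uint8_t nvt_blk, uint32_t nvt_idx,
                                    bool cam_ignore, uint8_t priority,
                                    uint32_t logic_serv, XiveTCTXMatch *match)
    {
        SpaprMachineState *spapr = SPAPR_MACHINE(xfb);

        /* Suggested (hypothetical) guard: the machine must be using the
         * XIVE interrupt backend, since spapr->xive is dereferenced below. */
        assert(spapr->xive);

        XivePresenter *xptr = XIVE_PRESENTER(spapr->xive);
        /* ... rest of the handler as in the patch below ... */
    }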

Thanks,

C. 

> With these fixed,
> 
> Reviewed-by: Greg Kurz <address@hidden>
> 
>> +    count = xpc->match_nvt(xptr, format, nvt_blk, nvt_idx, cam_ignore,
>> +                           priority, logic_serv, match);
>> +    if (count < 0) {
>> +        return count;
>> +    }
>> +
>> +    /*
>> +     * When we implement the save and restore of the thread interrupt
>> +     * contexts in the enter/exit CPU handlers of the machine and the
>> +     * escalations in QEMU, we should be able to handle non dispatched
>> +     * vCPUs.
>> +     *
>> +     * Until this is done, the sPAPR machine should find at least one
>> +     * matching context always.
>> +     */
>> +    if (count == 0) {
>> +        qemu_log_mask(LOG_GUEST_ERROR, "XIVE: NVT %x/%x is not dispatched\n",
>> +                      nvt_blk, nvt_idx);
>> +    }
>> +
>> +    return count;
>> +}
>> +
>>  int spapr_get_vcpu_id(PowerPCCPU *cpu)
>>  {
>>      return cpu->vcpu_id;
>> @@ -4366,6 +4399,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
>>      PPCVirtualHypervisorClass *vhc = PPC_VIRTUAL_HYPERVISOR_CLASS(oc);
>>      XICSFabricClass *xic = XICS_FABRIC_CLASS(oc);
>>      InterruptStatsProviderClass *ispc = INTERRUPT_STATS_PROVIDER_CLASS(oc);
>> +    XiveFabricClass *xfc = XIVE_FABRIC_CLASS(oc);
>>  
>>      mc->desc = "pSeries Logical Partition (PAPR compliant)";
>>      mc->ignore_boot_device_suffixes = true;
>> @@ -4442,6 +4476,7 @@ static void spapr_machine_class_init(ObjectClass *oc, void *data)
>>      smc->linux_pci_probe = true;
>>      smc->smp_threads_vsmt = true;
>>      smc->nr_xirqs = SPAPR_NR_XIRQS;
>> +    xfc->match_nvt = spapr_xive_match_nvt;
>>  }
>>  
>>  static const TypeInfo spapr_machine_info = {
>> @@ -4460,6 +4495,7 @@ static const TypeInfo spapr_machine_info = {
>>          { TYPE_PPC_VIRTUAL_HYPERVISOR },
>>          { TYPE_XICS_FABRIC },
>>          { TYPE_INTERRUPT_STATS_PROVIDER },
>> +        { TYPE_XIVE_FABRIC },
>>          { }
>>      },
>>  };
> 
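
For context on the call path: the only caller of the XiveFabric 'match_nvt'
hook is the presenter-matching code on the XiveRouter side. A sketch of that
caller, modeled on the shape of xive_presenter_notify() from hw/intc/xive.c
in this same series (an approximation, not a verbatim quote):

    static bool xive_presenter_notify(XiveFabric *xfb, uint8_t format,
                                      uint8_t nvt_blk, uint32_t nvt_idx,
                                      bool cam_ignore, uint8_t priority,
                                      uint32_t logic_serv)
    {
        XiveFabricClass *xfc = XIVE_FABRIC_GET_CLASS(xfb);
        XiveTCTXMatch match = { .tctx = NULL, .ring = 0 };
        int count;

        /* Ask the machine (the XiveFabric) for a matching thread context */
        count = xfc->match_nvt(xfb, format, nvt_blk, nvt_idx, cam_ignore,
                               priority, logic_serv, &match);
        if (count < 0) {
            return false;
        }

        /* Found a matching TCTX: update its IPB to raise the interrupt */
        if (count) {
            xive_tctx_ipb_update(match.tctx, match.ring,
                                 priority_to_ipb(priority));
        }

        return !!count;
    }

This is the basis of the "only reachable from the XiveRouter" argument above:
the fabric hook fires only once the router is already resolving a XIVE
notification, so a non-XIVE configuration never gets here.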



