
From: Cédric Le Goater
Subject: Re: [Qemu-devel] [PATCH v6 4/4] spapr: increase the size of the IRQ number space
Date: Thu, 2 Aug 2018 17:59:55 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1

On 08/02/2018 04:47 PM, Greg Kurz wrote:
> On Mon, 30 Jul 2018 16:11:34 +0200
> Cédric Le Goater <address@hidden> wrote:
> 
>> The new layout using static IRQ number does not leave much space to
>> the dynamic MSI range, only 0x100 IRQ numbers. Increase the total
>> number of IRQS for newer machines and introduce a legacy XICS backend
>> for pre-3.1 machines to maintain compatibility.
>>
>> Signed-off-by: Cédric Le Goater <address@hidden>
>> ---
>>  include/hw/ppc/spapr_irq.h |  1 +
>>  hw/ppc/spapr.c             |  1 +
>>  hw/ppc/spapr_irq.c         | 12 +++++++++++-
>>  3 files changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/include/hw/ppc/spapr_irq.h b/include/hw/ppc/spapr_irq.h
>> index 0e98c4474bb2..626160ba475e 100644
>> --- a/include/hw/ppc/spapr_irq.h
>> +++ b/include/hw/ppc/spapr_irq.h
>> @@ -40,6 +40,7 @@ typedef struct sPAPRIrq {
>>  } sPAPRIrq;
>>  
>>  extern sPAPRIrq spapr_irq_xics;
>> +extern sPAPRIrq spapr_irq_xics_legacy;
>>  
>>  int spapr_irq_claim(sPAPRMachineState *spapr, int irq, bool lsi, Error **errp);
>>  void spapr_irq_free(sPAPRMachineState *spapr, int irq, int num);
>> diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
>> index d9f8cca49208..5ae62b0682d2 100644
>> --- a/hw/ppc/spapr.c
>> +++ b/hw/ppc/spapr.c
>> @@ -3947,6 +3947,7 @@ static void spapr_machine_3_0_class_options(MachineClass *mc)
>>      SET_MACHINE_COMPAT(mc, SPAPR_COMPAT_3_0);
>>  
>>      smc->legacy_irq_allocation = true;
>> +    smc->irq = &spapr_irq_xics_legacy;
>>  }
>>  
>>  DEFINE_SPAPR_MACHINE(3_0, "3.0", false);
>> diff --git a/hw/ppc/spapr_irq.c b/hw/ppc/spapr_irq.c
>> index 0cbb5dd39368..620c49b38455 100644
>> --- a/hw/ppc/spapr_irq.c
>> +++ b/hw/ppc/spapr_irq.c
>> @@ -196,7 +196,7 @@ static void spapr_irq_print_info_xics(sPAPRMachineState *spapr, Monitor *mon)
>>  }
>>  
>>  sPAPRIrq spapr_irq_xics = {
>> -    .nr_irqs     = XICS_IRQS_SPAPR,
>> +    .nr_irqs     = 0x1000,
> 
> IMHO using XICS_IRQS_SPAPR as the total number of MSIs for the whole
> machine was bogus, since the DT also advertises this same number of
> available MSIs per PHB:
> 
> *** hw/ppc/spapr_pci.c:
> spapr_populate_pci_dt[2126]
> 
>     _FDT(fdt_setprop_cell(fdt, bus_off, "ibm,pe-total-#msi", XICS_IRQS_SPAPR));
> 
> Even if you bump the limit from 1024 to 4096, we still have a discrepancy
> between what we tell the guest and what the machine can actually do.

Yes, but that is a separate problem that this patch is not trying to 
solve. This patch only increases the total number of IRQs so that a 
few more MSIs can be allocated at the machine level.

> I'm wondering if we should take into account the number of possible
> PHBs when initializing the bitmap allocator, ie, .nr_irqs should
> rather be SPAPR_MAX_PHBS * XICS_IRQS_SPAPR ?

XICS_IRQS_SPAPR is a machine-level number, and it is a little more 
complex than that. Something like:

        SPAPR_IRQ_MSI - XICS_IRQ_BASE + (max_phbs * max_msis_per_phb)


C.

>>  
>>      .init        = spapr_irq_init_xics,
>>      .claim       = spapr_irq_claim_xics,
>> @@ -284,3 +284,13 @@ int spapr_irq_find(sPAPRMachineState *spapr, int num, bool align, Error **errp)
>>  
>>      return first + ics->offset;
>>  }
>> +
>> +sPAPRIrq spapr_irq_xics_legacy = {
>> +    .nr_irqs     = XICS_IRQS_SPAPR,
>> +
>> +    .init        = spapr_irq_init_xics,
>> +    .claim       = spapr_irq_claim_xics,
>> +    .free        = spapr_irq_free_xics,
>> +    .qirq        = spapr_qirq_xics,
>> +    .print_info  = spapr_irq_print_info_xics,
>> +};
> 