From: Gleb Natapov
Subject: [Qemu-devel] Re: [PATCH] pass info about hpets to seabios.
Date: Sun, 13 Jun 2010 20:19:15 +0300

On Sun, Jun 13, 2010 at 06:56:37PM +0200, Jan Kiszka wrote:
> Gleb Natapov wrote:
> > Currently the HPET ACPI table is created regardless of whether qemu
> > actually created an hpet device. This may confuse some guests that
> > don't check that the hpet is functional before using it. Solve this by
> > passing info about hpets in qemu to seabios via the fw config
> > interface. An additional benefit is that seabios no longer uses a
> > hard-coded hpet configuration. The proposed interface supports up to
> > 256 hpets, which is the number defined by the hpet spec.
> 
> Nice, this lays the groundwork for adding hpets via -device.
> 
> (But I think I read there can only be 8 hpets, with a total of 256
> timers.)
> 
Ah, correct. I thought to myself that 256 hpets would be too much :)
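
For reference, the firmware side would pull this through the regular
fw_cfg I/O ports. Roughly like the sketch below -- this is not seabios
code; the helper and function names are mine, only the 0x510/0x511
selector/data ports and FW_CFG_ARCH_LOCAL == 0x8000 are the standard
fw_cfg values (seabios-style u8/u16 types assumed):

    #define QEMU_CFG_CTL_PORT   0x510   /* fw_cfg selector port */
    #define QEMU_CFG_DATA_PORT  0x511   /* fw_cfg data port */
    #define QEMU_CFG_ARCH_LOCAL 0x8000
    #define QEMU_CFG_HPET       (QEMU_CFG_ARCH_LOCAL + 4)

    static void qemu_cfg_read_entry(u16 key, void *buf, int len)
    {
        u8 *p = buf;
        outw(key, QEMU_CFG_CTL_PORT);        /* select the entry */
        while (len--)
            *p++ = inb(QEMU_CFG_DATA_PORT);  /* stream out the payload */
    }

    static void maybe_build_hpet_table(void)
    {
        struct hpet_fw_config cfg;

        qemu_cfg_read_entry(QEMU_CFG_HPET, &cfg, sizeof(cfg));
        if (!cfg.valid || cfg.count == 0)
            return;  /* no hpet in this machine, so no HPET table */
        /* ... emit the HPET ACPI table from cfg.hpet ... */
    }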

> > 
> > Signed-off-by: Gleb Natapov <address@hidden>
> > diff --git a/hw/hpet.c b/hw/hpet.c
> > index 93fc399..f2a4514 100644
> > --- a/hw/hpet.c
> > +++ b/hw/hpet.c
> > @@ -73,6 +73,8 @@ typedef struct HPETState {
> >      uint64_t hpet_counter;      /* main counter */
> >  } HPETState;
> >  
> > +struct hpet_fw_config hpet_cfg = {.valid = 1};
> > +
> >  static uint32_t hpet_in_legacy_mode(HPETState *s)
> >  {
> >      return s->config & HPET_CFG_LEGACY;
> > @@ -661,6 +663,9 @@ static void hpet_reset(DeviceState *d)
> >           */
> >          hpet_pit_enable();
> >      }
> > +    hpet_cfg.count = 1;
> > +    hpet_cfg.hpet.event_timer_block_id = (uint32_t)s->capability;
> 
> The number of timers, and thus the content of capability, can change on
> vmload. So you need to update hpet_cfg there as well.
> 
How can it change? The user is required to run the same command line on
src and dst, no?
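
(And if it can change after all, the update you're asking for would sit
in the vmstate post-load callback, something like this sketch:)

    static int hpet_post_load(void *opaque, int version_id)
    {
        HPETState *s = opaque;

        /* refresh the fw_cfg copy in case capability differs from
         * what reset computed before the incoming migration */
        hpet_cfg.hpet.event_timer_block_id = (uint32_t)s->capability;
        return 0;
    }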

> And I think we can move the capability setup into init. But that is not
> directly related to this patch; it would just avoid adding this hunk to
> hpet_reset.
I actually did that initially and tried to init hpet_cfg there too, but
then noticed that mmio[0].addr below is not yet initialized at init time.
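
(For the record, sysbus_create_simple("hpet", ...) in pc.c expands
roughly to the following -- sketch from memory -- so the address is
simply not known yet when init runs:)

    dev = qdev_create(NULL, "hpet");
    qdev_init_nofail(dev);     /* hpet_init() runs here; mmio[0].addr still unset */
    sysbus_mmio_map(sysbus_from_qdev(dev), 0, HPET_BASE);  /* address set only now */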

> 
> > +    hpet_cfg.hpet.address = sysbus_from_qdev(d)->mmio[0].addr;
> >      count = 1;
> >  }
> >  
> > diff --git a/hw/hpet_emul.h b/hw/hpet_emul.h
> > index d7bc102..5cf5463 100644
> > --- a/hw/hpet_emul.h
> > +++ b/hw/hpet_emul.h
> > @@ -53,4 +53,20 @@
> >  #define HPET_TN_INT_ROUTE_CAP_SHIFT 32
> >  #define HPET_TN_CFG_BITS_READONLY_OR_RESERVED 0xffff80b1U
> >  
> > +struct hpet_fw_entry
> > +{
> > +    uint32_t event_timer_block_id;
> > +    uint64_t address;
> > +    uint16_t min_tick;
> > +    uint8_t page_prot;
> > +} __attribute__ ((packed));
> > +
> > +struct hpet_fw_config
> > +{
> > +    uint8_t valid;
> > +    uint8_t count;
> > +    struct hpet_fw_entry hpet;
> 
> > Why not already struct hpet_fw_entry hpet[8]? Once the bios bits are
> > merged, we can quickly remove the single-hpet limitation on the qemu side.
> 
The number 256 somehow stuck in my head. 8 hpets is OK to do from the start.
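
Something like this, then (a sketch of the reworked struct, keeping the
rest of the layout as in the patch):

    struct hpet_fw_config
    {
        uint8_t valid;
        uint8_t count;                 /* number of valid entries below */
        struct hpet_fw_entry hpet[8];  /* up to 8 event timer blocks */
    } __attribute__ ((packed));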

> > +} __attribute__ ((packed));
> > +
> > +extern struct hpet_fw_config hpet_cfg;
> >  #endif
> > diff --git a/hw/pc.c b/hw/pc.c
> > index 1491129..d14d657 100644
> > --- a/hw/pc.c
> > +++ b/hw/pc.c
> > @@ -61,6 +61,7 @@
> >  #define FW_CFG_SMBIOS_ENTRIES (FW_CFG_ARCH_LOCAL + 1)
> >  #define FW_CFG_IRQ0_OVERRIDE (FW_CFG_ARCH_LOCAL + 2)
> >  #define FW_CFG_E820_TABLE (FW_CFG_ARCH_LOCAL + 3)
> > +#define FW_CFG_HPET (FW_CFG_ARCH_LOCAL + 4)
> >  
> >  #define E820_NR_ENTRIES            16
> >  
> > @@ -484,6 +485,8 @@ static void *bochs_bios_init(void)
> >      fw_cfg_add_bytes(fw_cfg, FW_CFG_E820_TABLE, (uint8_t *)&e820_table,
> >                       sizeof(struct e820_table));
> >  
> > +    fw_cfg_add_bytes(fw_cfg, FW_CFG_HPET, (uint8_t *)&hpet_cfg,
> > +                     sizeof(struct hpet_fw_config));
> >      /* allocate memory for the NUMA channel: one (64bit) word for the number
> >       * of nodes, one word for each VCPU->node and one word for each node to
> >       * hold the amount of memory.
> > --
> >                     Gleb.
> > 
> > 
> 
> Jan
> 



--
                        Gleb.


