
Re: [Qemu-devel] [RFC] SSI QOMification


From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [RFC] SSI QOMification
Date: Thu, 21 Jun 2012 10:21:16 +1000

Ping!

I'd really appreciate some input on this issue (rather than going ahead
and doing it only to discover that someone disagrees with the approach).

Regards,
Peter

On Mon, Jun 18, 2012 at 3:13 PM, Peter Crosthwaite
<address@hidden> wrote:
> Hi all,
>
> I have another one of these long RFCs for you all, regarding some QOM
> refactoring. This time it's about SSI/SPI and supporting multiple
> devices connected to one chip select.
>
> I have a pending series that is in limbo; mainly, this patch is problematic:
>
> http://lists.gnu.org/archive/html/qemu-devel/2012-06/msg00227.html
>
> I'm trying to get some nice, clean multi-device SPI support going to
> match the Xilinx XPS SPI controller, but other machine models (mainly
> stellaris) have a more ad-hoc approach to SSI built around
> point-to-point links.
>
> Let's start this again by describing the real hardware. We have two
> machines to discuss this time: Stellaris and Xilinx. I have attached
> an image that sums up the Stellaris architecture
> (stellaris_real_hw.jpg) - Paul, correct me if this is inaccurate. For
> those who prefer words, it can be summed up as:
>
> - Two SSI devices (OLED + SD) attached to a single controller, the PL022
> - Tx and Rx lines shared between both devices
> - One chip select (CS) line (for OLED) comes from the PL022
> - The other CS (for SD) comes from a GPIO
>
> The problem is that no SSI device in QEMU emulates CS behaviour, so
> there is a virtual mux device in the stellaris machine model that
> emulates the CS behaviour of each device - I have attached an image
> (stellaris_emulated.jpg) that sums it up. For those who prefer code
> (from hw/stellaris.c):
>
> typedef struct {
>    SSISlave ssidev;
>    qemu_irq irq;
>    int current_dev;
>    SSIBus *bus[2];
> } stellaris_ssi_bus_state;
>
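> /* GPIO handler for the board-level CS line: selects which of the two
>  * child busses subsequent transfers are forwarded to. */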
> static void stellaris_ssi_bus_select(void *opaque, int irq, int level)
> {
>    stellaris_ssi_bus_state *s = (stellaris_ssi_bus_state *)opaque;
>
>    s->current_dev = level;
> }
>
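> /* Forward the word from the PL022 to whichever child bus is currently
>  * selected. */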
> static uint32_t stellaris_ssi_bus_transfer(SSISlave *dev, uint32_t val)
> {
>    stellaris_ssi_bus_state *s = FROM_SSI_SLAVE(stellaris_ssi_bus_state, dev);
>
>    return ssi_transfer(s->bus[s->current_dev], val);
> }
>
> static const VMStateDescription vmstate_stellaris_ssi_bus = {
>    .name = "stellaris_ssi_bus",
>    .version_id = 1,
> ...
> };
>
> The thing is, there is no actual hardware for this mux; it's just three
> copper traces on a board.
>
> Moving on to Xilinx, the real hardware is summed up in the attached
> image (xilinx_real_hw.jpg). For those who prefer words:
>
> - N SSI devices attached to a single controller, xps_spi
> - Tx and Rx lines shared between all devices
> - N chip selects come from the xps_spi controller for the N devices
>
> So, here are the issues:
>
> A: We need to emulate CS behaviour (without machine models having to
> create these strange glue devices).
> B: We need to emulate multiple devices attached to one SPI controller
> (again without glue devices).
>
> Here's the proposal:
>
> - The SSI bus is changed to support multiple devices. You can attach as
> many devices as you want to a single SSI bus. When the master initiates
> a transfer, all devices have their transfer() function called, and the
> results from each device are logically OR'ed together (you'll see how
> this works out if you keep reading). No CS behaviour is emulated in the
> bus itself (which is contrary to my patch - this is my new proposal).
> A rough sketch of this is below, after the next point.
>
> - SSISlave becomes an abstract device which defines an abstract
> function, do_transfer(). The SSI slave devices we have today inherit
> from this, and the existing transfer() function of each SPI device
> becomes this do_transfer() function. The abstract class (SSISlave)
> defines a single GPIO input for the CS line. The transfer() function
> (which is called by the bus or controller) is implemented at the
> SSISlave abstract layer, and will call and return do_transfer() if the
> CS GPIO is set; otherwise it returns 0. The big advantage is that there
> are no (or only trivial) changes to the existing SSI devices - it all
> happens at the abstract layer, plus a little bit of machine model
> improvement. This means stellaris' virtual mux goes away completely
> (as the CS GPIOs are attached directly to the SSISlave device).
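>
> Roughly, and very much untested - the child iteration depends on where
> the QBus rework lands, and BusChild, SSI_SLAVE() and
> ssi_slave_transfer() are placeholder names here - the bus side could
> look something like:
>
> uint32_t ssi_transfer(SSIBus *bus, uint32_t val)
> {
>     BusChild *kid;
>     uint32_t r = 0;
>
>     /* Broadcast the word to every slave on the bus and OR the results;
>      * deselected slaves return 0, so only the selected device actually
>      * drives the value returned to the master. */
>     QTAILQ_FOREACH(kid, &bus->qbus.children, sibling) {
>         r |= ssi_slave_transfer(SSI_SLAVE(kid->child), val);
>     }
>
>     return r;
> }
>
> And the abstract SSISlave layer, roughly (the cs flag and the named CS
> GPIO are new, and I'm assuming the usual QOM class macros; real CS
> lines are usually active-low, but polarity can be handled by the
> machine model or an inverter for now):
>
> static void ssi_slave_cs_handler(void *opaque, int irq, int level)
> {
>     SSISlave *s = SSI_SLAVE(opaque);
>
>     s->cs = (level != 0);
> }
>
> static uint32_t ssi_slave_transfer(SSISlave *s, uint32_t val)
> {
>     SSISlaveClass *ssc = SSI_SLAVE_GET_CLASS(s);
>
>     /* Only a selected device sees the transfer; everything else
>      * contributes 0 to the OR at the bus level. */
>     return s->cs ? ssc->do_transfer(s, val) : 0;
> }
>
> with the CS GPIO registered in the common SSISlave init, something like:
>
>     qdev_init_gpio_in(&s->qdev, ssi_slave_cs_handler, 1);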
>
> My Xilinx SPI controller will then have N CS GPIO outputs that just
> connect to the N SPI devices at the machine model layer, something like
> the sketch below.
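>
> For example (the slave device name and count here are made up, just for
> illustration):
>
> static void machine_attach_spi_slaves(DeviceState *spi_ctrl,
>                                       SSIBus *spi_bus, int num_cs)
> {
>     int i;
>
>     for (i = 0; i < num_cs; i++) {
>         DeviceState *slave = ssi_create_slave(spi_bus, "ssi-sd");
>
>         /* Controller CS output i -> the slave's abstract-layer CS input */
>         qdev_connect_gpio_out(spi_ctrl, i, qdev_get_gpio_in(slave, 0));
>     }
> }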
>
> So here are the nitty-gritty details around the pending QOM stuff:
>
> Anthony is currently overhauling QBus, and I'm guessing the SSI bus is
> part of that? Is qom-next stable enough in this area to look at, or not?
> Also, Anthony mentioned some GPIO refactoring stuff recently - are GPIOs
> on multiple levels of abstraction supported yet? I.e. if I have an SPI
> GPIO device, I need a GPIO on the SSISlave layer but also GPIOs on my
> (concrete) device layer. I know this currently doesn't work because of
> qdev, but that's going away, right?
>
> Regards,
> Peter


