
Re: [Qemu-devel] [PATCH 3/3] target-xtensa: xtfpga: support noMMU cores


From: Peter Crosthwaite
Subject: Re: [Qemu-devel] [PATCH 3/3] target-xtensa: xtfpga: support noMMU cores
Date: Sun, 27 Sep 2015 11:43:36 -0700

On Sun, Sep 27, 2015 at 11:13 AM, Max Filippov <address@hidden> wrote:
> On Sun, Sep 27, 2015 at 8:38 PM, Peter Crosthwaite
> <address@hidden> wrote:
>> On Sun, Sep 27, 2015 at 10:16 AM, Max Filippov <address@hidden> wrote:
>>> Cores with and without MMU have system RAM and ROM at different locations.
>>> Also with noMMU cores system IO region is accessible through two physical
>>> address ranges.
>>>
>>> Signed-off-by: Max Filippov <address@hidden>
>>> ---
>>>  hw/xtensa/xtfpga.c | 49 +++++++++++++++++++++++++++++++++++++++++--------
>>>  1 file changed, 41 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/hw/xtensa/xtfpga.c b/hw/xtensa/xtfpga.c
>>> index d4b9afb..b53f40d 100644
>>> --- a/hw/xtensa/xtfpga.c
>>> +++ b/hw/xtensa/xtfpga.c
>>> @@ -199,7 +199,29 @@ static void lx_init(const LxBoardDesc *board, MachineState *machine)
>>>      const char *kernel_cmdline = qemu_opt_get(machine_opts, "append");
>>>      const char *dtb_filename = qemu_opt_get(machine_opts, "dtb");
>>>      const char *initrd_filename = qemu_opt_get(machine_opts, "initrd");
>>> +    const unsigned system_io_size = 224 * 1024 * 1024;
>>> +    bool mmu;
>>
>> You are indexing into an array of configs so it's really an int (or
>> better, an enum).
>
> Ok.
>
>>>      int n;
>>
>> Blank line.
>
> Why?
>

Just a readability suggestion. You have a collection of short variable
definitions that then runs straight into a lengthy, self-contained table.
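On the index type point above, a rough sketch of what an enum-indexed
table could look like (the enum and constant names are only placeholders,
not a concrete request; the addresses are taken from the patch):

    typedef enum {
        XTFPGA_NOMMU,   /* noMMU core: new RAM/ROM/IO bases */
        XTFPGA_MMU,     /* MMU core: the existing layout */
    } XtfpgaLayout;

    static const struct {
        hwaddr ram;
        hwaddr rom;
        hwaddr io[2];
    } base[] = {
        [XTFPGA_NOMMU] = {
            .ram = 0x60000000,
            .rom = 0x50000000,
            .io  = { 0x70000000, 0x90000000 },
        },
        [XTFPGA_MMU] = {
            .ram = 0,
            .rom = 0xfe000000,
            .io  = { 0xf0000000 },
        },
    };

    ...
    XtfpgaLayout layout =
        xtensa_option_enabled(env->config, XTENSA_OPTION_MMU)
        ? XTFPGA_MMU : XTFPGA_NOMMU;
    memory_region_add_subregion(system_memory, base[layout].ram, ram);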

>>> +    static const struct {
>>> +        hwaddr ram;
>>> +        hwaddr rom;
>>> +        hwaddr io[2];
>>> +    } base[2] = {
>>> +        {
>>> +            .ram = 0x60000000,
>>> +            .rom = 0x50000000,
>>> +            .io = {
>>> +                0x70000000,
>>> +                0x90000000,
>>> +            },
>>> +        }, {
>>> +            .ram = 0,
>>> +            .rom = 0xfe000000,
>>> +            .io = {
>>> +                0xf0000000,
>>> +            },
>>> +        }
>>> +    };
>>>
>>>      if (!cpu_model) {
>>>          cpu_model = XTENSA_DEFAULT_CPU_MODEL;
>>> @@ -222,16 +244,24 @@ static void lx_init(const LxBoardDesc *board, MachineState *machine)
>>>          cpu_reset(CPU(cpu));
>>>      }
>>>
>>> +    mmu = xtensa_option_enabled(env->config, XTENSA_OPTION_MMU);
>>
>> This looks backwards: the board should be in charge of itself and the
>> CPU config, rather than spying on the CPU setup to rewire the board.
>
> Well, it's an FPGA board and all connections are part of the bitstream.
> It's generated that way; I'm just following the specification here.
>

OK, but the xtensa CPU is not the bitstream; this board is. What
exactly is the user interface for switching between MMU and no-MMU?

Given the major changes to the address layout, the no-MMU variation
should be a set of new boards or a machine-level parameterisation (i.e. a
QOM property of the machine). It needs to be user-visible as a difference
at the machine level.
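To illustrate the "set of new boards" option, just a sketch, assuming the
QEMUMachine-style registration currently used in xtfpga.c; the machine
name and the init wrapper are placeholders:

    /* A separate, user-visible machine for the noMMU memory layout.
     * xtensa_lx60_nommu_init() would be a thin wrapper that calls
     * lx_init() with the noMMU layout selected explicitly.
     */
    static QEMUMachine xtensa_lx60_nommu_machine = {
        .name = "lx60-nommu",
        .desc = "lx60 EVB, noMMU core",
        .init = xtensa_lx60_nommu_init,
        .max_cpus = 4,
    };

    qemu_register_machine(&xtensa_lx60_nommu_machine);

That way "-M lx60-nommu" (or a machine property) makes the different
address map an explicit user choice rather than a side effect of the
selected CPU model.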

Regards,
Peter

>>>      ram = g_malloc(sizeof(*ram));
>>>      memory_region_init_ram(ram, NULL, "lx60.dram", machine->ram_size,
>>>                             &error_fatal);
>>>      vmstate_register_ram_global(ram);
>>> -    memory_region_add_subregion(system_memory, 0, ram);
>>> +    memory_region_add_subregion(system_memory, base[mmu].ram, ram);
>>>
>>>      system_io = g_malloc(sizeof(*system_io));
>>>      memory_region_init_io(system_io, NULL, &lx60_io_ops, NULL, "lx60.io",
>>> -                          224 * 1024 * 1024);
>>> -    memory_region_add_subregion(system_memory, 0xf0000000, system_io);
>>> +                          system_io_size);
>>> +    memory_region_add_subregion(system_memory, base[mmu].io[0], system_io);
>>> +    if (!mmu) {
>>
>> The boolean switch for whether the alias exists could go in the
>> struct. That makes it more robust when adding further configs in the
>> future, rather than iffery on the config index.
>
> Ok.
>
> --
> Thanks.
> -- Max
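(For reference, a sketch of the "flag in the table" variant discussed
above. The field and region names are illustrative, and the alias code is
guessed from context, since the quoted hunk is cut off before it:)

    static const struct {
        hwaddr ram;
        hwaddr rom;
        hwaddr io[2];
        bool io_alias;                  /* second IO window present? */
    } base[2] = {
        {                               /* noMMU layout */
            .ram = 0x60000000,
            .rom = 0x50000000,
            .io  = { 0x70000000, 0x90000000 },
            .io_alias = true,
        }, {                            /* MMU layout */
            .ram = 0,
            .rom = 0xfe000000,
            .io  = { 0xf0000000 },
        }
    };

    ...
    memory_region_add_subregion(system_memory, base[mmu].io[0], system_io);
    if (base[mmu].io_alias) {
        MemoryRegion *io = g_malloc(sizeof(*io));

        memory_region_init_alias(io, NULL, "lx60.io.cached", system_io, 0,
                                 system_io_size);
        memory_region_add_subregion(system_memory, base[mmu].io[1], io);
    }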


