From: mar.krzeminski
Subject: Re: [Qemu-devel] [PATCH] [M25P80] Make sure not to overrun the internal data buffer.
Date: Fri, 30 Dec 2016 19:09:27 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1



On 30.12.2016 at 18:14, Jean-Christophe DUBOIS wrote:
On 30/12/2016 at 16:39, mar.krzeminski wrote:
I got some time and reproduced the problem. Here are some logs with m25p80 debug output enabled:
: decode_new_cmd: decoded new command:9f
: decode_new_cmd: populated jedec code
: decode_new_cmd: decoded new command:0
: decode_new_cmd: decoded new command:0 //Getting the flash ID in the above 4 lines -> OK (but CS is missing)
Found sst25vf016b compatible flash device
: decode_new_cmd: decoded new command:6 //Write enable, a command without a payload, so it is OK
: decode_new_cmd: decoded new command:1 //Write to status register, the guest sends data
INFO: spi0.0: sst25vf016b (2048 Kbytes)
INFO: spi0.0: mtd
  .name = spi0.0,
  .size = 0x200000 (2MiB)
  .erasesize = 0x00001000 (4KiB)
  .numeraseregions = 0
Segmentation fault (core dumped) //Here the guest probably tries to send some data
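For reference, the crash is consistent with the internal-buffer overrun that the patch in the subject line guards against. A minimal sketch of that kind of bounds check, with illustrative type and field names (the real code lives in hw/block/m25p80.c and may differ):

#include <stdint.h>
#include <stddef.h>

/* Illustrative stand-in for the flash model's collection buffer. */
#define INTERNAL_DATA_BUFFER_SZ 16

typedef struct {
    uint8_t data[INTERNAL_DATA_BUFFER_SZ];
    uint32_t len;               /* bytes collected so far */
} FlashState;

/* Collect one payload byte, refusing to write past the end of data[].
 * Without the check, a guest that keeps clocking bytes while the model
 * sits in its collecting state walks off the end of the buffer, which
 * would explain the segmentation fault above. */
static void collect_data(FlashState *s, uint8_t tx)
{
    if (s->len < sizeof(s->data)) {
        s->data[s->len++] = tx;
    }
    /* else: drop the byte; the transfer is already out of sync */
}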

The root cause of the m25p80 entering this strange state is that the CS line is never selected/deselected at all; the debug output from m25p80_cs is missing. In an SPI transfer the CS line (here a qemu_irq) should be 0 before the beginning of every message and set after the end of the transmission.
In the case of a simple WREN command you should see something like this:
: m25p80_cs: deselect
: decode_new_cmd: decoded new command:6
: m25p80_cs: select
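For illustration, this is the pattern a controller model would follow around each message, using QEMU's qemu_set_irq() and ssi_transfer(); the controller type and field names here are made up for the sketch:

#include "qemu/osdep.h"
#include "hw/irq.h"
#include "hw/ssi/ssi.h"

/* Illustrative controller state; only the fields used here. */
typedef struct {
    SSIBus *spi;       /* bus the m25p80 sits on */
    qemu_irq cs_line;  /* wired to the flash's CS GPIO (active low) */
} MySPICtrl;

/* One SPI message: lower CS, shift the bytes, raise CS again.
 * Driving the CS qemu_irq is what ends up invoking m25p80_cs() in the
 * flash model; without it the flash never resets its state machine
 * between commands. */
static void my_spi_do_message(MySPICtrl *s, uint8_t *buf, size_t len)
{
    size_t i;

    qemu_set_irq(s->cs_line, 0);                /* select the flash */
    for (i = 0; i < len; i++) {
        buf[i] = ssi_transfer(s->spi, buf[i]);  /* full-duplex byte */
    }
    qemu_set_irq(s->cs_line, 1);                /* deselect: message done */
}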

Can you check the SPI controller model code?

I'll double check.

But why is the SPI memory/device even responding if CS is not set?
Looking at the ssi code, it should not.
The flash (i.e. the m25p80) responds when the CS line is low, and it seems that this is the default level.
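For context, a sketch of how the slave side sees the line: the SSISlave set_cs hook receives the raw GPIO level, and a qemu_irq that is never driven reads as 0, i.e. permanently selected for an active-low flash (shape only; the real handler is m25p80_cs() in hw/block/m25p80.c):

/* Sketch of an SSISlaveClass set_cs hook; details may differ from the
 * real m25p80_cs(). The bool carries the raw line level. */
static int flash_set_cs(SSISlave *ss, bool level)
{
    if (level) {
        /* line high: flash deselected -> reset the command state machine */
    } else {
        /* line low (including the never-driven default): flash stays
         * selected and keeps decoding whatever bytes arrive, as in the
         * broken log above */
    }
    return 0;
}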

Thanks,
Marcin