
Re: [PATCH] gdbstub: Fix buffer overflow in handle_read_all_regs


From: Damien Hedde
Subject: Re: [PATCH] gdbstub: Fix buffer overflow in handle_read_all_regs
Date: Thu, 14 Nov 2019 11:19:17 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.2.0


On 11/8/19 5:50 PM, Alex Bennée wrote:
> 
> Damien Hedde <address@hidden> writes:
> 
>> On 11/8/19 3:09 PM, Alex Bennée wrote:
>>>
>>> Damien Hedde <address@hidden> writes:
>>>
>>>> Ensure we don't put too much register data in buffers. This avoids
>>>> a buffer overflow (and stack corruption) when a target has lots
>>>> of registers.
>>>>
>>>> Signed-off-by: Damien Hedde <address@hidden>
>>>> ---
>>>>
>>>> Hi all,
>>>>
>>>> While working on a target with many registers, I found out that the
>>>> gdbstub may overflow its buffers when receiving a 'g' query (to read
>>>> the general registers). This patch prevents that.
>>>>
>>>> Gdb is pretty happy with a partial set of registers and queries the
>>>> remaining registers one by one when needed.
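For illustration, a hedged sketch of that fallback at the protocol level
(payloads abbreviated, register number invented): when the 'g' reply covers
only the first registers, gdb fetches anything beyond it with an individual
'p' read.

    -> $g#67                      ('g' read-all-registers query)
    <- $00000000...deadbeef#xx    (reply covering only the first registers)
    -> $p19#da                    ('p' read of register 0x19 on its own)
    <- $0000000000000040#xx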
>>>
>>> Heh I was just looking at this code with regards to SVE (which can get
>>> quite big).
>>
>> SVE?
> 
> ARM's Scalable Vector Extension registers, which can currently get up
> to 16 vector quads (16 x 16 bytes = 256 bytes) but are likely to get
> bigger.
> 
>>
>>>
>>>>
>>>> Regards,
>>>> Damien
>>>> ---
>>>>  gdbstub.c | 13 +++++++++++--
>>>>  1 file changed, 11 insertions(+), 2 deletions(-)
>>>>
>>>> diff --git a/gdbstub.c b/gdbstub.c
>>>> index 4cf8af365e..dde0cfe0fe 100644
>>>> --- a/gdbstub.c
>>>> +++ b/gdbstub.c
>>>> @@ -1810,8 +1810,17 @@ static void handle_read_all_regs(GdbCmdContext *gdb_ctx, void *user_ctx)
>>>>      cpu_synchronize_state(gdb_ctx->s->g_cpu);
>>>>      len = 0;
>>>>      for (addr = 0; addr < gdb_ctx->s->g_cpu->gdb_num_g_regs; addr++) {
>>>> -        len += gdb_read_register(gdb_ctx->s->g_cpu, gdb_ctx->mem_buf + len,
>>>> -                                 addr);
>>>> +        int size = gdb_read_register(gdb_ctx->s->g_cpu, gdb_ctx->mem_buf + len,
>>>> +                                     addr);
>>>> +        if (len + size > MAX_PACKET_LENGTH / 2) {
>>>> +            /*
>>>> +             * Prevent gdb_ctx->str_buf overflow in memtohex() below.
>>>> +             * As a consequence, send only the first registers content.
>>>> +             * Gdb will query remaining ones if/when needed.
>>>> +             */
>>>> +            break;
>>>> +        }
>>>> +        len += size;
>>>>      }
>>>>
>>>>      memtohex(gdb_ctx->str_buf, gdb_ctx->mem_buf, len);
>>>
>>> Haven't we already potentially overflowed gdb_ctx->mem_buf though? I
>>> suspect the better fix for str_buf is to make it growable with
>>> g_string and be able to handle arbitrary size conversions (unless the
>>> spec limits us). But we still don't want a hostile gdb to be able to
>>> spam memory by asking for registers that might be bigger than
>>> MAX_PACKET_LENGTH bytes.
>>
>> For gdb_ctx->mem_buf it's OK because it also has a size of
>> MAX_PACKET_LENGTH (assuming no single register can be bigger than
>> MAX_PACKET_LENGTH). str_buf has a size of MAX_PACKET_LENGTH + 1.
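The factor of two in the patch's bound comes from the hex encoding:
memtohex() expands every binary byte of mem_buf into two ASCII characters
of str_buf, plus a trailing NUL. A functional sketch of that expansion
(not QEMU's exact code):

    #include <stdint.h>

    /* Functional sketch of memtohex(): each input byte becomes two hex
     * digits, so encoding len bytes needs 2 * len + 1 output bytes.
     * Capping len at MAX_PACKET_LENGTH / 2 therefore keeps the result
     * within str_buf's MAX_PACKET_LENGTH + 1 bytes. */
    static void memtohex_sketch(char *str_buf, const uint8_t *mem_buf, int len)
    {
        static const char hex[] = "0123456789abcdef";
        int i;

        for (i = 0; i < len; i++) {
            str_buf[2 * i] = hex[mem_buf[i] >> 4];
            str_buf[2 * i + 1] = hex[mem_buf[i] & 0xf];
        }
        str_buf[2 * len] = '\0';
    }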
> 
> Are these limits of the protocol rather than our own internal limits?

gdb has a dynamically sized packet buffer. The remote protocol doc says:

‘qSupported [:gdbfeature [;gdbfeature]… ]’
    [...] Any GDB which sends a ‘qSupported’ packet supports receiving
packets of unlimited length (earlier versions of GDB may reject overly
long responses).

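In the other direction, the stub advertises the largest packet it is
itself prepared to accept via the PacketSize feature of its qSupported
reply; with QEMU's MAX_PACKET_LENGTH of 4096 that value is hex 1000. A
sketch of the exchange (feature lists abbreviated):

    -> $qSupported:multiprocess+;...#xx
    <- $PacketSize=1000;qXfer:features:read+#xx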

> 
>> I'm not sure I've understood the second part, but if we increase the
>> size of str_buf then we will also need a bigger packet buffer.
> 
> Glib provides handy functions for managing arbitrarily sized strings
> in a flexible way, growing them on demand. There is also a growable
> GByteArray type which we can use for the packet buffer. I think I'd
> started down this road of re-factoring but never got around to
> posting the patches.
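As a hypothetical sketch of that refactor (not the unposted patches), the
hex conversion could target a GString that grows on demand, with the
register bytes accumulated in a GByteArray:

    #include <glib.h>

    /* Hypothetical growable-buffer version of the mem_buf -> str_buf
     * conversion: g_string_append_printf() reallocates str_buf as
     * needed, so the encoding itself can no longer overflow a fixed
     * array, whatever the register sizes. */
    static void append_hex(GString *str_buf, const GByteArray *mem_buf)
    {
        guint i;

        for (i = 0; i < mem_buf->len; i++) {
            g_string_append_printf(str_buf, "%02x", mem_buf->data[i]);
        }
    }

A sanity cap on the total size would still be wanted on top of this, per
the hostile-client concern above.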
> 
>> The size here only depends on the registers the target declares, so
>> it depends only on the target cpu code.
> 
> Sure - but guest registers are growing all the time!
> 
> --
> Alex Bennée
> 


