
Re: [PATCH?] GDB HEAD (partly) broken for GNU/Hurd

From: Pedro Alves
Subject: Re: [PATCH?] GDB HEAD (partly) broken for GNU/Hurd
Date: Mon, 13 Oct 2008 19:35:34 +0100
User-agent: KMail/1.9.9

Hi, and sorry I didn't reply back sooner.

On Monday 13 October 2008 17:40:25, Ulrich Weigand wrote:
> Thomas Schwinge wrote:
> > Ha, I, myself, am the GDB guru here ;-)!  I had a look at the log again,
> > experimented some more, and finally got it going with the following
> > patch.  However, I have absolutely no idea whether that is correct in all
> > cases, etc.  Should perhaps target_wait (a.k.a. gnu-nat.c's gnu_wait) be
> > doing that?

Thanks Thomas :-)  One thing I asked myself was whether gnu-nat.c couldn't use
the ports' ids as thread ids instead of locally auto-generated numbers.  Maybe
the thread id of the main thread would be preserved across execs that way, but
this is out of scope for now.

> Adding switch_to_thread at this location seems correct to me (even though
> it should be a no-op on most targets).

You shouldn't call it in a few cases, namely:

- TARGET_WAITKIND_EXITED.  The ptid returned isn't a thread, so
  you could hit an internal error.

The _IGNORE case is actually broken today, as we shouldn't issue either
a target_resume or set_executing (..., 0) in that case.  Sorry I missed it
before.  Here's the similar handling in handle_inferior_event for reference:

  if (ecs->ws.kind != TARGET_WAITKIND_IGNORE)
    {
      /* Mark the non-executing threads accordingly.  */
      if (!non_stop
          || ecs->ws.kind == TARGET_WAITKIND_EXITED
          || ecs->ws.kind == TARGET_WAITKIND_SIGNALLED)
        set_executing (pid_to_ptid (-1), 0);
      else
        set_executing (ecs->ptid, 0);
    }

- TARGET_WAITKIND_SPURIOUS.  This one's a bit tricky, as some targets
  may return ptid (-1, 0, 0) with it.  It looked like procfs.c can in
  some cases, the last time I tried cleaning that up.
  handle_inferior_event doesn't switch context on it, so we could
  do the same.  But we'll most probably not see this event
  here, I hope.

- If target_wait returns ptid (-1).  It shouldn't be returning that
  in any case where we should call switch_to_thread, though.  We
  can see this happening in the TARGET_WAITKIND_IGNORE case --- don't
  switch threads in that case anyway.

- Calling switch_to_thread before calling set_executing (..., 0)
  has the effect of not reading stop_pc.  I guess that's what we really
  want here?  Thus we avoid touching the shell's registers until we
  return from startup_inferior, which I understand was one of the
  goals of this new loop.

Notice that we're assuming that handle_inferior_event()'s
new_thread_event handling isn't needed here on any platform.

> Could you test your patch on a non-Hurd target to make sure, anyway?
> > +  /* TODO.  How to keep this synchronized with gnu-nat.c's own counting?  */
> Hmm.  It would appear that "set exec-wrapper" is currently broken with
> the gnu-nat.c target, right?

Yeah, it appears so.  I don't know if it's possible to get rid of the local
pending-execs handling in gnu-nat.c.  An alternative would be to make
pending_execs a property of inferior.h:`struct inferior' instead of
gnu-nat.c:`struct inf'.

(I also notice that when going through the shell in non-stop
mode, it would be more correct to resume all threads --- we
don't want non-stop and its scheduler-locking to apply to the shell.
Basically, non-stop should be off if there are pending execs.
This was an existing issue, and doesn't affect linux today, so I'll
just ignore that for now, as it needs more tweaking to fix.)

(I hope not many issues like this appear, as we're duplicating more
of handle_inferior_event's logic :-( )

What do you guys think?  Thomas, could you try the attached patch
on the Hurd, please?  I just gave it a spin on x86_64-unknown-linux-gnu
without regressions.

Pedro Alves

Attachment: startup_inferior.diff
Description: Text Data
