
Re: [Discuss-gnuradio] Updates to gr-qtgui


From: Tom Rondeau
Subject: Re: [Discuss-gnuradio] Updates to gr-qtgui
Date: Thu, 14 Apr 2011 21:40:41 -0400

On Thu, Apr 14, 2011 at 1:21 AM, Josh Blum <address@hidden> wrote:

> Without getting too deep into this, the problem is in the shared
> responsibilities between the Python world and the C++ world. I don't
> completely understand the series of events, but the crux of the problem
> seems to be who gets to the destructors first.
>

It's never easy, is it...

> In the case of your program, it's the self._qtgui_sink_x_0_win object that's
> the problem. If you don't make it a member of the class, that is, drop the
> "self." part, it _should_ work fine. My understanding, which could be wrong,
> is that as a local variable, it gets destroyed in the right order.
>

I removed the "self." and I'm still seeing the same results.
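
For concreteness, the two variants look roughly like this inside the top
block's __init__ (a sketch; "snk" and the window names stand in for the
generated qtgui_sink_x_0 ones, and construction of the sink itself is
elided):

import sip
from PyQt4 import QtGui

# (a) held as a member, as the generated code does -- destruction order
# at interpreter shutdown is then out of our hands:
self._snk_win = sip.wrapinstance(self.snk.pyqwidget(), QtGui.QWidget)
self._snk_win.show()

# (b) held as a local, per the suggestion above -- the Python-side
# reference is released as soon as __init__ returns:
snk_win = sip.wrapinstance(self.snk.pyqwidget(), QtGui.QWidget)
snk_win.show()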

But you gave me the idea that there might be an unhappy destructor race
condition here. So I tried deleting the qapp after exec(), and that made
it better. Better yet, stopping the top block also made it better; so
maybe the flowgraph was still producing data to draw after somebody had
already destroyed the Qt graphics objects.
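
Roughly, the ordering that made it better looks like this (a sketch of
the usual generated main; tb and qapp are illustrative names):

import sys
from PyQt4 import QtGui

def main():
    qapp = QtGui.QApplication(sys.argv)
    tb = my_top_block()   # flowgraph containing the qtgui sink
    tb.start()
    qapp.exec_()          # returns once the last window is closed
    tb.stop()             # stop the flowgraph first, so nothing keeps
    tb.wait()             # pushing samples into dying Qt widgets
    tb = None             # then drop references in a known order,
    qapp = None           # the QApplication last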

So, the reason I didn't stop the top block:
http://gnuradio.squarespace.com/examples

I will give this a more conclusive test tomorrow, but the fix seems
worthwhile anyhow.
http://gnuradio.org/cgit/gnuradio.git/commit/?id=e762abc703e3224b54466685bf51b3fa90ee8edc

-Josh

Hmm... that's very disappointing. I was unable to reproduce this on
three of my machines (Core2Quad, i7, and i7 (Sandy Bridge)) and a couple
of VMs. But I did see a segfault about once in every twenty runs on my
Core2Duo. So take that for what you will.

I tried your fix and also applied it to pyqt_example_f.py, then ran both
about 50 times in a row without seeing the segfault, so that seems to
fix it. Either that, or it just reduces the probability of it occurring
even more.... But I think we should go with it until someone reports
another problem.
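
A quick loop along these lines is enough to repeat that kind of test (a
sketch of one way to do it, not necessarily the exact script I ran; a
segfault shows up as a negative return code from subprocess):

import subprocess

for i in range(50):
    ret = subprocess.call(["python", "pyqt_example_f.py"])
    if ret != 0:
        print("run %d exited with code %d (a segfault is -11)" % (i, ret))
        break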

Good catch, thanks!

Tom

