[Pan-users] Re: Bug in Headers pane?
Mon, 4 Apr 2011 23:47:49 +0000 (UTC)
Pan/0.134 (Wait for Me; GIT 9383aac branch-testing)
Ron Johnson posted on Mon, 04 Apr 2011 17:19:56 -0500 as excerpted:
> On 04/04/2011 04:38 PM, walt wrote:
>> On 04/03/2011 11:38 AM, Ron Johnson wrote:
>>> When I go into a group, highlight some articles and then start
>>> downloading them, the unread articles' font "converts" from Bold to
>>> Regular style as each article is downloaded.
OK, downloaded == saved == read. That's normal.
(FWIW, on binary groups especially, I work with a larger cache, its size
set manually in the config file to several gigs, up from the 10 MB
default, and download to cache ONLY, without reading or saving. Then when
everything's downloaded, I go back in and work with it, saving or deleting
as I wish, with much faster response, since everything's already available
locally and I don't have to wait for a download just to check whether an
article is something I want to save. So for me, downloading does NOT
normally equal saved/read, and the above initially confused me, until I
remembered that's how pan normally works and MUST normally work: its 10 MB
default cache is far too small for downloading to cache and working from
there, like I normally do, without manually editing the cache size in the
config file.)
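Concretely, the manual edit I describe might look something like this. The
preferences.xml location and the cache-size-megs key name are assumptions
on my part for Pan 0.1xx (check your own $PAN_HOME file, default ~/.pan2,
with pan closed); the sed here runs on a throwaway copy, so the sketch is
safe to execute as-is:

```shell
# Hedged sketch: bump pan's article cache from the 10 MB default.
# 'cache-size-megs' and the file layout are assumptions -- verify
# against your own preferences.xml before editing the real file.
prefs=$(mktemp)
printf "<int name='cache-size-megs' value='10'/>\n" > "$prefs"
# Raise the 10 MB default to 4 GB (4096 MB), in place:
sed -i "s/\(cache-size-megs' value='\)[0-9]*/\14096/" "$prefs"
cat "$prefs"
rm -f "$prefs"
```

Run against the real file (with pan closed), the same sed would lock in
the larger cache for the next session.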
>>> However, if I go to a different group and then come back (or exit Pan
>>> and restart) then the articles' font no longer "converts from Bold to
>>> Regular style as each article is downloaded, and the articles that
>>> were downloaded while I was in a different group are still Bold.
OK, that's NOT normal.
The only time I've seen that is when a second pan is started, with both
instances working on the same data set (as opposed to a different instance
working on a different data set, using the PAN_HOME variable to set up
separate data sets that don't interfere with each other, as I do with
separate binary/text/test instances). In that case, the status saved by
whichever instance leaves a group last is what gets retained.
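The PAN_HOME setup mentioned above can be sketched like this. The
directory names are arbitrary examples of mine; PAN_HOME itself is the
variable pan checks for its data directory. The actual launches are
commented out so the sketch runs even without pan installed:

```shell
# Hedged sketch: isolate pan data sets with PAN_HOME so instances
# never share (and clobber) each other's read/unread state.
for inst in text binaries test; do
    mkdir -p "$HOME/.pan-$inst"
done
# Each launch gets its own isolated data set (uncomment to use):
# PAN_HOME="$HOME/.pan-binaries" pan &
# PAN_HOME="$HOME/.pan-text"     pan &
echo "pan data dirs ready under $HOME"
```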
But you seem to be seeing it happen with just the one pan instance
running, simply by leaving a group while a download-and-save job is
running and then coming back to it? The download-and-save job that WAS
marking articles read as it went is no longer doing so, and everything
downloaded-and-saved since you left the group remains marked unread.
That's strange indeed, and is definitely a bug.
>> There is a long-standing bug with articles being incorrectly marked
>> read or unread, but I've never been able to reproduce it with enough
>> regularity to track it down.
>> If you can reproduce it 100%, please write down the algorithm and post
>> it here so we can finally get it fixed.
> Exactly what I specified in the OP works *every* time.
> Also, a bug that I've known about for some time is that Marked Read
> status doesn't get flushed to disk until you change groups or politely
> exit. The power flickered this morning and when I went back into pan,
> all the articles that I had marked as read were now unread.
That's deliberate, and actually an improvement from how the pan rewrite
used to work, only flushing at pan exit. I fought hard to get it back to
behaving as old-pan had, at least flushing when the group changed. That
allowed me to quickly switch groups back and forth every so often, locking
in the state when I did so, instead of having to quit pan entirely in
order to lock it in.
Charles argued that flushing at every change was a potential performance
issue, and that such frequent writes carried risks of their own, since a
crash in the middle of a write could mean loss of data as well --
potentially loss of the whole index file, and thus loss of status for ALL
groups, not just one. Doing it only when switching groups is at least
predictable, and allows users to reload the group periodically to flush
state. (Actually switching groups isn't needed, I discovered; simply
clicking on the group as if you're switching to it reloads it, flushing
current state to disk.)
Something between those two extremes could be argued for: flushing state
on, say, a 5/10/20/30-minute timer, or by count, every 20/50/100/whatever
posts read. That wouldn't have the risk or performance lag of flushing
every post, but would still avoid losing an hour's worth, and hundreds of
posts' worth, of state if someone forgot to reload the group every so
often.
But it wouldn't be as predictable as reloading the group manually,
either. If the crashes are somewhat triggerable, say by doing something
specific in another program that brings down the entire system including
pan, one can learn not to manually switch or reload groups at the same
time as one does that other risky action. An automatic timer, by
contrast, would arguably be much less predictable, and thus much more
likely to eventually fire at the wrong moment, just as the other risky
action had been triggered.
(The risk for me was at one point associated with an unstable, then-new
xorg compositing extension: the kde3 kwin compositing manager had a
memory leak, so it ate more and more memory until it crashed all of X,
naturally taking pan along with it. Once I figured out what was
triggering the crash, I could track it and manually restart the kwin
compositing manager when things started getting wonky, before an actual
crash. But in the meantime I was very frustrated with pan, as X going
down more than once took pan, and several hours' worth of unsaved state, with
it! Getting the save-on-group-switch functionality back at least allowed
me to limit the damage, without having to fully quit and restart pan every
so often. From old-pan, I had been in the habit of switching groups
periodically to force the save, but that quit working with the rewrite and
I repeatedly lost a LOT of work, until I persuaded Charles to at LEAST
save on group switch once again, as old-pan had done, so that my
by-then-habitual switch-groups-periodically behavior again at least
limited the amount of work I was losing.)
>> I've long suspected that the problem occurs when using multiple
>> servers, but that's just my gut feeling. Wish I could prove it.
> Nope. Just one. Happened when I only used my ISP's news server and
> happens when I use only a premium server.
Interesting. For you it's 100% reproducible, though? Up until you leave the
group, it's fine. Then when leaving the group, the auto-save-state kicks
in (as expected), but the ongoing status then doesn't seem to be tracked
any more, so when you come back to the group, the status of the job
continuing in the background isn't updated.
I haven't actually used pan for binaries for a while, and as I explained,
the way I do so is somewhat different, so I probably wouldn't see this
behavior in any case. However, it now sounds like a bug with enough info
to be useful to file, at least. Can others duplicate?
Meanwhile, there's another possibility as well: a local issue due to a
bad filesystem. What filesystem type are you running on the partition
storing the pan data? Have you tried doing a full fsck on it recently
(preferably while it's unmounted)? If so, was any filesystem
damage detected and corrected?
If you haven't run a full fsck on it recently, please do. If there's any
damage detected and corrected, run pan normally for a session or two and
see if the problem persists. Then run another fsck and see if there's any
further damage detected and corrected.
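In concrete terms, the check I'm suggesting looks something like the
following. DEV is a placeholder for whatever partition actually holds
your pan data, and the commands are echoed so the sketch is safe to run
as-is; drop the echoes (and run as root) to do it for real:

```shell
# Hedged sketch of the full-fsck cycle suggested above.
# DEV is a placeholder -- substitute your real pan-data partition.
DEV=/dev/sdXN
echo umount "$DEV"     # a full check wants the filesystem unmounted
echo fsck -f "$DEV"    # -f forces a check even if the fs looks clean
echo mount "$DEV"
```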
Also, if it's convenient (as it could be if /home or the like is on a
separate partition), you might try backing up the data on the partition,
reformatting with a different filesystem (reiserfs vs ext3 vs ext4 vs jfs
vs...), and restoring the data there. Obviously this'll be quite a bit of
extra work
and may not be possible (at least easily possible) at all if the whole
system's on a single partition, but if it's happening on two different
filesystem types, without more errors being found between fscks (which
would indicate that the disk itself may be going bad), that pretty well
eliminates both the hardware itself and the filesystem behavior as issues.
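The backup/reformat/restore cycle described above can be sketched like
this. DEV, MNT, and the tarball path are placeholders of mine, and the
commands are again echoed so nothing destructive runs by accident;
remove the echoes (and run as root, from outside the mounted partition)
to do it for real:

```shell
# Hedged sketch of the backup / reformat / restore test above.
# DEV and MNT are placeholders -- substitute your own.
DEV=/dev/sdXN
MNT=/home
echo tar -C "$MNT" -cpf /root/home-backup.tar .   # back up the data
echo umount "$MNT"
echo mkfs.jfs "$DEV"                              # or mkfs.ext4, etc.
echo mount "$DEV" "$MNT"
echo tar -C "$MNT" -xpf /root/home-backup.tar     # restore the data
```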
I've seen filesystem issues carry over into pan behavior before, as could
be expected given the stress that running pan on binaries puts on the
disk and filesystem, if you think about it, so definitively eliminating
that as a possibility is a good start, if it can be reasonably done.
Duncan - List replies preferred. No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master." Richard Stallman