[Pan-users] Connection "knee" Was: Policy discussion: GNKSA


From: Duncan
Subject: [Pan-users] Connection "knee" Was: Policy discussion: GNKSA
Date: Mon, 4 Jul 2011 00:15:46 +0000 (UTC)
User-agent: Pan/0.135 (Tomorrow I'll Wake Up and Scald Myself with Tea; GIT 9996aa7 branch-master)

Alan Meyer posted on Sun, 03 Jul 2011 11:27:25 -0700 as excerpted:

> I'd be in favor of keeping the GNKSA seal of approval and, as some
> others have said, four connections is plenty for me.
> 
> I'd also be curious to know where the knee of the curve is on
> connections.  Even assuming infinite capacity computers, there are still
> potentially two throttles in place on download speed. There is a max
> speed for any one connection, which could be imposed at the server end
> or in the intermediate infrastructure,
> and there is a max total throughput.
> 
> If you have, for example, a 1 MB/sec connection to the Internet,
> you can get the same number of bytes with one connection at 1 MB or four
> connections at 250 KB.  Increasing to 20 connections cannot help you at
> all unless the server, or something else in the pipeline, is throttling
> each connection to below 250 KB.
> 
> There may be small efficiencies in having multiple connections. If one
> stalls at the server end for a disk read or a context switch, or at the
> client end for a disk write, the other connections are still pulling in
> bytes.  But there is a limit to the efficiencies to be gained there. 
> They are usually not large to begin with and it's not clear that more
> than four connections is helpful at all.  Two or three may be plenty for
> that.
> 
> Does someone have data to suggest that there is a real advantage to
> having more than four connections?

Your post looks almost half-way to one of mine. =:^)  But it's a good 
explanation of the idea.

FWIW, based on what I've seen others say here as well as my own 
observation, altho my major binary experience is a few years back now, 
pan apparently decodes on the same thread as the download.  Either that 
or there's some sort of cross-thread locking (or maybe simply not enough 
net-buffering) involved that slows things down during the decode and save 
(as opposed to the simple download and cache) process.

If one does what I do[1] and increases cache size to some gigs (enough to 
cover at least one download session), then sets up to download-to-cache 
everything, goes away to work or to sleep for a few hours, and comes back 
to the completed download to sort thru it and save off what they want, 
2-3 connections can often max out a line, at least up to 20 Mbps or so, and 
a single connection may do it under the right circumstances.

If, however, one is decoding and saving directly as all the parts become 
available locally (pan's default, especially since its default 10 MB cache 
pretty well forces that mode), it's a rather different story.  Last I 
checked, anyway, with a single connection you can watch your network 
activity graph, and pan flatlines that connection while it's doing a 
decode and save.  Thus, a few more connections are likely to be needed 
to make up for this flatlining at the local end.  Back when I had a paid 
and thus full-speed account, it allowed eight connections.  5-6 seemed to 
keep the pipe full if I was decoding and saving at the same time, but 
four would leave occasional short gaps, probably when two decode threads 
got going at the same time, stalling two of the four connections.  That 
was probably on a 6 Mbit connection or so, tho I expect the experience 
would have held to 12 Mbit or better, except that a relatively higher 
percentage of the time would be spent decoding, so more chance of multiple 
connections stalled on decodes at once.

But I have trouble seeing how anything above 12 connections, at least for 
a single pan instance, even on a quite fast quad-core-plus machine 
writing to fast SSD storage, could ever be useful, unless the server gave 
you unlimited connections but capped the per-connection speed so that 
more connections were required to fill the available bandwidth.
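
Putting rough numbers on that (just a back-of-envelope sketch in python; 
the per-connection cap and stall-fraction figures are made-up illustrative 
guesses, not anything measured from pan or any real server):

# Back-of-envelope estimate of how many connections it takes to fill a pipe,
# given an optional per-connection speed cap (imposed by the server or the
# path) and the fraction of time each connection sits flatlined while its
# article is decoded and saved on the same thread.  All figures below are
# illustrative guesses, not measurements of pan or of any real server.

import math

def connections_needed(pipe_mbps, per_conn_cap_mbps=None, stall_fraction=0.0):
    """Minimum whole number of connections to saturate the pipe."""
    per_conn = per_conn_cap_mbps if per_conn_cap_mbps else pipe_mbps
    effective = per_conn * (1.0 - stall_fraction)  # useful speed of one connection
    return math.ceil(pipe_mbps / effective)

# Download-to-cache only (no decode stalls), no per-connection cap:
print(connections_needed(20))                                             # 1

# Decode-and-save in-line on a ~6 Mbit pipe, with a hypothetical 1.5 Mbit
# per-connection cap and each connection stalled ~25% of the time:
print(connections_needed(6, per_conn_cap_mbps=1.5, stall_fraction=0.25))  # 6

Past the point where connections times effective per-connection speed 
exceeds the pipe, extra connections buy you nothing; that's the knee Alan 
was asking about.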

However, it IS useful to note that some people have multiple machines, 
sometimes with several users.  I once knew a couple in their 50s, where 
he did TV program binary downloading (say 6 connections on pan; arguably 
with >20 Mbit he could have run a second instance on another computer 
doing another 6, so 12 total; he didn't have that bandwidth at the time, 
but could have if his connection speed was higher), and she did sewing 
and pattern groups, not so intensive, so a couple connections.  Then 
consider if they have an adult kid doing the pr0n thing, another six 
connections, and grandkids, on another computer of course with mom, doing 
the animal or disney binary groups, another possible six connections.  
That's a total of 26 connections in the house, on different computers, 
all at the same time if the incoming pipe bandwidth is high enough.

I think it's both because of that, and simply because of the "allowed 
connections" number competition between providers (likely the biggest 
reason), that providers often offer 20+ connections.  You're not supposed 
to log in from two different places at once, but if they're all from one 
place, covered by NAPT or a small /28 (IPv4) or so subnet, it's likely to 
be allowed, even encouraged, especially for block accounts where the 
faster the user uses up the block, the sooner they'll buy more.  I know 
that 50s couple certainly enjoyed the flexibility of having more 
connections than was practical for a single computer to use, as it 
allowed her to do her sewing groups with just a couple of mostly idle 
connections, while allowing him to do his TV program downloads, maxing 
out the bandwidth.  (I think he had it arranged to cap his at 50 kbps or 
so below full pipe max, allowing her reasonable responsiveness on the 
sewing groups, plus for misc browsing and the like.)

So just because a server offers 20 or 50 or whatever connections doesn't 
mean it's useful for a single news client on a single machine to try to 
use them all.  Even without the GNKSA thing, a cap of 10 or 12 should be 
a reasonable limit, with the 20 of that commit leaving enough room beyond 
that that nobody should have reason to complain.

And if a client were properly designed to hand off decoding to another 
thread (and the hardware had the processing and storage bandwidth to 
handle it properly), two (or even one) active download threads should 
fill even a 100 Mbps or larger pipe, perhaps with a third or fourth for 
uploading at the same time (one for text replies, or two for binaries, if 
the pipe is say 100 Mbps symmetric, thus filling the pipe both ways).
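
Schematically that hand-off is just the usual producer/consumer split.  A 
minimal sketch, in python rather than pan's actual C++, with 
fetch_article() and decode_and_save() as hypothetical stand-ins for the 
real network and decode work:

import queue
import threading
import time

article_q = queue.Queue(maxsize=16)     # bounded: back-pressure if decoding lags

def fetch_article(conn_id, part):       # stand-in for the NNTP article fetch
    time.sleep(0.05)                    # pretend network time
    return f"conn{conn_id}-part{part}"

def decode_and_save(data):              # stand-in for yEnc decode + disk write
    time.sleep(0.02)                    # pretend CPU/disk time

def downloader(conn_id, parts):
    # Network thread: hand each finished article off and go straight back to
    # downloading, so the connection never flatlines waiting on the decode.
    for part in parts:
        article_q.put(fetch_article(conn_id, part))

def decoder():
    # Separate thread does all the decode/save work off the network threads.
    while True:
        data = article_q.get()
        if data is None:                # sentinel: all downloads finished
            break
        decode_and_save(data)

dec = threading.Thread(target=decoder)
dec.start()
dls = [threading.Thread(target=downloader, args=(i, range(10))) for i in range(2)]
for t in dls:
    t.start()
for t in dls:
    t.join()
article_q.put(None)                     # tell the decoder we're done
dec.join()

With the queue bounded, the downloads only pause if the decoder genuinely 
falls behind, which it shouldn't on any reasonably modern box.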

---
[1]  Well, did.  As I said I've not done binaries in some time, now, not 
because I don't want to, just other things seem to take priority and I 
don't have the time, plus I don't have a binary account now that my ISP 
dropped their news, so getting one is an additional barrier to getting 
back into it, tho I've thought about getting a big non-expiring block 
account at some point.  The 1000 gig for $100 deal I've seen could well 
last me a lifetime of on-and-off downloading, and then I'd not have to 
worry about paying again, unless I lost my access info or the company 
went bust.  But I have to decide I have some time for it that I don't 
have other priorities for, first.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman



