Re: C++11 move semantics

From: Hans Åberg
Subject: Re: C++11 move semantics
Date: Sun, 11 Mar 2018 19:10:49 +0100

> On 11 Mar 2018, at 19:06, Frank Heckenbach <address@hidden> wrote:
> Hans Åberg wrote:
>>> I don't think so. I think it was due to the implementation in C where
>>> dynamic data structures are much more effort to write. Otherwise
>>> it's better to let the programs use as much memory as they can (by
>>> system limits, ulimit, etc.) and not impose arbitrary limits.
>> But the C parser has had dynamic allocation up to a fixed limit for as long
>> as I can recall, since the 1990s. And there are POSIX specs for YACC.
> I don't understand why they did that. They even say "memory
> exhausted" which is probably wrong in most cases. (There's enough
> memory available, the parser just chooses not to use it.) As the
> bash example shows, such arbitrary limits keep hitting users many
> years later (just like the Y2K bug), so it's better to avoid them
> from the start (like the C++ parsers do).

A stack of 10000 might have been thought of as never being reached.
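
Bison's default YYMAXDEPTH is indeed 10000. For comparison, here is a
minimal sketch (not the actual Bison skeleton; parse_stack is just an
illustrative name) of a C++ stack that grows without such a cap, using
C++11 move semantics:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // A parser stack with no arbitrary depth limit: std::vector
    // reallocates as needed, so only real memory exhaustion
    // (std::bad_alloc) stops it, not a fixed YYMAXDEPTH.
    template <typename T>
    class parse_stack {
    public:
        void push(T value) { data_.push_back(std::move(value)); }
        void pop(std::size_t n = 1) { while (n--) data_.pop_back(); }
        T &top() { return data_.back(); }
        std::size_t size() const { return data_.size(); }
    private:
        std::vector<T> data_;
    };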

>> In the past, double indirection was considered slow, but today it is best
>> to test the specific application.
> Testing performance in a meaningful way is very difficult, today
> more than ever, especially if memory access is involved. Small
> changes in the problem size might have a big effect due to cache
> misses etc. What's fast on one machine might be slow on the next
> one. Do you want to optimize for best case, worst case, or average
> performance? Very few people have the ability to make good judgement
> here or the means (different machines) and time available to do
> relevant tests.

Some tests showed that compilers can optimize better than humans. So it is
probably better to just write well-structured code, and then profile to find
hot spots.
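
For instance, a crude timing sketch along those lines (names and sizes
are only illustrative; as noted above, the numbers vary a lot with
machine, cache, and problem size, and for real work a profiler such as
gprof or perf on the actual application is more meaningful):

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <memory>
    #include <vector>

    int main() {
        const std::size_t n = 1 << 20;

        // Direct storage versus one extra level of indirection.
        std::vector<int> direct(n, 1);
        std::vector<std::unique_ptr<int>> indirect;
        indirect.reserve(n);
        for (std::size_t i = 0; i < n; ++i)
            indirect.emplace_back(new int(1));

        using clock = std::chrono::steady_clock;
        using std::chrono::duration_cast;
        using std::chrono::microseconds;
        long sum = 0;

        auto t0 = clock::now();
        for (std::size_t i = 0; i < n; ++i) sum += direct[i];
        auto t1 = clock::now();
        for (std::size_t i = 0; i < n; ++i) sum += *indirect[i];
        auto t2 = clock::now();

        // Printing sum keeps the loops from being optimized away.
        std::printf("direct: %ld us, indirect: %ld us (sum %ld)\n",
                    (long)duration_cast<microseconds>(t1 - t0).count(),
                    (long)duration_cast<microseconds>(t2 - t1).count(),
                    sum);
        return 0;
    }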
