Re: Emacs contributions, C and Lisp

From: Óscar Fuentes
Subject: Re: Emacs contributions, C and Lisp
Date: Wed, 26 Feb 2014 23:34:53 +0100
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/24.3.50 (gnu/linux)

Eli Zaretskii <address@hidden> writes:

>> > And what happens if you add
>> >
>> >   B baz(char);
>> >
>> > to the above -- will it show 2 candidates or just one?
>> Dunno.
> I suggest to try (I did).  Maybe then you will be less radical in your
> judgment of having N+1 candidates when only N are strictly needed.

Adding an overload just to make a specific case work with a certain
completion package is unacceptable, to put it mildly.

>> As David Engster explained, CEDET does the simplest thing (take
>> one of the overloads and ignore the rest.)
> I see nothing wrong with doing the simplest thing, if it gives
> reasonable results.  This is an engineering discipline, not an exact
> science; compromises are our bread and butter.  I'm sure you are well
> aware of that.

Yes, I'm aware of that. That's why, when I re-try CEDET every year, I
use it on the simplest parts of my code and then decide that it is not
mature enough even for that usage.

[Lurker: CEDET can be productive when your code is simple enough (or if
you have a high tolerance for completion failures). There is plenty of
C++ code like that out there, so don't be discouraged by my experience;
try it yourself.]

> So I don't quite understand why you decided (without trying) that none
> of the existing solutions can be extended to fit the bill.

How do you know that I didn't try?

> Are you seriously claiming that clang is the _only_ way to go? I hope
> not.

In terms of required effort, it is by far the easiest way.

>> > ECB also supports smart completion.
>> For C++? Not really.
> How do you know?  When did you last try?

A few hours ago, as described in this very sub-thread. Above you can
see a test case where it fails, which you then discussed. Apart from
that, I try it every year or so. What makes you think that I'm talking
about CEDET without having experience with it?

> If not recently, perhaps it got better since then?

Surely it got better, but not enough, as demonstrated two messages ago.

> Did you attempt to analyze what is missing and
> how hard would it be to add that?

I'm no compiler expert, but as stated multiple times by now, the effort
required to make CEDET work on modern C++ code bases is *huge*. And
that's supposing you are a compiler writer with experience implementing
C++ front-ends.

> Who said it was slow _today_?  I found complaints from 2009, are you
> really going to claim they are still relevant, without checking out?

Again, why do you assume that I didn't try?

IIRC, last time I seriously tried CEDET (a year ago) it was fast enough
(although it missed/confused most symbols) on my projects, which are in
the tens of thousands of lines (without external libraries). There was
a perceivable lag on each completion request while working on a
desktop-class machine. Other C++ projects I tinker with are two orders
of magnitude larger than that.

But the important point here is that the most time-consuming analysis
features seem missing from CEDET.

>> > The only Emacs package for this that I could find is proprietary
>> > (Xrefactory).  Do you happen to know about any free ones?
>> No. My knowledge is far from exhaustive, though.
> Then perhaps the assertiveness of your opinions should be on par with
> how much you know.

What are you talking about? What relevance does my knowledge of the
available Emacs tools have to this discussion?

> Statistics doesn't understand anything about the underlying phenomena,
> and yet it is able to produce very useful results.  IOW, we don't need
> to understand C++, we just need to be able to do certain jobs.
> Understanding (parts of) it is the means to an end, and that's all.

A C++ source code analysis tool has no need to understand C++?

>> >> IIRC I already told you this a few weeks ago, but I'll repeat: a C++
>> >> front-end (even without code generation capabilities) requires an
>> >> immense amount of work from highly specialized people, and then needs
>> >> improvements as new standards are published.
>> >
>> > Only if you need to be 110% accurate,
>> Since 100% is perfect, why should I wish more? ;-)
> I don't know, you tell me.

I detect a tendency toward hyperbole and all-or-nothing argumentation
in your messages. In this case, my emphasis on accurate results is
represented by you as a 110% requirement. That is not constructive.

>> > which is certainly a requirement
>> > for a compiler.  But we don't need such strict requirements for the
>> > features we are discussing, I think.
>> A defective refactoring tool can easily cause more work than it saves.
>> It can introduce subtle bugs, too.
> "Defective" is a far cry from "non-strict requirements", don't you
> think?

A tool that fails on some cases is defective, unless you document its
shortcomings and advertise that it works not on C++ but on a subset of
the language.

It is true that it is unreasonable to expect correct behavior on
concocted cases, or even rare ones, but anything less than that is a
defect.

>> >> "we cannot" isn't the right expression. "we are not allowed" is the
>> >> correct description.
>> >
>> > I'm trying to keep this part of the thread out of politics and into
>> > something that could hopefully lead to a working implementation.
>> I'm not interested on politics either. I just wanted to be accurate :-)
> To what end?

"cannot" is fundamentally different from "not allowed", if you are
looking for the path of least resistance.
