
Re: RFC: Framework support in -make


From: Jeff Teunissen
Subject: Re: RFC: Framework support in -make
Date: Fri, 09 May 2003 09:58:50 -0400

Nicola Pero wrote:

[snip]

> > I write makefile code. ;) For framework searches, a shell script or a
> > purpose-built search executable (such as which_lib in -make) would
> > work nicely, and suitably cached (so as to only run the program once
> > for each framework mentioned in *_FRAMEWORKS) it should be rather
> > speedy.
> 
> Running a shell script or a purpose-built executable *is* slow; running
> it once for every framework in *_FRAMEWORKS would be unacceptably slow.

It's a matter of perspective, really. It's not something that *needs* to
be run more than a few times over the course of an entire build. On a
system like Windows, this can indeed be unacceptably slow, but that is
because of the process model inherited from DOS (OS/2 shares the same
problem).

> Btw, unfortunately I can't get rid of which_lib - that is one of the
> very few shell scripts still used during compilation, but I would get
> rid of it if I could find an alternative (symlinks, whatever!). :-)
> 
> I suppose what you really want is have which_lib recognize -F arguments
> and convert them into a sequence -L -l.  We can't get rid of which_lib
> anyway, so that wouldn't add any overhead - you still invoke a single
> which_lib per linking stage done, but frameworks would be specified
> using -F.  That would sort of solve my issue about -rpath slowing down
> compilation.

Yes, using which_lib for this purpose would work fine.
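The -F handling could be sketched roughly like this (hypothetical shell, not the actual which_lib code; the framework directory layout and the search-path arguments are assumptions for illustration):

```shell
# find_framework NAME DIR... : print "-L<dir> -lNAME" for the first
# search directory containing NAME.framework, mimicking the -F -> -L/-l
# translation discussed above. Layout and names are illustrative.
find_framework() {
  fw="$1"; shift
  for dir in "$@"; do
    libdir="$dir/$fw.framework/Versions/Current"
    if [ -d "$libdir" ]; then
      printf -- '-L%s -l%s\n' "$libdir" "$fw"
      return 0
    fi
  done
  return 1
}
```

Cached per framework name, this runs once per *_FRAMEWORKS entry at most, and the result can be folded into the single which_lib invocation per link stage.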

[snip]

> > Yes, it is easy to underestimate, because it doesn't work now. :)
> >
> > Currently, if you move a framework, it no longer works (it can no
> > longer be used to build executables, and the dynamic linker cannot
> > find it), because the symlinks are now dangling.
> > */Libraries/*/libFrameworkName.so and friends now point to nonexistent
> > shlibs (notice that the symbolic links are relative, and of course
> > it wouldn't help if they were absolute).
> >
> > With an ELF -rpath system, this is rather easy to fix, and it can be
> > done automatically (and repeatedly) without fragility.
> 
> That could be done with symlinks as well - you could have a script
> updating the symlinks - it's a trivial task - you look up the
> frameworks, and add symlinks for the existing ones, and remove dangling
> symlinks.
> 
> Actually, that would be a much more portable script (only manipulating
> files and the filesystem, rather than object files internals!). :-)

It is more portable, but substantially more fragile and hackish.

I do want to keep symlinks for frameworks -- where _necessary_. When a
better method can be used, it should be: implementing one better method is
a step toward implementing all of them, because it means the architecture
has been built to provide for them. Even if the improvement is modest in
some ways, it is still an improvement.

The current system actually gets in the way of doing it right on systems
that *can* do it right. Some of my suggestions would help. Some of them
would eliminate the problems entirely on such systems.
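For comparison, the symlink-refreshing script Nicola describes would look roughly like this (a sketch only; the directory names, and the use of absolute link targets, are illustrative assumptions):

```shell
# refresh_links LIBDIR FWDIR : drop dangling symlinks in LIBDIR and
# re-create links for frameworks that actually exist under FWDIR.
# This is the "trivial" symlink-updating machinery described above.
refresh_links() {
  libdir="$1" fwdir="$2"
  # Remove dangling symlinks in the library directory.
  for link in "$libdir"/lib*.so*; do
    [ -L "$link" ] && [ ! -e "$link" ] && rm -f "$link"
  done
  # Re-link shared libraries from installed frameworks.
  for fw in "$fwdir"/*.framework; do
    [ -d "$fw" ] || continue
    name=$(basename "$fw" .framework)
    so="$fw/Versions/Current/lib$name.so"
    [ -e "$so" ] && ln -sf "$so" "$libdir/lib$name.so"
  done
}
```

It is indeed portable across Unix filesystems, but it has to be re-run every time a framework moves, which is exactly the fragility at issue.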

> > > because in real life the problem is not really moving frameworks
> > > from one directory into another on the same machine, but binary
> > > distribution between machines with the same frameworks installed in
> > > different dirs.
> >
> > This is also not a problem. The point behind this idea is to make
> > frameworks self-contained, so that binary distribution becomes easy.
> 
> I see your point, but I'm not exactly talking of distribution of
> frameworks.

Neither was I, per se. It was to make all kinds of binary distribution
easy. Frameworks are a big-ole monkeywrench in the works, and I'm looking
for ways to make them not a problem. Making frameworks completely
self-contained is an important goal...that is the whole point of
frameworks in the first place, after all. My work has been in finding ways
to make that possible -- and further, to make it as easy as possible.

> I'm rather talking of distributing a gnustep application, which might
> depend on arbitrary frameworks distributed by third parties.
> 
> At the moment, you grab the gnustep application binary in the form of an
> .app file, you drop it into your Applications folder, you double-click
> on it, and the application starts (no setup required).

This does not work today, and the proof is that frameworks are deprecated
because they DON'T work properly.

> If it contained library paths hardcoded in the object file, you would
> need to hand-edit the hardcoded paths before being able to use the
> application - even if you have a script which can do it, it's a
> complication I'd avoid to our end users.

It's one I'd try to avoid too, and I have tried to avoid it. But it's
needed with the current mechanism, and it'd be needed with this mechanism
on some operating systems.

> In the current setup, frameworks are somewhat not self-contained, but at
> least applications are.  You can't drop a framework into your
> Frameworks/ folder, because it requires setup (setting up/unpacking the
> symlinks), but at least you can drop a .app into your Applications/
> folder and it works out of the box.  In your suggestion, frameworks
> still wouldn't work out of the box (you need to run a separate shell
> script btw, which is more complex than symlinks, because symlinks can be
> packaged in a tar file, so are usually very easy to package mechanically
> into package systems designed to hold files; moreover you might still
> need to setup links for headers), and applications would no longer work
> out of the box.

No links for headers. The header link, if needed, would remain inside the
framework. If all of the libraries were frameworks, */Library/Headers
would be empty.

And apps would still work out of the box.

[snip]

> That doesn't look that simpler/cleaner than a couple of symlinks in a
> standard library directory as we have now ... we remove a hack
> (symlinks), but we add another hack (all this -rpath, and -rpath
> updating machinery). :-)

We need symlink-updating machinery too, if we're going to stick with the
old methods.

Both mechanisms add complexity, and in the wrong place. I know that, you
know that...but we can't control the dynamic linker on every machine.

Symlinks are relatively simple, but they break encapsulation. They attach
the framework into the system in a way that kinda works, but can't ever be
elegant. They're also not portable beyond Unix-based systems.

Relying on the dynamic linker is more difficult, but potentially has the
greatest gain, in the case where the dynamic linker is replaced by one
that is customized to make frameworks work. If the dynamic linker isn't
suitable, we can get *almost* what we want using -rpath, and that "almost"
is closer to what we want than symlinks provide.

On some systems, -rpath will not work. On some systems, neither approach
will.
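On ELF systems with $ORIGIN support, the link stage could embed a search path relative to the executable itself, so the bundle stays self-contained when moved. This is an illustrative command line only, not actual -make output; the bundle layout, paths, and framework name are all assumptions:

```shell
# Illustrative link line (not runnable as-is): $ORIGIN expands at run
# time to the directory containing the executable, so the rpath follows
# the app bundle wherever it is moved or copied.
cc -o MyApp.app/MyApp main.o \
   -Wl,-rpath,'$ORIGIN/../Frameworks/Foo.framework/Versions/Current' \
   -L/usr/GNUstep/Local/Library/Frameworks/Foo.framework/Versions/Current \
   -lFoo
```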

[snip]

> > By running this one command at the shell, the admin has done all of
> > the work needed to integrate the framework into their system. It's not
> > much different conceptually from running ranlib on an archive. :)
> 
> Or from running ldconfig ? :-)
> 
> I prefer using a system-wide well-known utility (such as ldconfig) for
> setting up shared libs rather than running our own special setup scripts
> for our own special shared libs, which sort of makes it more custom and
> obscure.

It's already both custom and obscure. This makes it slightly more so, but
in exchange you gain a well-defined scheme that can be system-independent
(win32 has no symbolic links, but it does have an equivalent of -rpath,
and on systems with an improved dynamic linker you don't have to do
_anything_ special). On systems with symlinks but not -rpath, something
akin to the current mechanism must be used, but the fixup tool needs to
exist there too, to fix the symlinks. In most cases the fixup tool is
still necessary; the difference is only in what it does.

> > With the _current_ symlink scheme, you need to do somewhat the same
> > thing, but the lack of versioning bites you. You can't have more than
> > one API version extant, because the versions of the shlibs in
> > Libraries/$(ldir) are always 1.0.0.  Even without using -rpath or
> > similar, versioned shlibs for frameworks is a big win.
> 
> I agree better versioning could be good, and if you have suggestions to
> improve version support (without using -rpath or similar), I'd certainly
> appreciate.

Yes, I explained how to do this in my original post.

This should be done by placing the version name (by default, A) into the
shlib. So you have:

libFramework.so.A ->
            ../Frameworks/Framework.framework/Versions/A/libFramework.so.A
libFramework.so.B ->
            ../Frameworks/Framework.framework/Versions/B/libFramework.so.B
libFramework.so.2003A ->
    ../Frameworks/Framework.framework/Versions/2003A/libFramework.so.2003A
libFramework.so -> libFramework.so.2003A

On install, the current version is symlinked to libFramework.so. Programs
using the earlier version of the API still use the old shlib, while
newly-linked programs will use the current version.

While not a complete solution, this is still an improvement on the
current state of affairs, as long as you have symbolic links.

> > > Consider what happens when you distribute binaries: if you hardcode
> > > paths, all the frameworks should be exactly in the same location on
> > > the user machine as they were on the machine where the executable
> > > was built.
> >
> > -rpath does not hardcode paths -- it adds an explicit path to the
> > shlib search list...and that path is modifiable by anything that can
> > modify an ELF object (not difficult). Part of my proposal is to write
> > such a tool, or modify an existing tool (as I mentioned, there are a
> > few of them already out there) to suit our needs.
> 
> Ok - but having to run custom tools can be cumbersome and obscure for
> newcomers.

So is the idea of compiling stuff. There's another way to do it, of
course, using semiautomated ld.so.conf modifications (but that requires
the ability to edit ld.so.conf, and it requires that this file exists and
can be used). Instead of using -rpath when linking a library/framework,
you could add the location(s) of the framework's shlib(s) to ld.so.conf
(and/or LD_LIBRARY_PATH, but that's a hack and one that doesn't cross user
boundaries) when it is installed.
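The ld.so.conf variant could be sketched as below (hypothetical helper; the conf path is passed in, and in practice the edit needs root and must be followed by running ldconfig):

```shell
# add_ld_path DIR CONF : append DIR to the linker config file only if
# it is not already listed, so repeated installs stay idempotent.
# The caller refreshes the cache afterwards with ldconfig.
add_ld_path() {
  dir="$1" conf="$2"
  grep -qxF "$dir" "$conf" 2>/dev/null || echo "$dir" >> "$conf"
}
```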

Without system-level support, there's no way to create setuid executables
(like a password changer, or a graphical "su" like OpenSesame), because
the program won't be able to find its libraries.

[snip]

> > Except for GNUstep, it's exceedingly rare for a package to use either
> > LD_LIBRARY_PATH *or* ld.so.conf -- because unlike with GNUstep, the
> > libraries are almost always installed into the standard locations.
> >
> > ld.so.conf is off-limits to packagers, as is LD_LIBRARY_PATH to most
> > of them.
> 
> I'm not convinced - when I install an RPM, most of the times, they run
> ldconfig.  They don't usually edit ld.so.conf, I agree, because they put
> all libs in standard directories.

Running ldconfig is orthogonal to modifying the global shlib search path.
For example, in Debian, you cannot modify ld.so.conf, and you cannot
depend on an environment variable being set, and you cannot modify the
PATH. These are policy violations, and make the job of packaging GNUstep
stuff more difficult. Other systems have similar rules about what you
can't do.

> But GNUstep just needs to add a few lines to ld.so.conf when the first
> package (gnustep-make I suppose) is installed - all later packages need
> only do the same as any other binary package: just run ldconfig, because
> everything is installed into GNUstep's standard locations, which are in
> ld.so.conf anyway.

It's actually a bit more complex than that. Only the System path needs to
exist up front; the others should be created when and if a lib/framework
is installed there. ldconfig can take enough time to run without adding
unnecessary directories (says the man who has */Libraries in his
ld.so.conf).

[snip]

> Ok - I'm happy for you to try to convince me :-) and your proposal *is*
> interesting, still I'm personally not convinced yet that it's worth
> changing our current setup.

Thanks. ;)

-- 
| Jeff Teunissen  -=-  Pres., Dusk To Dawn Computing  -=-  deek @ d2dc.net
| GPG: 1024D/9840105A   7102 808A 7733 C2F3 097B  161B 9222 DAB8 9840 105A
| Core developer, The QuakeForge Project        http://www.quakeforge.net/
| Specializing in Debian GNU/Linux              http://www.d2dc.net/~deek/



