Re: Ghostscript/GhostPDL 9.22 Release Candidate 1

From: David Kastrup
Subject: Re: Ghostscript/GhostPDL 9.22 Release Candidate 1
Date: Tue, 19 Sep 2017 13:42:15 +0200
User-agent: Gnus/5.13 (Gnus v5.13) Emacs/26.0.50 (gnu/linux)

Ken Sharp <address@hidden> writes:

> At 11:33 19/09/2017 +0200, David Kastrup wrote:
>>The question is what the complaint should be, namely what LilyPond does
>>wrong.  Producing large comprehensive manuals using TeX including lots
>>of example images generated using the same fonts?
> Ah, you need to be careful talking about 'images' because in PDF (and
> PostScript) images are bitmaps. I don't think that's what you mean
> here.
>>To me that sounds like a stock typesetting task with mainstream tools
>>that Ghostscript should be suitable for.
> Whoa, Ghostscript is a PostScript interpreter; it isn't intended as a
> typesetting application. All it's really intended for is producing
> rendered bitmaps from PostScript.
> It does have the pdfwrite and ps2write devices, but the intention
> there is to produce output which is visually the same as the
> input. There's all kinds of stuff you can put in an input PDF file
> which we don't preserve into the output with these devices.

OK, maybe this wasn't made explicit enough: Ghostscript is very much
involved in our document generation here.

LilyPond creates PostScript output files using vector music fonts (in
the vast majority of cases, including the manuals, mainly its own font
sets shipped with it, but also some free text fonts).  Those are
converted into PDF files using Ghostscript (I think via ps2pdf).  Tens
of thousands of those PDF files are usually included in one Texinfo
document generated with a TeX variant (we started out with PDFTeX, then
switched to XeTeX, and I am not sure whether we are now using LuaTeX;
the changes were mostly driven by getting the best PDF output for the
manuals).
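Sketched as a shell fragment, the per-snippet step looks roughly like
this.  All file names are invented for illustration, and the commands
are only printed (dry run) rather than executed, since lilypond and
Ghostscript may not be installed:

```shell
# Hedged sketch of LilyPond's snippet-to-PDF step; "snippet.ly" and the
# derived file names are hypothetical.  We only print each command
# (dry run) instead of executing it.
run() { echo "+ $*"; }

# LilyPond emits PostScript using its embedded vector music fonts ...
run lilypond --ps snippet.ly
# ... which Ghostscript's ps2pdf wrapper converts to a small PDF that
# the TeX run later includes in the manual.
run ps2pdf snippet.ps snippet.pdf
```

The point of the sketch is only the shape of the pipeline: one small
PDF per music example, each carrying its own embedded font subsets.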

With regard to adapting to changes, the initial PostScript generation is
under our own control.  ps2pdf is, of course, Ghostscript.  Masamichi
Hosoda has contributed a number of patches to Texinfo in the course of
our changing TeX engines, and texinfo.tex is included in LilyPond's
distribution, so we can make changes in the Texinfo source code pretty
fast once a problem is understood.  Of course, it would be desirable if
the same document workflow worked with LaTeX rather than Texinfo, but we
have much less direct influence on LaTeX.  And the TeX binaries are
slow-changing and separately distributed, so we cannot really prescribe
a whole lot there.

So the mechanisms mostly out of our own control are Ghostscript, in its
ps2pdf facility, and the various TeX engines used when including lots of
ps2pdf-generated PDF files in a main document.  For this use case we
want a process that avoids excessive font duplication.  The process so
far has involved an additional Ghostscript run removing most of the
duplicates from the TeX-generated PDF (someone please correct me if I
got this wrong).
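As far as I understand it, that deduplicating run is essentially a
pdfwrite round-trip.  A hedged dry-run sketch (file names invented, the
command printed rather than executed):

```shell
# Hedged sketch: re-running the TeX-generated PDF through Ghostscript's
# pdfwrite device merges many duplicate embedded font subsets.
# Dry run only; "manual.pdf" is a hypothetical file name.
run() { echo "+ $*"; }

run gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite \
    -sOutputFile=manual-dedup.pdf manual.pdf
```

Nothing about the page content changes in such a pass; the win is purely
in the file size once the repeated font subsets are consolidated.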

So Ghostscript is involved in multiple steps here, and at least one
Unicode-capable TeX variant in another.  This combination is certainly
relevant to document production in the Free Software world well beyond
LilyPond alone.

So I don't think that LilyPond is alone in the particular requirements
served by the now-removed feature (or misfeature): documents including
external files will, at least in the context of Free Software, rely on
tools and mechanisms similar to those LilyPond uses.

>>But obviously you think there must be something wrong with the way we
>>are generating and including a large number of images into one
>>document.  Would you be willing to help us figure out a different way
>>in which we could make this work?
> Certainly! I did spend some time on a bug thread trying to help with
> moving away from using glyphshow. I'm happy to spend what little time
> I have explaining stuff that I know about.
> However, while that covers PDF and PostScript, it doesn't cover TeX.

We don't really have a way to forgo Texinfo for our printed manuals.
Given the comparative importance of TeX for document preparation,
however, I think it would be good to figure out how to keep at least one
viable way of making this work open, and to figure out a migration path
for the involved tools toward how you would optimally want things to be.

I don't think that TeX can (or should) preserve object IDs when
including external PDF files, so figuring out some other reasonably
robust notion of identity for fonts would seem important.
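One way to make such a font identity visible is to list the embedded
fonts: poppler's pdffonts prints each embedded font's subset-prefixed
name.  A dry-run sketch (the file name is invented, and the command is
only printed, since poppler may not be installed):

```shell
# Hedged sketch: pdffonts lists names like "ABCDEF+Emmentaler-20"
# (Emmentaler is LilyPond's music font); the same base name recurring
# under several six-letter subset prefixes is exactly the duplication
# discussed above.  Dry run; "manual.pdf" is a hypothetical file name.
run() { echo "+ $*"; }

run pdffonts manual.pdf
```

Stripping the six-letter subset prefix and counting repeated base names
would give a rough measure of how much duplication a given pass leaves
behind.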

> As to there being an easy solution, I doubt there is one. Other than
> using a tool better suited to the task of producing documentation. But
> that would almost certainly mean moving away from open source toolsets
> which I imagine wouldn't be acceptable.

That would include moving away from Ghostscript, wouldn't it?  But then
we would not be having this discussion in the first place.

> Possibly not producing multiple intermediate files or not producing
> them as PDF would be an answer. From my uninformed outsider's
> perspective it sounds like you are making trouble for yourselves in
> this fashion, but that's probably a mis-interpretation.

We have switched among quite a number of different schemes, formats,
workflows, and TeX engines over the years, all of them with different
tradeoffs in the trouble they cause.

> PDF was never intended as a means of transferring, or 'containerising'
> content; it's not trivial (or even possible in general) to extract
> content from, or simplify, PDF files.

And yet I seem to remember Adobe has a specification for how to write
PDF intended for embedding, don't they?

>>I don't see how explaining the use case, for which the availability
>>of the option makes much more of a difference than you thought it
>>would, amounts to "berating" you.
> I read the second email which simply said 'if you do this then you get
> a large file' and then listed a bunch of URLs as 'you've got to,
> because look'. I certainly didn't (and still don't) feel it explained
> much of the 'use case' other than 'if you do this then it's a
> problem'. To which my answer is still 'then don't do that'.
> I also felt, to be honest, that the follow up was unnecessary and
> didn't add anything. Hence 'berating', possibly a poor choice of word.
> Knut's email was, to my mind, much more explanatory.

Masamichi Hosoda-san's mastery of English is not the best; he basically
popped out of nowhere a few years ago and saved our bacon when nobody
else was able to make our tools work again after they had gone foul.  He
has excelled at coaxing our tools into working properly together, and
this has involved more experimentation than one would really have
expected: a lot of "standard" tools have surprisingly rough edges when
one tries to put them together smoothly.

So his mails tend to focus very much on technical details he can state
something about and may sometimes fail to address detailed questions as
they were intended.

I would not put this into the "berating" class, just as I would not say
that a compiler which keeps repeating the same error message is
"berating" me.
> I really think it's time to draw this to a conclusion, as the
> discussion isn't really going anywhere. I have repeatedly said I'll
> discuss it internally. I will also say that I'm now more inclined to
> restore the behavior, though with some big warnings in the
> documentation.

That would certainly be welcome, and it would certainly also be welcome
if someone reviewed our document preparation workflow and helped
straighten it out in a robust manner: by improving our ways of
generating PostScript and using Ghostscript, by improving Ghostscript
itself, or by fleshing out the requirements a TeX engine used for PDF
generation has to fulfill in order to be a seamless component in a
document workflow like ours, so that our problems can eventually be
solved in upstream TeX development.


David Kastrup
