
Re: [lmi] Generating pages with tables in the new PDF generation code


From: Vadim Zeitlin
Subject: Re: [lmi] Generating pages with tables in the new PDF generation code
Date: Sun, 27 Aug 2017 19:43:46 +0200

On Sun, 27 Aug 2017 16:13:53 +0000 Greg Chicares <address@hidden> wrote:

GC> OTOH, there's really nothing objectionable about adding another
GC> 'config_*.hpp' file. We already have two to support other non-free
GC> compilers that were useful in the past, and even some gnu projects
GC> contain files that exist solely to support non-free compilers.
GC> 
GC> To take another example, the 'wx_checks.cpp' changes might be
GC> something I could get comfortable with, and using a precompiled
GC> wx library might be an important benefit. But we probably should
GC> spin that off as an independent task.

 Yes, indeed, these changes could be integrated, but not at the same time
as the PDF ones. Of course, it would be perfect if I could avoid having a
separate branch just for MSVC compilation, and maybe this is achievable now
with modern MSVC versions (which are required anyhow, as only they have
C++11 support) because they no longer suffer from the problems that
required the workarounds you weren't enthusiastic about accepting. But this
is not the most urgent task, so I guess it should be done after the PDF
branch merge. It could also be done before, if you're willing to spend time
on it, but, again, just not at the same time.

GC> > GC> I did this:
GC> > GC> 
GC> > GC> mkdir --parents /opt/lmi/pdf/
GC> > GC> cd /opt/lmi/pdf/
GC> > GC> git clone https://github.com/vadz/lmi.git --branch direct-pdf-gen --single-branch
GC> > GC> cd lmi
GC> > 
GC> >  I know you're allergic to branches, but this is really, really not the
GC> > optimal way to do this.
...
GC> Whether it's optimal depends on the goal. If the goal is simply
GC> to get this merged, then an automated merge command is optimal.

 I think it's optimal in any case. Reviewing changes after the merge is
better than reviewing them before, and automation takes care of trivial
things (like the already mentioned example of modifying a file in master),
allowing you to concentrate on the real changes.
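 For what it's worth, there is also a middle ground (just a sketch; the
branch name is the one from this thread): let git perform the merge
automatically, but stop before committing, so that the combined result can
be reviewed as ordinary staged changes before anything lands in master:

        $ # Merge the PDF branch but stop before creating the merge commit.
        $ git merge --no-commit --no-ff direct-pdf-gen
        $ # Review everything the merge would introduce, file by file.
        $ git diff --cached --stat
        $ git diff --cached -- group_quote_pdf_gen_wx.cpp
        $ # Then either "git commit", or "git merge --abort" to undo it all.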

GC> My goal is to conserve stability while evolving, by reviewing
GC> and understanding each change before committing it. The actual
GC> act of committing is trivially mechanical, and might take one
GC> percent of the total time, so it doesn't need to be optimized.
GC> What I do want to optimize is the thinking process, and in my
GC> experience the best way is to break a large whole into more
GC> tractable pieces that I can integrate one at a time with
GC> comprehensive testing.

 For me the tractable pieces are the individual commits, but it's hardly
feasible to replay all of them: there are 88 of them right now, and there
will be many more before the merge. Do you really want to go down to this
level of granularity? Besides being more work for you, it would mean
reviewing the initial version of the code, which used external templates:
that code is just not there any more, and I don't see how reviewing it
could be useful for you (although keeping it in history is still useful for
me, and is definitely simpler than whitewashing history to get rid of it).

 Perhaps it would be better to break just the final version of the changes
into pieces? The good news is that absolutely nothing prevents you from
doing this after letting "git merge" take care of all the trivial stuff.
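 Concretely (again only a sketch, using the names from this thread): after
the merge, ORIG_HEAD still points at the pre-merge tip of master, so the
merged changes can be reviewed one piece at a time after the fact:

        $ git merge direct-pdf-gen
        $ # ORIG_HEAD is the tip of master as it was before the merge.
        $ git diff ORIG_HEAD HEAD --stat
        $ # Review the merged result one file at a time.
        $ git diff ORIG_HEAD HEAD -- group_quote_pdf_gen_wx.cpp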

GC> Eradicating history is not a goal: it's an anti-goal, which we should
GC> avoid if we can do so without undue inconvenience. For example, if you
GC> can write a simple command that would import all of the new files at
GC> once into lmi HEAD, preserving their history,

 This is possible, but not very simple, because it would mean splitting
each commit in my branch into two parts: one that affects the existing
files, and another that affects only the new ones. Worse, it would be quite
counter-productive because it would break the commits' atomicity. I really
don't think we should do such a thing.

GC> OTOH, 64 hunks changed in 'group_quote_pdf_gen_wx.cpp', and we can't
GC> integrate that into HEAD without careful review and testing, lest we
GC> destabilize the production system.

 BTW, here is how I would review these changes:

        $ git log -p ..direct-pdf-gen -- group_quote_pdf_gen_wx.cpp

This command shows all the commits in the branch affecting this file,
together with the changes each commit made to this file (only), thanks to
the -p (--patch) option. Notice that this is not something you can do
without git: if you just diff the two versions of this file, you get a diff
with 185 insertions and 371 deletions, i.e. a huge number of changes that
are not easy to understand. The individual commits shown by the command
above are much simpler.
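 To see both sides of that comparison directly (a sketch; the names are the
ones used above):

        $ # One cumulative patch: only the aggregate counts are digestible.
        $ git diff --shortstat master..direct-pdf-gen -- group_quote_pdf_gen_wx.cpp
        $ # Versus the number of small, self-contained commits touching it.
        $ git rev-list --count master..direct-pdf-gen -- group_quote_pdf_gen_wx.cpp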

GC> And it would take considerable
GC> analysis for me to get comfortable with the change to 'ledger.hpp':

 The change to the .hpp file is quite simple, but you probably mean the
replacement of ledger_xml_io.cpp with ledger_evaluator.cpp. If so, this
change is simpler than it looks and, again, "git diff" should help with
reviewing it, as it should realize that the file was just renamed.
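 For example (a sketch; -M enables rename detection explicitly, although
recent git versions detect renames by default):

        $ # Shown as a rename plus content changes, not a delete-and-add.
        $ git diff -M --summary master..direct-pdf-gen
        $ # Follow the file's history in the branch across the rename.
        $ git log --follow --oneline direct-pdf-gen -- ledger_evaluator.cpp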

GC> At any rate, the two strategies I see are:
GC> 
GC> (1) Let all PDF development continue to its conclusion in your
GC> personal repository, then test that thoroughly, and finally merge
GC> it all at once; or
GC> 
GC> (2) Incrementally adopt changes from your personal repository into
GC> lmi HEAD, proceeding one step at a time until no step remains.
GC> 
GC> Of those two, isn't the second clearly preferable?

 Yes, the second one is preferable, but it would require more of your time,
because you might have to review changes that I later replace or undo. If
you're ready to take that risk, let's do it like this, but please let me
create a new branch, based on master, instead of the current one, to
separate out the unrelated changes.

 But I'm still not sure how exactly you would like to proceed. For example,
you can't merge the ledger.hpp change right now because it would, of
course, break the existing FOP-using code. You could integrate the changes
to group_quote_pdf_gen_wx.cpp, and I can make a separate branch with just
them. But what about all the rest?

GC> If we adopt strategy (2) above, then there's a lot of code to
GC> integrate, and it's best for me to start now. Or do you have a
GC> a different strategy to recommend that doesn't require changing
GC> to a bazaar philosophy?

 BTW, the use of branches is required by the bazaar development model, but
does not by itself lead to it.

 Anyhow, please give me some time to think about starting to integrate my
changes; I hadn't even started doing it yet because, in my mind, it was
still too early for this. But I wholeheartedly agree with starting sooner
than usual, I just didn't know that you did too. I'll try to propose
something really mergeable a.s.a.p.

 Regards,
VZ

