> ---------- Forwarded message ----------
> Date: Wed, 25 Apr 2018 18:30:38 +0000
> Subject: Re: Slurs do not work with Larsen articulations
> It is quite apparent that this wasn't written for random access and I
> feel that websites are much more effective and user friendly if they
> As we try to make clear, the Learning Manual is intentionally not built for random access. The Notation Reference *is* built for random access.
Let's see, "Random access" is not how people access the docs. Rather, we arrive at a specific entry point as the result of a search, which may bring us to *any* page.
Regardless of what you call it, is the NR designed to accommodate this behavior?
Breadcrumb and TOC links abound. But how do you know which ones to follow? A better way of putting it is that the NR is designed to *not* have a narrative, and to *not* duplicate content.
But that is not a design that benefits users. Rather, it benefits the maintainers. Which is an essential consideration. Nay, an existential consideration.
But let's not pretend that this method of organization benefits users. If we were able to include the context necessary to understand what is being explained, users could easily skim content they already know to get to the content they don't. But the reverse is generally exceedingly difficult--a user cannot intuit the necessary context to understand the details on the page, and that information is often scattered elsewhere across several links. Trying to piece it together can feel like the proverbial rabbit hole.
This is not to cast aspersions at the generous souls who have created the documentation. Rather, until we start with a clear assessment of what we have, including its limitations, we cannot move forward. So let's start by being clear and accurate about what the user experience is actually like.
Claiming that it is "designed for random access" is not really accurate. Simply having links to other pages of documentation doesn't really solve most users' problems. In my experience, it is arguable whether following the link trail is more effective than simply refining my query in the first place.
Another aspect of search-based access is that your success depends on the terminology you use, which may have many near-synonyms, on top of LilyPond-specific terminology.
Especially when your issue involves a combination of things, which one do you search for? E.g., if I am trying to fix a collision between a slur and an accidental, do I look up "slur" or "accidental"? Or perhaps I really need to look up "tweak" or "override", or "vertical spacing", "collisions", or "priority"?
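To make the point concrete, here is one plausible way to resolve such a collision--a sketch only, and that is exactly the problem: nothing about the words "slur" or "accidental" leads a searcher to the `positions` property or to the `\once \override` mechanism.

```lilypond
\version "2.24.0"
\relative c'' {
  % One possible fix: manually raise the slur's endpoints so it clears
  % the accidental. Whether the right tool here is \override, \tweak,
  % or \shape is precisely what the docs make hard to discover.
  \once \override Slur.positions = #'(3 . 3)
  c4( d ees f)
}
```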
I have some ideas about how to supplement this, but they are only worth discussing if people agree that there are limitations to the current documentation approach. The attitude that the design we have works well for users is not a great starting point.
Develop a cross-reference index or word cloud that demonstrates the relationships between synonyms, or among related structures and concepts. Likewise, between music terminology and LilyPond-specific terminology. These relationships could be determined by crowdsourcing (some variant of "did you find this page useful?" feedback), by examining server logs for the links people follow, or by examining tracking data (search-term entry points and terminal pages). Throw some ML at the problem to create better glossaries and indexes.
Add the ability to search the LSR for snippets and have them inserted into the page (and to remember which ones you want to see).
Since each contextual link in the NR implies some kind of relationship between the entry and the linked content, each such relationship could be described (even if generically) and displayed at the user's request (a "more/less" link, or an on-hover tooltip). This would help explain the relationships among entities and, based on that, help the user decide whether to follow the link.
> Experience in the past showed that there is a minimum set of information that you need to have before you can make enough sense of LilyPond to have the random access part be meaningful. If you don't have the basic concepts, you don't know how to even ask the right questions.
This is true. But it is only part of the problem. I've been using LilyPond on a daily basis for over 5 years, and I regularly have difficulty finding the information I need. At this point, I probably only need to look something up once per month. But, whenever I do, it is a crapshoot whether it will take 5 minutes or 50, or yield no result. The information is usually there, but finding it remains elusive.
> Another specific documentation design decision is that we do not describe the examples in the text preceding the example. You must look at the example input code and the example output to really understand what is going on. This is a conscious choice to keep the documentation shorter than it otherwise would be.
Brevity of documentation is not an asset if it fails to convey the necessary concepts.
Again, this is not a blanket condemnation of the docs. This pattern works well if the examples are good, in which case they can be self-documenting.
But many of our examples are more like unit tests than user test cases. Which is to say, they demonstrate the required functionality from a programmatic point of view, but lack the musical context in which they would be used. They may be so stripped down (absent staff groups, staves, scores, books, etc.) that incorporating the example into our actual usage may be difficult. Providing multiple examples, especially if something can be defined in multiple places and ways, should be viewed as a good thing, not as clutter. Potential clutter can be reduced by UI improvements like show/hide, or other controls.
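As a hypothetical illustration of the gap: a doc example typically shows only a bare fragment like `c4 d e f`, while the user is working inside something closer to this, and must figure out for themselves where the fragment (and any overrides it carries) actually belongs:

```lilypond
\version "2.24.0"
% Sketch of the fuller context a real user works in, which stripped-down
% examples omit: a book, a score, a staff group, multiple staves.
\book {
  \score {
    \new StaffGroup <<
      \new Staff { \relative c'' { c4 d e f } }
      \new Staff { \clef bass c4 d e f }
    >>
    \layout { }
  }
}
```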
In any case, where examples are not clear and better examples cannot be created, providing additional coherent explanation should not be seen as fundamentally a bad thing.
Likewise with the IR. Just because it is code-generated doesn't mean that we can't include examples or explanations. It just means that the examples or additional explanation would need to live in the code comments, as, for example, Javadoc has been doing since last century.
Is it a lot of work? Of course. But that is different than saying that it can't be done, or shouldn't be done because it would not be useful.
> I don't see us doing away with the expectation of beginners reading the Learning Manual, but as Kieren said, if you'd like to contribute links that you think would help your understanding, we'd be happy to accept your contributions.
Many folks here have been extremely responsive to constructive criticism around specific suggestions for specific examples and documentation pages. In that sense, this community does an exemplary job.
But that is different than saying that the only criticisms we should encourage are those that amount to a concise improvement.
I think we need to be honest about how useful our docs are, and aren't.
True, changes to systemic stuff will remain pie-in-the-sky unless someone is willing to do the work.
But we won't ever get to a coherent suggestion of what to build if we keep shooting down honest and accurate criticisms of the limitations of our docs. On the contrary, once a good idea has been developed and vetted, we will much more likely find people willing to implement it.
Thanks for all your contributions.
David Elaine Alt
415 . 341 .4954 "Confusion is highly underrated"
Producer ~ Composer ~ Instrumentalist