
Re: [Taler] Synchronization and backup


From: Christian Grothoff
Subject: Re: [Taler] Synchronization and backup
Date: Fri, 16 Feb 2018 05:59:45 +0100
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.6.0

On 02/16/2018 03:56 AM, Florian Dold wrote:
> While of course more anonymity for the user is always desirable, nothing
> is gained if the system is not usable enough in the first place.  So
> I'm gonna argue from a practical perspective now.

Generally agreed.

> By the way, my discussion with Jeff (including his suggestion about some
> bigger master secret that everything is generated from) was triggered by
> my remark that not all browsers have "flush to disk" support for their
> database operations, since they are generally optimized for performance,
> and the local database is typically just a cache of some online service.
>  Firefox has a "readwriteflush" mode for transactions, but this is a
> non-standard extension to the IndexedDB standard ...

Oh, interesting. I didn't realize that. Well, in that case Jeff's
proposal makes more sense.  However, we should probably have a list of
'master secrets' that we rotate, so that we don't accumulate risk.
Basically:
 0) persist master secrets,
 1) use secret #1,
 2) semi-persist coins,
 3) if needed, use secret #2/#3/#4,
 4) once we're pretty sure (2) has completed (backup/sync to remote),
    drop master secret #1 and generate a fresh one for future use,
 5) semi-persist coins from #2/#3/#4,
 6) again, once the persist (backup/sync) has completed, drop master
    secrets #2/#3/#4, then generate and persist (backup/sync) new secrets.
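
A minimal TypeScript sketch of such a rotation, assuming hypothetical
persistence hooks (none of the names below are actual wallet-core APIs):

  // Keep a small pool of master secrets; only use a secret after it has
  // been persisted, and retire it (replacing it with a fresh one) once
  // everything derived from it has been durably backed up.
  type MasterSecret = { id: number; seed: Uint8Array };

  class MasterSecretPool {
    private secrets: MasterSecret[] = [];
    private nextId = 1;

    constructor(private persist: (s: MasterSecret) => Promise<void>,
                private erase: (id: number) => Promise<void>) {}

    // 0) generate and persist a secret before it is ever used in 1)-3)
    async fresh(): Promise<MasterSecret> {
      const s = { id: this.nextId++,
                  seed: crypto.getRandomValues(new Uint8Array(32)) };
      await this.persist(s);
      this.secrets.push(s);
      return s;
    }

    // 4)/6) once coins derived from `id` are durably backed up, drop the
    // secret everywhere and pre-generate a replacement for future use.
    async retire(id: number): Promise<void> {
      await this.erase(id);
      this.secrets = this.secrets.filter((s) => s.id !== id);
      await this.fresh();
    }
  }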

> But back to the discussion, in general I think the prioritization *must*
> be as follows:
> 1. Make it as seamless and "non-weird" for users as possible, by default
> 2. Have an easy way to get better anonymity, typically by trading off
> some convenience (even if it's just one more one-time setup step).

Well, in my view activating sync/backup is the "extra" step.  By
default, on Tor, the user has to enable this manually (and some help text
explains the risks and how to set up your own hidden service for it). On
other browsers, the default is to ask the user to activate sync on first
withdrawal and to suggest a provider.

> Of course these two points can be a bit at odds, since the majority
> sticking to (1) might disadvantage people wanting (2), just like if only
> people who want to do something illegal were using Tor.

I don't see that being an issue if we do it as I propose above.

> For normal operation of the Taler wallet, syncing pretty much instantly
> is IMHO unavoidable for usability.  At least browsers are starting to
> have APIs [1] that allow us to judge when it's appropriate to send cover
> traffic.  If we send cover traffic just like that, people who are on
> metered connections will be very unhappy (especially on mobile devices,
> where dynamically switching between metered/non-metered is very common).

I'm confused.  Specifically, I was not at all planning on syncing
instantly; at best with a brief delay: after payment, the user wants the
browser to render the page he just paid for. CPU and bandwidth directly
impact the user experience at this time. So at the earliest, I'd sync
_after_ the fulfillment page has finished loading.

But even that would be a very aggressive setting; my proposal in Git was
more like every few hours, or after "bigger" purchases (or after wallet
startup if it was not synced recently).
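
To illustrate, a toy TypeScript policy along these lines (the four-hour
interval and all field names are invented for this sketch):

  // Hypothetical sync-scheduling policy: never sync before the
  // fulfillment page has loaded; sync right away after a "bigger"
  // purchase; otherwise sync at most every few hours.
  interface SyncState {
    lastSyncMs: number;          // timestamp of last successful sync
    pendingBigPurchase: boolean; // a purchase above some amount threshold
    fulfillmentPageLoaded: boolean;
  }

  const SYNC_INTERVAL_MS = 4 * 60 * 60 * 1000; // "every few hours"

  function shouldSyncNow(s: SyncState, nowMs: number): boolean {
    if (!s.fulfillmentPageLoaded) {
      return false; // never compete with the page the user just paid for
    }
    if (s.pendingBigPurchase) {
      return true;
    }
    return nowMs - s.lastSyncMs > SYNC_INTERVAL_MS;
  }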


Furthermore, for me the entire idea of cover traffic only enters the
picture with Tor. For "normal" wallets cover traffic doesn't really help
--- the network sees too much already anyway --- so there we should
focus purely on usability and performance.  So maybe my proposal for
padding in api.git/api-sync.git is in fact excessive for the non-Tor case.

> I wonder if we can have cover traffic / cover read+write operations to
> the backup without receiving/transmitting the whole wallet, and without
> going full SMC.  Could we have some scheme where we append smaller
> encrypted blocks and eventually, less frequently compact them?

In principle, yes. But I would leave that for (sync) v2. Let's try to
get a simple but highly usable version to work first, and then focus on
the Tor (privacy) side or the incremental (performance) side.  But to
take a speculative peek: I could imagine using an HTTP method like
'PATCH' to upload an incremental block. If we require all increments to
be of fixed size (1 kb? 4 kb?), the sync service can just append them to
the original data, and with the right encoding (i.e. ECDH+HMAC) we can
later easily scan the download from the end for such fixed-size
patch-blocks; everything before them must then be the original "big"
upload. That would spare the sync service from having to record (and
communicate) boundaries -- and we likely want fixed-size incremental
uploads anyway to minimize information leakage.
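
A rough TypeScript sketch of the client side of this idea (the 4 KiB
block size and the isPatchBlock() check are assumptions, not a specified
format):

  // The sync service only ever appends PATCH bodies to the original
  // upload; the client recovers the increments by scanning the download
  // from the end in fixed-size steps.
  const PATCH_BLOCK_SIZE = 4096; // fixed size also limits what is leaked

  // isPatchBlock() would verify the block's authentication tag under the
  // key derived via ECDH, as hinted above; here it is just a placeholder.
  declare function isPatchBlock(block: Uint8Array): boolean;

  function splitBackup(download: Uint8Array):
      { base: Uint8Array; patches: Uint8Array[] } {
    const patches: Uint8Array[] = [];
    let end = download.length;
    while (end >= PATCH_BLOCK_SIZE) {
      const candidate = download.subarray(end - PATCH_BLOCK_SIZE, end);
      if (!isPatchBlock(candidate)) {
        break; // everything before this point is the original "big" upload
      }
      patches.unshift(candidate);
      end -= PATCH_BLOCK_SIZE;
    }
    return { base: download.subarray(0, end), patches };
  }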

> I'd add that it is probably more acceptable to (optionally?) add a tiny
> random delay to spending than it is to have delayed backups or
> operations that failed to back up / sync completely.  When I spend with
> a merchant, I'd expect that we first mark the coins as spent in the
> backup, wait a bit (randomly!) and then do the spend with the merchant.
> This is probably not feasible though unless we have the ability to
> append smaller updates instead of pushing the whole wallet, like I
> described in the paragraph before.

I disagree with this. I would want the transaction to go through first.
First of all, that's where latency matters: talking to some backup/sync
server is a secondary feature for the UX, while the interaction you
notice all the time is the one with the merchant. There, I care about
every millisecond (including, as you know, optimistic signing).  The
backup is much less critical.

One related issue here is that the *merchant* may fail on the purchase.
Then having that in the backup is just a mess.  It is much cleaner to
first see how the purchase goes, possibly do a refresh, and then commit
the final result asynchronously, when neither time nor bandwidth is of
the essence.

Notice that even if we fail to secure the purchase data, we will get the
deposit permission back from the *exchange* if we refresh the original
coin, so no money can be lost here.


The only critical bit where we really ought to persist strongly (i.e.
with backup) first is the withdrawal of coins: here, if we withdraw but
didn't commit the planchets, the customer really loses money. So there
we must be sure that the planchet secrets are committed to disk (or
ideally to backup). For that, Jeff's suggestion of a master secret makes
sense to me, especially given your information about common IndexedDB
limitations. That master secret should then be (1) backed up, (2) used
to create planchets, (3) coins withdrawn and _also_ backed up, (4) the
master secret deleted locally, and (5) the master secret deleted in the
backup.
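
A minimal sketch of that ordering in TypeScript, with all helpers
assumed/hypothetical rather than taken from the wallet code:

  // The master secret must be backed up before planchets are derived
  // from it, and is only deleted once the withdrawn coins themselves
  // have been backed up.
  async function withdrawDurably(deps: {
    backupSecret: (seed: Uint8Array) => Promise<void>;
    createPlanchets: (seed: Uint8Array) => Promise<unknown[]>;
    withdrawCoins: (planchets: unknown[]) => Promise<unknown[]>;
    backupCoins: (coins: unknown[]) => Promise<void>;
    deleteSecretLocally: (seed: Uint8Array) => Promise<void>;
    deleteSecretInBackup: (seed: Uint8Array) => Promise<void>;
  }): Promise<void> {
    const seed = crypto.getRandomValues(new Uint8Array(32));
    await deps.backupSecret(seed);                       // (1)
    const planchets = await deps.createPlanchets(seed);  // (2)
    const coins = await deps.withdrawCoins(planchets);   // (3) withdraw ...
    await deps.backupCoins(coins);                       // ... and back up
    await deps.deleteSecretLocally(seed);                // (4)
    await deps.deleteSecretInBackup(seed);               // (5)
  }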

However, if one has to pay with Taler for the sync service, this will
not work for the very first withdrawal, as we need to have some coins to
pay for the backup service itself. But that would seem to be an
acceptable limitation.

> Now for (2) I would not go so far as to have people run their own backup
> service.  They could use a normal backup service, but access it via a
> state-less active relay that generates cover-traffic for them.  This
> hides from the backup service when they're online.  Such a service would
> be easier to manage yourself (who'd want to back up their backups?).

I don't think running such an active relay is easier to set up than a
well-implemented backup service. Also, the backup service itself doesn't
really need a backup: you have a copy on each of your devices (for most
people: >=2), plus the backup device itself.  That's _plenty_. How much
money would you have to hold in your wallet to require additional
backups of a wallet database that already has at least 2-3 independent
copies?

The additional complexity (and cost) of an active relay really just
rules this out; the gain is way too small.



