Re: [Taler] [CFRG] RSA blind signatures
From: Jeff Burdges
Subject: Re: [Taler] [CFRG] RSA blind signatures
Date: Thu, 25 Feb 2021 07:44:54 +0100
> On 25 Feb 2021, at 00:25, Christopher Wood <caw@heapingbits.net> wrote:
>> If I recall, RSA-PSS depends upon signer randomness for its security
>> arguments. As such, one should ideally not base an RSA blind signature
>> off PSS but instead specify a full domain hash (FDH).
>
> Is this a useful distinction? Blind RSA in general requires randomness for it
> to be useful (as you carefully point out above).
That’s randomness supplied by the token holder. I’m talking about randomness
held by the issuer.
Bellare and Rogaway suggested PSS over FDH because PSS provides a tighter
security argument than FDH, thanks to the signer providing randomness, i.e. for
purely provable-security reasons.
https://web.cs.ucdavis.edu/~rogaway/papers/exact.pdf
> In any case, the rationale for PSS was two-fold:
>
> 1) It's widely supported in libraries. (To my knowledge FDH is not widely
> supported... yet.)
We verify RSA blind signatures roughly three times in a typical token
deployment: once by the user after token issuance, as part of the signing
algorithm; once by the merchant when tokens are spent; and once by the
exchange when tokens are finally redeemed. As the signer code must change
anyway, only the merchant code justifies gymnastics to support existing
libraries.
> 2) One can basically replicate FDH with a zero-length salt, even though some
> APIs make it difficult to do so.
Is a zero-length salt secure? It’s likely "less" secure than an FDH’s
rejection sampling, but do we know whether a security proof exists, or whether
it de facto becomes roughly PKCS#1 v1.5-like?
We cannot abuse the salt to perform rejection sampling, since presumably the
three components in PSS’ output, maskedDB, H, and the 0xbc trailer, all avoid
the high bit, which gets set like 1/3rd of the time.
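To make the point concrete, here is a minimal EMSA-PSS encoding sketch (RFC
8017 §9.1.1 with SHA-256; illustrative only, not a vetted implementation).
Since emBits = modBits - 1 for RSA, the leftmost bit of maskedDB is cleared
unconditionally, so the encoded message never covers the full range [0, n) and
cannot be rejection-sampled via the salt:

```python
import hashlib

def mgf1(seed: bytes, length: int, hash_fn=hashlib.sha256) -> bytes:
    """MGF1 mask generation function from RFC 8017."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hash_fn(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def emsa_pss_encode(message: bytes, em_bits: int, salt: bytes) -> bytes:
    """EMSA-PSS encoding (RFC 8017, section 9.1.1), SHA-256."""
    h_len = 32
    em_len = (em_bits + 7) // 8
    if em_len < h_len + len(salt) + 2:
        raise ValueError("encoding error")
    m_hash = hashlib.sha256(message).digest()
    m_prime = b"\x00" * 8 + m_hash + salt
    h = hashlib.sha256(m_prime).digest()
    ps = b"\x00" * (em_len - len(salt) - h_len - 2)
    db = ps + b"\x01" + salt
    db_mask = mgf1(h, em_len - h_len - 1)
    masked_db = bytes(a ^ b for a, b in zip(db, db_mask))
    # Clear the leftmost 8*em_len - em_bits bits: for RSA, em_bits is
    # modBits - 1, so the output always sits below 2^(modBits - 1) <= n.
    bits_to_clear = 8 * em_len - em_bits
    masked_db = bytes([masked_db[0] & (0xFF >> bits_to_clear)]) + masked_db[1:]
    return masked_db + h + b"\xbc"

# For a 2048-bit modulus, em_bits = 2047: the high bit is always zero,
# and the trailer byte is fixed at 0xbc.
em = emsa_pss_encode(b"token", 2047, salt=b"")
assert len(em) == 256 and em[0] >> 7 == 0 and em[-1] == 0xBC
```

A zero-length salt, as discussed above, is just the `salt=b""` case here.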
Jeff
p.s. An FDH consumes the input message using a hash function with extensible
output, like SHAKE, BLAKE2X, ad hoc hash-plus-stream-cipher constructions, or
STROBE https://strobe.sourceforge.io, and then produces first the low-order
log_2 n + 1 - k output bits L, where k = 128 or 256, and then produces the
high k output bits H using rejection sampling, i.e. loop until (H || L) < n.
It’s probably fine if the whole output were rejection sampled all at once;
that is slightly slower, but the performance loss sounds minor given the
arithmetic.