Re: [GNUnet-developers] Why old-school C?

From: Jeff Burdges
Subject: Re: [GNUnet-developers] Why old-school C?
Date: Wed, 15 Jul 2015 11:21:45 +0200

I'm a huge fan of Rust, and plan on using it some around GNUnet, but..

It's important to remember that Rust remains immature because they're
attempting to do hard stuff well.  In particular, the community has not
yet settled on the "Rust way" to handle key material : 

Rust's libsodium bindings automatically call sodium_memzero, but they do
not use libsodium's secure allocators.  Also, Rust has not yet stabilized
its allocator APIs, so projects trying to do that remain messy.  Example : 

It's tricky to audit Rust code that employs cryptography until this
gets sorted out.  At the same time, one should not shy away from writing
Rust code that employs cryptography, but you should expect to interact
with the Rust language community rather closely, and the Rust code will
require ongoing maintenance.  It's more work, not less.

On Thu, 2015-07-09 at 22:49 +0800, Andrew Cann wrote:
>   * side channel attacks
>     Some things, like the number of CPU cycles it takes to execute this
>     decrypt() function, could in principle be modeled inside a programming
>     language. I don't know if any of the dependently typed assembly languages
>     let you do this.

We're not implementing new crypto primitives in GNUnet, but I'll respond
anyway : 

In principle maybe, but in practice the languages I know about compile
via LLVM, Rust included, and LLVM has no plans to support this : 
Actually, that whole thread is interesting.  

On Rust specifically, see slides 116-117 of this talk :
Also, there is a project to produce constant-time code using Rust by
avoiding LLVM, but it's quite immature.

At present, crypto primitives are commonly written in assembler for these
reasons.
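The worry above is easy to illustrate with the usual branchless
comparison idiom (a sketch; the name `ct_eq` is mine) :

```rust
/// Comparison in the usual "constant-time" style: accumulate the XOR of
/// every byte pair instead of returning early at the first mismatch.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false;
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret", b"secret"));
    assert!(!ct_eq(b"secret", b"secreT"));
}
```

Nothing in the language stops LLVM from compiling this back into
early-exit branches, which is exactly why the primitives end up in
assembler.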

>   * scalability/performance
>     What if you could guarantee that your service will process any message
>     of n bytes in O(n log(n)) time and memory. Or that a network of n available
>     peers connected in such-and-such a topology can route any message in less
>     than m hops. There are programming languages that could let you express
>     these kinds of constraints and check them at compile time.

>   * disclosure via protocols, meta data leakage
>     I'm not sure exactly what you have in mind, but if you want to prevent
>     leakage there are type theories that let you enforce things like "the
>     value in this variable at time t cannot affect the output of this
>     function at any future time". 

This is like when people talk about proving the Four-Color Theorem or the
Classification of Finite Simple Groups with computer-assisted theorem
provers.  Any real analysis of scalability or metadata leakage is far
beyond where foreseeable computer-assisted provers help much.


