
mimblewimble team mailing list archive

Re: Compact blocks


The reason the kernel needs the fee is a little convoluted, but it seems required nonetheless.

The block reward must have some sort of delay before it can be claimed, so miners can't game chain forks and spend their rewards right away. For the reward this is easy to enforce, but for fees less so: a miner could zero out all the fees and claim them as a distinct (non-coinbase-style) output, without anyone knowing, and the block would still sum correctly. The only way (so far) to protect against that sort of miner fee modification is to have each transaction sign the fee (instead of the empty string) and include it in the kernel.
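That argument can be made concrete with a toy model. In the sketch below, plain modular arithmetic stands in for Pedersen commitments on the curve, and an HMAC keyed by the excess secret stands in for the kernel's Schnorr signature (a real verifier only needs the public excess); all numbers and names are illustrative, not Grin's actual code.

```python
# Toy sketch: why the fee must be committed to by the kernel signature.
import hashlib, hmac

P = 2**61 - 1         # toy modulus, no cryptographic meaning
G, H = 7, 11          # stand-ins for the two curve generators

def commit(value, blind):
    # stand-in for a Pedersen commitment value*H + blind*G
    return (value * H + blind * G) % P

def sums_to_excess(inputs, outputs, fee, excess):
    # Block balance check: sum(outputs) + fee*H - sum(inputs) == excess*G
    lhs = (sum(outputs) + fee * H - sum(inputs)) % P
    return lhs == (excess * G) % P

def sign_fee(excess_secret, fee):
    # stand-in for the kernel's Schnorr signature over the fee
    return hmac.new(str(excess_secret).encode(), str(fee).encode(),
                    hashlib.sha256).hexdigest()

# Honest transaction: 10 in, 7 + 2 out, fee 1.
r_in, r1, r2 = 5000, 3000, 1500
inputs  = [commit(10, r_in)]
outputs = [commit(7, r1), commit(2, r2)]
fee = 1
excess = (r1 + r2 - r_in) % P
kernel_sig = sign_fee(excess, fee)
assert sums_to_excess(inputs, outputs, fee, excess)

# A miner zeroes the fee and claims it as an extra output (blind 0):
tampered_outputs = outputs + [commit(fee, 0)]
tampered_fee = 0
# The block still sums correctly...
assert sums_to_excess(inputs, tampered_outputs, tampered_fee, excess)
# ...but the kernel signature no longer covers the stated fee, and the
# miner cannot re-sign without knowing the excess secret.
assert sign_fee(excess, tampered_fee) != kernel_sig
```

The point of the sketch is the last two asserts: the sum check alone cannot tell an honest block from a fee-stripped one, while a fee committed to by the kernel signature can.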

- Igno

-------- Original Message --------
Subject: Re: [Mimblewimble] Compact blocks
Local Time: March 1, 2017 2:37 PM
UTC Time: March 1, 2017 10:37 PM
From: john.tromp@xxxxxxxxx
To: Ignotus Peverell <igno.peverell@xxxxxxxxxxxxxx>, mimblewimble@xxxxxxxxxxxxxxxxxxx

dear Igno,

> In this discussion, I'm only interested in the size of a block when sent
> across the wire during propagation. It's desirable in this context to have
> small block sizes (in bytes) so that blocks can traverse the network quickly
> without causing too much additional traffic. As most nodes will generally
> receive the data included in a transaction twice (as the transaction is
> propagated and in a block), some obvious optimizations are possible.
> As a reminder, a Grin block is composed of the following:
> A header, including the proof of work, some Merkle-ish roots, a reference to
> the previous block hash, etc. The full header is always required.
> A set of inputs as a direct link to previous outputs.
> A set of outputs, containing a commitment and a range proof.
> Transaction kernels, containing a signature, a public key and the explicit
> fee.

Why does the kernel need the fee? Is that to be hashed into the
Schnorr signature?
And if so, why is that necessary?

> The largest piece of data by a good margin right now is the range proof (~5
> KB each). Therefore we can introduce a first optimization:
> Block gets propagated sans range proof.
> Upon reception, a node checks whether it has all the range proofs for all
> outputs included in the block.
> If any missing, the node requests them from the peer that sent the block.
> Validation can proceed, skipping the range proofs that were already known
> (assuming they were already validated by the transaction pool).

Excellent idea.
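The elision scheme quoted above can be sketched as follows; every name here (Node, the peer callback, the proof check) is hypothetical rather than Grin's actual API, and a dummy predicate stands in for real range-proof verification.

```python
# Sketch: blocks propagate without range proofs; the receiver fetches only
# the proofs it has not already validated via its transaction pool.

def validate_proof(commitment, proof):
    # Stand-in for real range-proof verification.
    return proof == b"proof-for-" + commitment

class Node:
    def __init__(self):
        # commitment -> range proof, validated when the transaction
        # entered the pool
        self.known_proofs = {}

    def receive_block(self, outputs, request_from_peer):
        """`outputs` is a list of output commitments; the block arrived
        with its range proofs stripped."""
        missing = [c for c in outputs if c not in self.known_proofs]
        if missing:
            # Ask the peer that sent the block for just the absent proofs.
            fetched = request_from_peer(missing)
            for c in missing:
                if not validate_proof(c, fetched[c]):
                    return False      # reject block: bad range proof
                self.known_proofs[c] = fetched[c]
        # Proofs seen earlier were validated on pool entry and are skipped.
        return True
```

A node that saw the transaction during propagation already holds its proofs, so in the common case the `missing` list is empty and no extra round trip occurs.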

> The second largest piece of data is composed of the transaction kernels
> (~100 bytes each). We can't fully avoid sending those as they're not
> "anchored" on anything, like the range proof is anchored on an output.
> However we could send a shortened version of their hash. In this scheme,
> when sent across the wire, a block would only contain the first 16 bytes (or
> maybe even 12) of the hash of the whole kernel. Upon reception, an exchange
> similar to the one for range proofs would occur. Note that this requires the
> pool to maintain a mapping from those truncated hashes to the actual kernels.

I feel less enthusiastic about this. The reduction in block size is
rather modest,
and somewhat negated by the need for extra querying traffic, which will be
exacerbated by clients submitting transactions directly to mining pool(s) for
improved aggregation, rather than through the peer-to-peer network.
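For concreteness, the truncated-hash mapping described in the quoted proposal could look like the sketch below; the hash function and all names are illustrative stand-ins, with only the 16-byte prefix taken from the proposal.

```python
# Sketch: mapping truncated kernel hashes back to full kernels.
import hashlib

def short_id(kernel_bytes, n=16):
    # First n bytes of the hash of the whole serialized kernel
    # (the hash choice here is illustrative).
    return hashlib.sha256(kernel_bytes).digest()[:n]

class KernelPool:
    def __init__(self):
        self.by_short_id = {}   # truncated hash -> full kernel

    def add(self, kernel_bytes):
        self.by_short_id[short_id(kernel_bytes)] = kernel_bytes

    def resolve(self, short_ids, request_from_peer):
        """Map a compact block's truncated hashes to full kernels,
        fetching any the pool has not seen from the sending peer."""
        missing = [s for s in short_ids if s not in self.by_short_id]
        if missing:
            for k in request_from_peer(missing):
                self.by_short_id[short_id(k)] = k
        return [self.by_short_id[s] for s in short_ids]
```

With 16-byte (128-bit) prefixes, accidental collisions are negligible for any realistic pool size; the cost is the extra round trip whenever a kernel has not been seen, which is the querying traffic discussed above.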

> Beyond this, optimizations become non-trivial. We do *not* want to introduce
> some transaction-level hash as transactions are disaggregated in blocks by
> design. We could introduce something like inverted bloom filters for inputs
> or outputs, but those items are themselves rather small (32 to 40 bytes
> each), so the space saving doesn't seem worth the added complexity at this
> point.