
mimblewimble team mailing list archive

Re: Elapsed-Time-Scaled Block Size

 

Wouldn't this create an incentive for a miner to include all transactions
with fees attached from the mempool and publish the block with a timestamp
far enough in the future to allow a block of that size?
On Wed, Nov 22, 2017 at 10:57 PM Philbert Wallace <
philbertwallace4@xxxxxxxxx> wrote:

> Thanks for the reply. It's an interesting thing to think about. Sending
> from mobile, so please excuse the inline response below:
>
>
> On Nov 22, 2017 4:24 PM, "Tomas Juočepis" <tomasjuocepis@xxxxxxxxx> wrote:
>
> Responding to Philbert:
>
> "To be on the safe side, it would probably need to be based not so much on
> the time_from_last_block, but more on a median timestamp of the last N
> blocks"
>
> The problem with using the median in this situation is that the guarantee
> that the cumulative size limit stays the same as it would be in the
> constant-block-size case goes out the window. It's also more complex to
> reason about and to figure out the limit in one's head. The big benefit of
> using the elapsed time since the last block is its simplicity.
> Also, can you elaborate on "to be on the safe side"? How would this be
> safer?
>
>
> On further thought I'm inclined to agree with you here. If it's not
> necessary to over-complicate it with a median-like mechanism, then let's
> not :-).
>
> That being said, since timestamps for blocks can be slightly out of order
> relative to block height, I imagine there is at least some risk that using
> just the time from last (tip of the chain) block might open some attack
> vector that we haven't thought about yet.
>
>
>
> "If I understand you correctly, we're basically saying that if hash power
> cut in half all of a sudden, then as soon as the protocol/nodes realized
> it, the allowable blocksize would be doubled."
>
> The way you put it doesn't sound right. Protocol/nodes do not look at the
> hash rate, notice it was cut in half, and react to that. The block size
> limit is just a linear function of the timestamp delta. A better way to
> look at it: if a block has not been found for 20 minutes, then the valid
> block size limit for the next block, at that moment, is double the nominal
> size.
> Also, note that block times vary by quite a bit due to the nature of
> probability, so this idea applies not only when hash rate varies, it
> applies at all times.
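>
> A rough sketch of that linear rule (Python; NOMINAL_TIME and NOMINAL_SIZE
> are hypothetical names for whatever nominal block interval and size limit
> the protocol picks):
>
>     NOMINAL_TIME = 600        # nominal block interval, in seconds
>     NOMINAL_SIZE = 1_000_000  # nominal block size limit, in bytes
>
>     def max_block_size(timestamp_delta_secs):
>         # The size limit scales linearly with the time elapsed since
>         # the previous block's timestamp.
>         return NOMINAL_SIZE * timestamp_delta_secs / NOMINAL_TIME
>
> A block found 20 minutes after the previous one (delta = 1200 s) then gets
> a limit of 2 * NOMINAL_SIZE, matching the example above.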
>
>
> Sorry, my example was wrong. We are basically envisioning the same thing
> here, so I have no issue with this point either.
>
> However, with timestamp variability now compounded by block size
> variability, would we run the risk of increasing the orphan rate, since
> the max allowable block size grows as the time from the last block grows?
>
>
>
> "The mechanism would need to be symmetric (more time_from_last_block -->
> larger max_blocksize, and less time_from_last_block --> smaller max
> blocksize)"
>
> Yes. Linear scaling is symmetric around the nominal size in that sense. So
> no issues here.
>
>
> "then when they [large chunk of miners] returned the max blocksize would
> shrink down to less than it was before they left."
>
> Can you expand on what you mean here? If hash power returns to the same
> level as before it left, the block size limit would be the same on average.
> This is because the proposed size function does not accumulate past the
> last found block. For example: hashing power drops 75%. Now the remaining
> miners get a block every 40 minutes on average, but each block still
> includes the same number of transactions. Let's say after 4 hours (~6
> blocks) hash power returns. Block rate is back to one block per 10 minutes.
> Nothing has changed; the block size limit is back to the nominal size on
> average.
> This essentially decouples tx (and therefore network) throughput from PoW
> variations. PoW variations then affect only the new coin creation rate (as
> usual).
> Maybe coin creation could also be pro-rated to block size, so that
> inflation is stabilized and not gameable either. I leave that up to others
> to ponder.
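>
> Purely as a sketch of that pro-rating idea (not part of the proposal as
> stated; NOMINAL_SUBSIDY is made up for illustration):
>
>     NOMINAL_TIME = 600    # seconds, as in the earlier sketch
>     NOMINAL_SUBSIDY = 50  # hypothetical coins per nominal block
>
>     def block_subsidy(timestamp_delta_secs):
>         # Pro-rate the coin reward by elapsed time, so new coins are
>         # minted at a constant rate per unit time rather than per block.
>         return NOMINAL_SUBSIDY * timestamp_delta_secs / NOMINAL_TIME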
>
> Right. If hash power returns to the same level as it was before, then it
> makes sense that the nominal value would be restored. Just to be clear
> though: regardless of hashpower movements, if a block is found, say, 1
> minute after the last block's timestamp, and the nominal targets are 10
> minutes and 1 MB, would the max acceptable size be 100 kB?
>
>
> "P.P.S. Can this mechanism be gamed by large miners who collude to "cycle"
> large amounts of hash power offline, let the max blocksize grow, then come
> back online to grab the larger block?"
>
> The disincentive is two-fold here. First, as you mentioned, they're losing
> out on block reward; second, they're risking someone else getting the
> larger block with all those tx fees, so there's no guarantee they'd get
> those bigger blocks consistently.
>
>
> Agreed.
>
>
>
> On Nov 21, 2017 2:30 PM, "Dustin Dettmer" <dustinpaystaxes@xxxxxxxxx>
> wrote:
>
>> It is a cool idea.
>>
>> On Tue, Nov 21, 2017 at 12:23 PM Philbert Wallace <
>> philbertwallace4@xxxxxxxxx> wrote:
>>
>>> Hi Tomas,
>>> I agree with Dustin that we don't want to have to trust the miners at
>>> the individual level (i.e., choosing timestamps) too much, but part of what
>>> these protocols do is create mechanisms where we can (hopefully) trust them
>>> in the aggregate. That being said, it's an interesting thought that you had
>>> and I've been thinking about something similar. There is a (I hope)
>>> particularly compelling argument presented at the end that I think lends at
>>> least some merit to the concept. So, let's work through it a little:
>>>
>>> To be on the safe side, it would probably need to be based not so much
>>> on the time_from_last_block, but more on a median timestamp of the last N
>>> blocks, where N would need to be less than the number of blocks in a
>>> difficulty adjustment period (so this whole thing would happen entirely
>>> within the normal difficulty cycle and not affect the difficulty calcs);
>>> but we'll just call it time_from_last_block for now (a rough sketch of
>>> such a median calculation follows after this paragraph). If I understand
>>> you correctly, we're basically saying that if hash power were cut in half
>>> all of a sudden, then as soon as the protocol/nodes realized it, the
>>> allowable blocksize would be doubled. And if that same hashpower
>>> returned, then as soon as the protocol realized it, the allowable max
>>> size would be halved.
>>> The mechanism would need to be symmetric (more time_from_last_block -->
>>> larger max_blocksize, and less time_from_last_block --> smaller max
>>> blocksize) as otherwise you run the risk of growing the total size of the
>>> chain in an unpredictable fashion.
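>>>
>>> That sketch (Python; illustrative only, loosely modeled on Bitcoin's
>>> median-time-past, with N = 11 picked arbitrarily and well below any
>>> difficulty window):
>>>
>>>     import statistics
>>>
>>>     N = 11  # hypothetical; must stay below the difficulty window
>>>
>>>     def effective_elapsed(block_timestamps, new_timestamp):
>>>         # Use the median of the last N block timestamps in place of
>>>         # the tip timestamp, so no single miner's clock sets the
>>>         # baseline for the size limit.
>>>         median_past = statistics.median(block_timestamps[-N:])
>>>         return new_timestamp - median_past
>>>
>>> Though, as noted earlier in the thread, the simpler tip-delta rule may
>>> well be preferable.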
>>>
>>> One of the positive side effects (and I'm sure there are some negative
>>> ones too) is that whenever a large chunk of hashpower unexpectedly leaves,
>>> this mechanism which basically targets a "constant maximum transactional
>>> throughput rate" would disproportionately reward those miners that stick
>>> around and slog through the resulting glut of transactions, and
>>> proportionally punish all miners (including those that stuck around) when
>>> the departed hashpower returns. As an
>>> example: in the recent BTC / BCH situation, where hashpower left and BTC's
>>> blocktimes became slower than normal, there was one pool, Slush's pool,
>>> that seemed to stick around BTC regardless. The mechanism we're discussing
>>> here would have allowed Slush to mine larger blocks (and thereby earn more
>>> fees for itself) during the time that the hashpower was absent.
>>>
>>> Another interesting feature (bug?) of a mechanism like this is that it
>>> would actually have punished all miners for temporarily defecting because,
>>> since Slush was able to grow the allowable blocksize while they were gone,
>>> then when they returned the max blocksize would shrink down to less than it
>>> was before they left. Those miners returning to BTC would be causing the
>>> time-between-blocks to decrease, and therefore, under this proposed
>>> mechanism, the max blocksize would be decreasing. Perhaps such a punitive
>>> arrangement would have removed/lessened the incentive that any of those
>>> miners had to temporarily leave in the first place? While that may be
>>> desirable, it's hard to know without further analysis whether this
>>> mechanism would lessen the propensity for defection.
>>>
>>> Now for the reason I do like the mechanism, presuming it could be
>>> implemented safely. If we follow this logic all the way down and imagine a
>>> situation where there is catastrophic loss of hashing power (governments
>>> shut down all known large-scale mining operations), I think it's pretty
>>> much assumed that a PoW network would do a hard-fork difficulty adjustment
>>> (or in the worst case change PoW algorithms all together) in an effort to
>>> broaden the number of people/nodes that can carry the torch forward. Let's
>>> assume the network would try a difficulty adjustment first and try to coax
>>> the current non-mining nodes into becoming miners in order to "save the
>>> chain." With a mechanism like we're discussing here in place, the desired
>>> effect of the hard-fork difficulty adjustment (namely to open mining up to
>>> a larger group of nodes) I think might happen more naturally and with less
>>> downtime/chaos (and at-minimum without a hard fork). This is because the
>>> more time that passes, the more the mempool would grow, and the larger the
>>> acceptable max block size would become. In the extreme case, where only 1%
>>> of the
>>> original hashpower remained, then, sure, even though your node would still
>>> need to meet the difficulty requirement, the temptation to build a block
>>> that is 100x bigger than normal (and earn all those fees!), so as to keep
>>> the network's max throughput constant, could become large enough that many
>>> previously non-mining nodes, which otherwise wouldn't have had a shot
>>> against the "big guys", would begin mining. Similarly, the big guys would
>>> be in a mad scramble to get back online and nab that giant goldmine of a
>>> large block!
>>>
>>> Lastly, the big winner here would (hopefully) be the users, because while
>>> the miners contemplated above are playing games, the honest miners are able
>>> to hold down the fort and keep the max transactional throughput rate
>>> steady.
>>>
>>> While I'm not certain this is the outcome/mechanism we want for grin's
>>> network (especially since grin is very focused on keeping things simple
>>> right now), it at least is an interesting direction to consider.
>>>
>>> P.S. I'm also not certain that this mechanism doesn't pervert miner
>>> incentives such that they actively try to reduce each other's network
>>> connectivity. Then again that perversion exists in bitcoin as-is anyway,
>>> and so far it seems to be ok.
>>>
>>> P.P.S. Can this mechanism be gamed by large miners who collude to
>>> "cycle" large amounts of hash power offline, let the max blocksize grow,
>>> then come back online to grab the larger block? Perhaps, but since they
>>> would only be getting one block subsidy for the block (the larger reward
>>> would be due to fees), it seems they would be better off by not doing so
>>> and instead just staying online as much as possible (which is what we want
>>> in the first place!).
>>>
>>>
>>> On Tue, Nov 21, 2017 at 1:00 AM, Dustin Dettmer <
>>> dustinpaystaxes@xxxxxxxxx> wrote:
>>>
>>>> I wouldn't put so much trust in miners timestamps. I'm much more of a
>>>> fan of building systems where we don't have to trust them.
>>>>
>>>> On Tue, Nov 21, 2017 at 12:53 AM Tomas Juočepis <
>>>> tomasjuocepis@xxxxxxxxx> wrote:
>>>>
>>>>> Hello, grinners,
>>>>>
>>>>> what if the block size limit of each newly found block were linearly
>>>>> proportional to the time elapsed since the last block? Stated another way,
>>>>> nodes would consider a new block valid only if the timestamp delta (new
>>>>> block timestamp - last block timestamp) multiplied by some size/time ratio
>>>>> parameter is greater than the size of the new block. It seems that
>>>>> something like this could produce a more constant transaction throughput
>>>>> in cases of quickly varying applied PoW rate ("hash rate") without
>>>>> affecting difficulty adjustment.
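>>>>>
>>>>> As one concrete reading of that rule (a Python sketch; SIZE_PER_TIME is
>>>>> a hypothetical name for the size/time ratio parameter, set here so that
>>>>> the nominal 10-minute interval yields a nominal 1 MB block):
>>>>>
>>>>>     SIZE_PER_TIME = 1_000_000 / 600  # bytes allowed per elapsed second
>>>>>
>>>>>     def block_valid(new_timestamp, last_timestamp, block_size):
>>>>>         # Valid only if the timestamp delta, multiplied by the
>>>>>         # size/time parameter, exceeds the new block's size.
>>>>>         return (new_timestamp - last_timestamp) * SIZE_PER_TIME > block_size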
>>>>>
>>>>> For example, consider the following block timestamp deltas (in
>>>>> minutes):
>>>>> 10, 20, 30, 3, 5, 7, 5, 5, 5, 10 (total 100, average 10 (nominal))
>>>>> With block sizes being constant, we get 10 blocks worth of size.
>>>>> With block sizes linearly proportional to the time delta, and with the
>>>>> slope coefficient set so that the nominal (target) time produces the
>>>>> nominal block size, we'd get the same cumulative size (and therefore the
>>>>> same cumulative consumed network bandwidth), but the transaction rate
>>>>> would be more even/consistent.
>>>>>
>>>>> Let's say we can fit 1000 txs in a block designed to be mined every 10
>>>>> minutes. With the previous example, with constant size we'd have the
>>>>> following:
>>>>> t=10, 1000 txs confirmed
>>>>> t=30, 2000 txs confirmed
>>>>> t=60, 3000 txs confirmed
>>>>> t=63, 4000 txs confirmed
>>>>> t=68, 5000 txs confirmed
>>>>> t=75, 6000 txs confirmed
>>>>> t=80, 7000 txs confirmed
>>>>> t=85, 8000 txs confirmed
>>>>> t=90, 9000 txs confirmed
>>>>> t=100, 10000 txs confirmed
>>>>> With time-delta-scaled size, we'd have:
>>>>> t=10, 1000 txs
>>>>> t=30, 3000 txs
>>>>> t=60, 6000 txs
>>>>> t=63, 6300 txs
>>>>> t=68, 6800 txs
>>>>> t=75, 7500 txs
>>>>> t=80, 8000 txs
>>>>> t=85, 8500 txs
>>>>> t=90, 9000 txs
>>>>> t=100, 10000 txs
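>>>>>
>>>>> A few lines to reproduce both schedules (Python; the 100 txs/minute rate
>>>>> follows from 1000 txs per nominal 10-minute block):
>>>>>
>>>>>     deltas = [10, 20, 30, 3, 5, 7, 5, 5, 5, 10]  # minutes between blocks
>>>>>
>>>>>     t = constant = scaled = 0
>>>>>     for d in deltas:
>>>>>         t += d
>>>>>         constant += 1000   # fixed-size block: 1000 txs each
>>>>>         scaled += d * 100  # delta-scaled block: 100 txs per minute
>>>>>         print(f"t={t}: constant={constant}, scaled={scaled}")
>>>>>
>>>>> Both schedules reach 10000 txs at t=100; only the pacing in between
>>>>> differs.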
>>>>>
>>>>> Any thoughts? The main issue I see is gaming timestamps, but can't
>>>>> that be solved by nodes not propagating blocks until their timestamps
>>>>> are no longer dated in the future? Miners could still take the risk and
>>>>> add a timestamp a few minutes into the future, but their block might be
>>>>> orphaned if another miner finds a block with a valid timestamp that
>>>>> propagates through the network first (before the future-dated block can
>>>>> propagate).
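>>>>>
>>>>> That hold-back rule might look roughly like this (a sketch only; the
>>>>> ALLOWANCE for clock drift is an assumption, not part of the proposal):
>>>>>
>>>>>     import time
>>>>>
>>>>>     ALLOWANCE = 0  # seconds of tolerated future drift; 0 = strict
>>>>>
>>>>>     def should_relay(block_timestamp):
>>>>>         # Hold back (do not propagate) any block dated in the future.
>>>>>         return block_timestamp <= time.time() + ALLOWANCE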
