mimblewimble team mailing list archive


Message #00145
Re: [ignopeverell/grin] Difficulty adjustment algorithm (#62)
Of course you're not overstepping! :)
While looking at various difficulty adjustment algos, what I've found is that there's very little research in the area, even empirical. I'm not sure there's an ideal algo either, given that you're trying to make a decision based on imperfect and insufficient information. But I hope this particular corner of our field sees more attention in the future.
 Igno
 Original Message 
Subject: Re: [ignopeverell/grin] Difficulty adjustment algorithm (#62)
Local Time: June 19, 2017 2:34 PM
UTC Time: June 19, 2017 9:34 PM
From: notifications@xxxxxxxxxx
To: ignopeverell/grin <grin@xxxxxxxxxxxxxxxxxx>
Cc: Ignotus Peverell <igno.peverell@xxxxxxxxxxxxxx>, Mention <mention@xxxxxxxxxxxxxxxxxx>
If I am overstepping my bounds, please let me know. I come from the
aerospace industry. I've meant to write a detailed whitepaper on my idea
for difficulty adjustment but I haven't. I'll outline it briefly here. I don't
expect anyone to want to use this idea, but I would love to implement it.
The algorithm deduces a statistical profile / normal distribution. Given
this normal distribution, if you have an unusually fast block, or slow
block, you can tell the "probability" that it was actually a hash power
increase, or just a statistically abnormal block. These Gaussian
distributions can be used to compute the current difficulty and the
rate of change of difficulty (a 2-state Kalman filter). They are also
hard to fool, because if hash power has a large step-function response, the
filter can respond to that (in both directions) using statistically
significant metrics.
Kalman filters are an entire field in and of themselves. Anyone wanting to
learn a bit about them can check out this wonderful online book (Chapter 4
is a good place to start):
https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/04-One-Dimensional-Kalman-Filters.ipynb
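To make the 2-state idea concrete, here is a minimal sketch in Python. The state is (log hash rate, its rate of change), and each block gives a noisy measurement of the first component. The class name, noise parameters, and measurement model are all illustrative assumptions, not a tuned or proposed implementation:

```python
class DifficultyKalman:
    """Minimal 2-state Kalman filter sketch.

    State: (log_hashrate, rate_of_change). Transition model is
    constant-velocity: F = [[1, dt], [0, 1]]. Measurement observes
    log_hashrate only: H = [1, 0]. All noise values are assumptions.
    """

    def __init__(self, process_var=1e-4, measure_var=1.0):
        self.x = [0.0, 0.0]                    # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q = process_var                   # process noise (diagonal)
        self.r = measure_var                   # measurement noise

    def step(self, observed_log_hashrate, dt=1.0):
        # Predict: hashrate drifts along its estimated rate of change.
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        # Covariance predict: P' = F P F^T + Q.
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with the new block's measurement.
        y = observed_log_hashrate - x0         # innovation
        s = p00 + self.r                       # innovation variance
        k0, k1 = p00 / s, p10 / s              # Kalman gain
        self.x = [x0 + k0 * y, x1 + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x[0]
```

An unusually fast block moves the estimate only a little (small gain when the filter is confident), while a sustained step change in hash power keeps producing large innovations until both the level and the rate-of-change states catch up.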
Thank you all for the work you are doing. I am excited about the future.
On Mon, Jun 19, 2017 at 1:29 PM, Edgar Cloggs <notifications@xxxxxxxxxx>
wrote:
> Hi Igno et al.,
>
> How bad of an idea is it to utilize several difficulty adjustment
> algorithms? It would give us a chance to combine the outputs of each and
> come to more complete agreement on an increase/decrease in difficulty.
>
> Adjustment Algorithms (*A*, *B*, *C*, *D*) modified to take in *n*
> (number of last blocks to evaluate) as a parameter:
> *A* = Dark Gravity Wave
> *B* = Bitcoin's Standard Difficulty Algorithm
> *C* = Digishield/Dogeshield
> *D* = Our own Difficulty Adjustment Algorithm
>
> Every 23 blocks, calculate the difficulty:
>
> 1. Produce a difficulty Increase/Decrease taking into account last *23*
> Blocks
>  *e* = (*A*(23) + *B*(23) + *C*(23) + *D*(23))/4
> 2. Produce a difficulty Increase/Decrease taking into account last
> *230* Blocks
>  *f* = (*A*(230) + *B*(230) + *C*(230) + *D*(230))/4
> 3. Produce a difficulty Increase/Decrease taking into account last
> *2300* Blocks
>  *g* = (*A*(2300) + *B*(2300) + *C*(2300) + *D*(2300))/4
> 4. Produce a difficulty Increase/Decrease taking into account last
> *23000* Blocks
>  *h* = (*A*(23000) + *B*(23000) + *C*(23000) + *D*(23000))/4
> 5. *i* = (*e*+*f*+*g*+*h*)/4
> 6. If *i* is a decrease greater than 33%, *i* = 33% adjustment downward
> 7. If *i* is an increase greater than 16%, *i* = 16% adjustment upward
>
> My thinking is that if we can produce several difficulty increase/decrease
> averages over several different block ranges with multiple algorithms, it
> would allow us to pick a better value for the difficulty change. In
> the above scenario we produce difficulty adjustments over ~4 different
> timescales; at each timescale we calculate an average
> increase/decrease using 4 different algorithms and return it for later
> processing. After each timescale is calculated and averaged, we
> take the averages derived from the ranges *23, 230, 2300, and
> 23000* and use them to calculate a better difficulty change *i*:
> if *i* is a decrease greater than 33%, *i* = a 33% decrease; if *i* is an
> increase greater than 16%, *i* = a 16% increase.
>
> The main downside I see, other than this being more code to maintain, is
> that the difficulty readjustment calculation would be orders of magnitude
> slower than having a single difficulty adjustment algorithm produce a
> difficulty over only a single range.
>
> Cheers,
> Edgar
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/ignopeverell/grin/issues/62#issuecomment-309544551>,
> or mute the thread
> <https://github.com/notifications/unsubscribe-auth/AAMvocjpi4zeB7qmuGSKlsQIX3bzfks5sFswSgaJpZM4N7vVX>
> .
>
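For what it's worth, the averaging-and-clamping scheme quoted above reduces to a few lines. In this sketch the four algorithm callables are placeholder stubs standing in for Dark Gravity Wave, Bitcoin's algorithm, Digishield, and a custom one; only the window sizes and the 33%/16% clamps come from the proposal:

```python
def ensemble_adjustment(algos, windows, clamp_up=0.16, clamp_down=0.33):
    """Sketch of the multi-algorithm, multi-window averaging proposal.

    `algos` is a list of callables; each takes a window size n and returns
    a fractional difficulty change (e.g. +0.10 for a 10% increase). The
    callables are placeholders, not real DGW/Digishield implementations.
    """
    # Per window: average the algorithms' outputs (e, f, g, h above).
    per_window = [sum(a(n) for a in algos) / len(algos) for n in windows]
    # Overall average across windows (i), then clamp per adjustment:
    # at most a 33% decrease or a 16% increase.
    i = sum(per_window) / len(per_window)
    return max(-clamp_down, min(clamp_up, i))
```

For example, four stubs each suggesting a 50% increase over windows of 23, 230, 2300, and 23000 blocks would average to +0.5 and be clamped to a 16% increase.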
—
You are receiving this because you were mentioned.
Reply to this email directly, [view it on GitHub](https://github.com/ignopeverell/grin/issues/62#issuecomment-309580430), or [mute the thread](https://github.com/notifications/unsubscribe-auth/AV3YyZQ708jobpJCi9QxLQmv9I_9up9ks5sFumDgaJpZM4N7vVX).