Anatomy of a Garbage Fire

./bitcoin-cli getchaintips | grep height | wc -l
./bitcoin-cli getchaintips | grep height | wc -l

Guess which one is a chain that features a difficulty adjustment algorithm that is at least in part independent of the blocks before it?

Recently, as Bitcoin Cash’s price remained low and volatile hashrate – driven mostly by unstable profitability – caused a stir, there was renewed debate about the suitability of its Difficulty Adjustment Algorithm (DAA) – currently an average of the difficulty target over the last 144 blocks – in curbing such volatility. Debates over whether to weight later blocks more heavily for a more responsive algorithm have come up; Bobtail, an interactive protocol, has been mentioned, kindling discussions about whether such an implementation changes some fundamental assumptions, and whether it’s worth it; the Chinese community has proposed shortening the block interval, which trades off some other things (mostly software overhaul investment and smart-contract migration complexity) for more granularity and smarter responses.
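For concreteness, the 144-block averaging scheme under debate can be sketched roughly like this – a hedged toy version, not the real consensus code (the actual DAA also uses median-of-three endpoints and clamping; all names here are illustrative):

```python
TARGET_SPACING = 600  # ideal seconds per block

def retarget(total_work_last_144, time_span):
    """Toy 144-block averaging retarget: scale the average per-block
    work by how fast the last 144 blocks actually arrived."""
    expected = 144 * TARGET_SPACING      # a day's worth of blocks
    avg_work = total_work_last_144 / 144
    # blocks came in faster than expected -> raise difficulty, and vice versa
    return avg_work * expected / time_span
```

If the last 144 blocks took exactly a day (86400 seconds), difficulty stays put; if they took half a day, it doubles – hence the complaint that it reacts only slowly, one full window at a time.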

Such debates, while tiring for some, are mostly a good thing – they are all ideas that have at least been examined extensively, and carry different sets of tradeoffs worth educating ourselves about. Recently, however, we came across a proposal that makes a radical claim:

We describe a class of retargeting algorithms for which the sole inter-block time input is that of the block being searched for.


What if, imagine this, Satoshi and the rest of us – literally everyone, including the guys worried about Selfish Mining – have been getting it all wrong? What if mining can be an activity that is history-free?

It’s natural to be skeptical. But we’re all wannabe scientists at heart, so let’s take a closer look anyway.

The article starts with a long description of the woes of the current algorithm’s unresponsiveness – which is all fine; no algorithm is perfect, and there is probably a better set of tradeoffs to be had under any given circumstances. Right afterwards, it gives us a very important definition:

Progress-freedom is distinct from being memoryless. We define progress-freedom to be the property that a miner’s chance of finding a block at any moment is independent of how much work he has already done since the appearance of block n – 1.

Hmm, sounds quite reasonable – although we’re not sure how exactly this is going to be done without also being memoryless. Note that the two properties are only meaningfully different if the difficulty requirement changes on the fly, before the next block is even found! Perhaps there is some global, Avalanche- or weak-blocks-style coordination that allows us to have a sense of time as we get farther and farther away from the previous block? That’s how Bobtail does it, after all – that might be interesting. Let’s look closer!
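To make the distinction concrete, here’s a toy sketch in our own notation, not the paper’s: with a fixed target, the instantaneous chance of finding a block – the hazard rate – is constant, i.e. memoryless. If the target instead loosens with wall-clock time since the parent block, mining is still progress-free (your own wasted hashes change nothing) but no longer memoryless:

```python
def hazard_fixed(base_rate, t):
    """Fixed target: chance of a block per unit time never changes
    -- memoryless AND progress-free."""
    return base_rate

def hazard_realtime(base_rate, t, k=1.1, spacing=600):
    """Assumed exponential real-time form: the target loosens as the
    elapsed time t since the parent grows, so the hazard rate rises.
    Still progress-free (independent of your past work), but the rate
    now depends on t, so it is not memoryless."""
    return base_rate * k ** (t / spacing)
```

The `k` and `spacing` parameters are our own placeholders for whatever responsiveness knobs such a scheme would expose.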

By “time t” we refer to idealized global wall-clock time, and we assume for the moment that timestamps are always recorded as the global time of the respective event. The accuracy of this assumption depends on the incentive framework we create. This is discussed in Section 5.

Wait, what?

For people sophisticated enough to work on this problem, we gotta assume that they know how unreliable timestamps are, right?

Okay, there is supposedly some incentive framework that will tackle this in Section 5. The next sections spend a lot of time describing a sophisticated algorithm based on a local, totally-synchronized clock – “with each passing second, the instantaneous block production rate increases”. We anxiously scroll down, nodding in amazement at the author’s superb ability to tighten T to what we truly want. A cute device that separates actual difficulty from what we pretend the difficulty is for targeting purposes is described in painstaking detail. Let’s overlook that for a second.
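As we read it, the “cute device” works roughly like this (a hedged sketch in our own names, not the paper’s code): a block’s acceptance threshold loosens with the claimed elapsed time, while the chainwork it contributes is derived from the parent’s baseline, so siblings tie regardless of their timestamps:

```python
def instantaneous_target(baseline_target, elapsed, k=1.1, spacing=600):
    """The longer the claimed wait since the parent, the easier the
    block is allowed to be (assumed exponential form)."""
    return baseline_target * k ** (elapsed / spacing)

def chainwork_credit(parent_chainwork, baseline_target):
    """Credit depends only on the parent's baseline target, so every
    child of the same parent adds identical work, whatever timestamp
    it claims."""
    return parent_chainwork + 1.0 / baseline_target
```

Note the asymmetry: validity gets easier with claimed time, but credited work does not shrink – which matters later when the comments argue about rejecting “easier” private tips.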

…until we reach the all-important Section 5.

We doubt anyway that miners will be eager to accept a peer’s “block from the future” that was easier to find than allowed right now.
How well the committed timestamps match actual time is an emergent result of the system, but since there is strong incentive to choose the latest possible non-future timestamp, i.e. a timestamp of “now”, we have reason to be optimistic.

Wait, that’s it? “We have reason to be optimistic”?

We’re trying to solve an already-annoying oscillating miner-gaming situation, and here we are staring at the grim possibility that under the new system, miners can manipulate timestamps in dozens of ways, mass-ditch the chain until it’s profitable again (the more centralized they are, the easier – due to this mechanism explicitly not being memoryless), “stock up” on future blocks, and have difficulty telling whether an incoming block is actually valid or not because you don’t know how well-synced you yourself are – and we’re being told “we have reason to be optimistic”?

This is how you get 3793 tips, guys. It’s not an accident; testnet allows for a difficulty reset to 1 when the inter-block time hits 20 minutes, which is a trial-by-fire testcase of how well “real-time” algorithms work – it ain’t well. Especially without coordination. Especially when the author doesn’t even try to take it seriously.

Maybe the paper has room for improvement. I don’t know; I can’t rule things out completely – people’s ingenuity has always surprised me. Perhaps there’s a hidden corollary between the lines that I overlooked, and if so I apologize.

But as far as a difficulty adjustment algorithm goes – even as an overall description – I’m pretty sure the paper is a garbage fire. You do not need to have fully reproduced its results and examined each algorithm closely to know that – some things immediately jump out. Note that:

  1. For a paper that purports to make radical modifications to difficulty adjustment – to some, arguably the greatest innovation of Bitcoin – it fails to dissect miners’ expected behavior in any detail. Which is more important – irregular blocktimes, or the risk of a fractured and dead chain? If we handwave that away, we can pretty much handwave anything, including the original 2016-block algorithm. Surely no death spirals can happen, right?
  2. Even if you accept the handwaves at face value, they are themselves appallingly naive. What do you mean, “we doubt anyway that miners will be eager to accept a block from the future”? How do they even know who is correct? Perhaps it’s simpler to just not mine BCH?
  3. For all its grandiose claims about being “progress-free”, is it actually progress-free? Or is it based on the assumption of miners dumbly mining as they do today, with absolutely honest timestamps, without shenanigans?

The number one job of a difficulty adjustment algorithm is not to make sure your blocktime is within some tight error range ε; its number one job is to make sure miners keep mining your chain, that it survives, and that they’re not incentivized to violate your basic assumptions – such as the supply schedule, resistance to mass censorship, or one-conf reliability – in adverse ways. Note how the Bobtail paper makes a great effort to cover incentives and scenarios; if a paper does not cover them, I don’t know what place it has within a blockchain context. I might as well just read wiki pages about control algorithms for my enjoyment.

You do not have to be a foremost expert in blockchain to know this.

“But wait, im_uname, this is unfair! It’s a radically new thing, and they’re at least attempting to solve the problem, no? Should we not give people who make an effort the benefit of the doubt?”

I generally appreciate people trying new things. As long as an honest attempt was made, it can even turn out to be a dumpster fire and that’s fine; such is science. But what if you proclaim you want to build a plane, then proceed to build a grandiose wingless, engineless, alien-device-less hunk of metal?

“Don’t worry about it being able to fly; just look at this, we solved the problem of fuel consumption!”

“Excuse me? What’s the point of saving fuel for a plane if you don’t care about flying?”

“This is a new thing, don’t be negative!”

I’m not even that upset that garbage fires have been written. The history of science is long; people have published garbage fires and will continue to long after we’re all dead. But it does get pretty upsetting when garbage fires are paraded around because they look super nice in all the wrong places, with the mob crowning authors as “foremost experts”. What kind of community do we want to be? Do we just grab any control theory grad student, have them type things up in proper LaTeX, and call them an expert doing groundbreaking work now? Do we want to be taken seriously or not?

9 Replies to “Anatomy of a Garbage Fire”

  1. Nodes can immediately reject and not relay privately-mined tips with lower difficulty from the longer solvetimes, because their difficulty at a given height will be a lot lower. He states the MTP and FTL limits will be very tight. The MTP-of-11-past-blocks rule is replaced with “timestamp must be +1 after previous timestamp”. FTL can be a few seconds without harm instead of 2 hours. Some coins are already doing a real-time adjustment, and KMD is about to fork to one.

    I believe a bigger problem is the orphan rate at k=4. k=1.1 seems like it would be a really good algorithm.

    It will be interesting to see if his test coin with k=6 can be attacked or if orphans are a problem.

  2. >Nodes can immediately reject and not relay privately-mined tips with lower difficulty from the longer solvetimes because their difficulty at a given height will be a lot lower.

    You know they can be parked and opportunistically broadcast, right? What mandates that they must be broadcast immediately?

    It gets easier if quick-switching miners don’t bother mining at initial difficulties at all. There’s no rule mandating that rigs must take many minutes to switch.

    >KMD is about to fork to one

    I’m not terribly interested in observing this in a non-PoW context – KMD is, after all, delayed-PoW piggybacking on BTC.

    1. > You know they can be parked and opportunistically broadcast, right?

      Nothing prevents that, because I think you’re referring to what I called a “private” (aka selfish) mine. But then he has lower chainwork than everyone else for that height, so it can be rejected immediately without having to follow it as a tip. Tips in a regular difficulty algo at a given height have the same difficulty, so they can’t be rejected as easily.

      Many if not most of KMD’s coins use normal PoW before its notaries get involved to checkpoint it to BTC (dPoW is just a timestamp). Some of its small coins were seeing 100,000x hashrate increases, and difficulty needs to increase that much in fewer than 15 blocks when the checkpoints occur, and to get unstuck when that much hashrate leaves. I helped JL777 add something like Tom’s work that kicks in if the solves are unusually fast or slow, riding on top of their existing Digishield.

      1. >But then he has lower chain work than everyone else for that height

        The chainwork is defined as that of the previous block, not the incoming block. Maybe read the paper yourself.

        1. The baseline difficulty G(n-1) is based on the previous block. The addition to chainwork should be based on G(n), which is known as soon as the block is solved.

          1. That is false. Section 4 reads:

            “This change ensures that blocks with the same parent have the same chainwork”, and is clearly used in the context of deciding which tip to follow, aka “orphaning”.

            Seriously, read the paper, don’t make things up based on what you think it says.

  3. I read the paper, I just didn’t recall that quirk. I see he did it that way to prevent a battle for slightly slower timestamps that could orphan others’ honest timestamps. I would not do it that way. In either case, competition for cheating (forward timestamps in his, early timestamps in mine) would negate the profits. An honest majority of miners could ignore the late blocks (in both cases, the cheater has to on average submit after honest-timestamp blocks) and make cheaters always lose. We trust miners not to collude for 51%, so if we can extend that trust to following this protocol (don’t switch to a block that appears to have a cheated timestamp), then all seems OK. If not, it might be a little messy, but an excess of orphans should only occur for 1 or 2 blocks, i.e. it requires waiting 2 more blocks for confirmation.

    1. >competition for cheating (forward timestamp in his, or early timestamps in mine) would negate the profits

      and what, have people accumulate in a new equilibrium where everyone’s cheating?

      I gotta cut it off right here as I do not have infinite time to follow your train of thought. Perhaps make a serious attempt at analyzing the incentives in various scenarios in longform, and then people can take a look at it again.

      1. > and what, have people accumulate in a new equilibrium where everyone’s cheating?

        If everyone cheats no one gets excess profit (they get fair reward distribution) and the solvetime remains accurate.
