Learn how we tracked a performance bottleneck to a 15-year-old Git function and fixed it, leading to enhanced efficiency that supports more robust backup strategies and can reduce risk.
Nah, I was excited to read about the algorithmic change, but it turned out to be an obvious one. I would replace nested loops with a map too. The result is impressive, though.
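For anyone curious what that kind of change looks like, here's a minimal C sketch of the pattern (illustrative only; the names and data structures are mine, not Git's actual code): deduplicating references by rescanning everything you've already kept is O(n^2), while remembering them in a hash set is O(n) expected.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* O(n^2): for each name, scan every name already kept. */
    size_t dedup_nested(const char **names, size_t n, const char **out)
    {
        size_t kept = 0;
        for (size_t i = 0; i < n; i++) {
            int seen = 0;
            for (size_t j = 0; j < kept; j++) {
                if (!strcmp(names[i], out[j])) {
                    seen = 1;
                    break;
                }
            }
            if (!seen)
                out[kept++] = names[i];
        }
        return kept;
    }

    /* O(n) expected: remember each name in a hash set instead of rescanning.
     * Tiny fixed-size chained hash set, just for the sketch. */
    #define NBUCKETS 4096

    struct entry {
        const char *name;
        struct entry *next;
    };

    static size_t hash_str(const char *s)
    {
        size_t h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h % NBUCKETS;
    }

    size_t dedup_map(const char **names, size_t n, const char **out)
    {
        struct entry *buckets[NBUCKETS] = { 0 };
        size_t kept = 0;
        for (size_t i = 0; i < n; i++) {
            size_t h = hash_str(names[i]);
            struct entry *e;
            for (e = buckets[h]; e; e = e->next)
                if (!strcmp(e->name, names[i]))
                    break;
            if (e)
                continue; /* duplicate: already kept */
            e = malloc(sizeof(*e));
            e->name = names[i];
            e->next = buckets[h];
            buckets[h] = e;
            out[kept++] = names[i];
        }
        /* entries are leaked here; fine for a demo, free them in real code */
        return kept;
    }

    int main(void)
    {
        const char *refs[] = { "refs/heads/main", "refs/tags/v1.0",
                               "refs/heads/main", "refs/heads/dev" };
        const char *out[4];
        size_t kept = dedup_map(refs, 4, out);
        for (size_t i = 0; i < kept; i++)
            printf("%s\n", out[i]);
        return 0;
    }

Conceptually trivial, as the parent says, but on a repository with hundreds of thousands of refs the difference between n^2 and n comparisons is enormous.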
Marketing departments love to make a huge deal out of this kind of thing, because they only see the big number improvement and don't really understand that this was just some dev's Wednesday afternoon.
And they are right to do so. In the grand scheme of things, it doesn't really matter how much time you spend on a problem; it's the result that matters. I remember a meme where a dev would place a "wait" call in a new feature, then later remove it, call it a free performance update, and get lots of praise from the customers.
I'm not a native speaker, but I agree that it sounds imprecise. To my understanding, that's a polynomial reduction in running time (O(n^2) to O(n): quadratic to linear) and not an exponential speed-up (O(2^n) to O(n): exponential to linear). 🤷
Colloquially, "exponentially" seems to be used synonymously with "tremendously" or similar.
and not an exponential speed-up (O(2^n) to O(n): exponential to linear)
Note that you can also have an exponential speed-up when going from O(n) (or O(n^2) or other polynomial complexities) to O(log n). Of course that didn't happen in this case.
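One way to make that distinction precise (my framing, nothing from the article): express the old running time as a function of the new one.

    % quadratic -> linear: the old time is only a polynomial (square) of the new time
    \[ T_{\text{old}}(n) = n^2 = T_{\text{new}}(n)^2, \qquad T_{\text{new}}(n) = n \]
    % linear -> logarithmic: the old time is exponential in the new time
    \[ T_{\text{old}}(n) = n = 2^{\log_2 n} = 2^{T_{\text{new}}(n)}, \qquad T_{\text{new}}(n) = \log_2 n \]

So "exponential speed-up" is only earned when the exponent itself is what drops, as in the second line.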
An "exponential drop" would be a drop that follow an exponential curve, but this doesn't. What you mean is a "drop in the exponent", which however doesn't sound as nice.
They make the same mistake further down the article:
However, the implementation of the command suffered from poor scalability related to reference count, creating a performance bottleneck. As repositories accumulated more references, processing time increased exponentially.
The article's writer really loves bullet-point lists, too. 🤨