I was once working on some performance analysis and went to the developers asking why their code was consuming a disproportionate amount of time. At low load the code was, say, 5% of non-idle CPU usage, but at high load it was 50%. The developers noted that the code contained an algorithm that was O(n^2), but they had only tested it with n = 1, 2, ..., 10. My high load was n = 1000, which exposed the scaling issues in their code. The developers had done everything right, except knowing the scope of the problem (and to be fair, their code was originally written for desktops and I was testing servers).
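To make the scaling concrete, here is a minimal sketch (not the developers' actual code) that times a hypothetical O(n^2) pairwise comparison at n = 10, 100, and 1000. The small-n runs finish so quickly that the quadratic cost is invisible until the input grows:

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical O(n^2) work: compare every pair of elements. */
    static long pairwise(const int *a, int n)
    {
        long count = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (a[i] == a[j])
                    count++;
        return count;
    }

    int main(void)
    {
        enum { NMAX = 1000 };
        static int data[NMAX];
        for (int i = 0; i < NMAX; i++)
            data[i] = i % 7;

        int sizes[] = { 10, 100, 1000 };
        for (int s = 0; s < 3; s++) {
            int n = sizes[s];
            clock_t t0 = clock();
            long result = pairwise(data, n);
            clock_t t1 = clock();
            printf("n=%4d  result=%ld  cpu=%.6fs\n",
                   n, result, (double)(t1 - t0) / CLOCKS_PER_SEC);
        }
        return 0;
    }

Going from n = 10 to n = 1000 is a 100x increase in input, but roughly a 10,000x increase in work, which is how a 5% consumer becomes a 50% consumer.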
Another time a developer came to discuss memory limitations caused by using 64-bit counters. I asked how large the input was and learned it was under a billion elements. A billion fits into a 32-bit counter, so switching cut the memory consumption in half. (Maybe I need more stories, as this one also appeared in Bit Packing.)
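A quick back-of-the-envelope sketch, assuming the counters live in one large array on a 64-bit platform (the actual data structure was the developer's, not mine): values under a billion sit comfortably below the uint32_t maximum of roughly 4.29 billion, so halving the counter width halves the array:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical counter array sized for the workload described
         * above: under a billion entries, values under a billion. */
        size_t nelems = 1000000000;  /* assumes a 64-bit size_t */

        printf("64-bit counters: %zu MB\n",
               nelems * sizeof(uint64_t) / (1024 * 1024));
        printf("32-bit counters: %zu MB\n",
               nelems * sizeof(uint32_t) / (1024 * 1024));
        return 0;
    }

That is the difference between roughly 7.6 GB and 3.8 GB of counters.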
The first step in any good performance analysis is selecting an algorithm based on the problem, including its size. Bubble sort is fine for an array of 10 elements, but not for 10,000. In fact, ask whether the problem really requires sorting at all.
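As one example of the "does it need sorting at all?" question: if the (hypothetical) requirement is only the largest value, a single O(n) pass answers it without sorting anything:

    #include <stdio.h>

    /* One O(n) pass: no sort required to find the largest value. */
    static int max_element(const int *a, int n)
    {
        int best = a[0];
        for (int i = 1; i < n; i++)
            if (a[i] > best)
                best = a[i];
        return best;
    }

    int main(void)
    {
        int data[] = { 42, 7, 99, 3, 58 };
        printf("max = %d\n", max_element(data, 5));
        return 0;
    }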
I will continue to strongly urge developers to measure the performance of their code before trying to tune it. But part of that measurement is knowing the "size" of the inputs.