As a performance engineer, I always appreciate solid guidance on writing performant code. One such work is Performance Anti-Patterns. The article is worth reading in full, but a few points bear repeating here.
First, performance should be part of the entire development process. Trying to optimize code at the end of the cycle limits the options that are available. Conversely, adding micro-optimizations early is potentially wasteful as future development may negate the effect.
Second, benchmarks are key to proper measurement of performance. Establish the important scenarios for the application and the metrics of interest for each benchmark. Is the goal to increase throughput? Reduce cycles per request? Lower power consumption at a given load? Without a specific measurement of performance, a benchmark is just an application that runs regularly.
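To make that concrete, here is a minimal benchmark sketch. The workload (a dictionary-based cache lookup) is a hypothetical stand-in for whatever scenario matters to your application; the point is that the metric, throughput in operations per second, is decided up front.

```python
import time

def lookup(cache, key):
    # Stand-in for the real scenario under test.
    return cache.get(key)

def benchmark(iterations=1_000_000):
    cache = {i: i * 2 for i in range(1024)}
    start = time.perf_counter()
    for i in range(iterations):
        lookup(cache, i % 1024)
    elapsed = time.perf_counter() - start
    # Report the metric chosen up front: throughput in lookups/sec.
    return iterations / elapsed

print(f"{benchmark():,.0f} lookups/sec")
```

Run regularly (say, per commit), the number itself matters less than its trend: a regression shows up as a drop against the previous run.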
This also defines the common case. Consider a storage driver with three operations: read, write, and recovery. In all likelihood, reads and writes occur far more often than recovery. If a disk fails, does it matter whether the user is notified 10ms or 20ms after the event? Does it matter whether a read takes 10ms or 20ms? Spend the optimization effort accordingly.
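As a sketch of that principle, the hypothetical driver below puts its optimization effort (an in-memory cache) on the common read path, while the rare recovery path stays simple and unoptimized:

```python
class StorageDriver:
    """Illustrative only: a dict stands in for the backing store."""

    def __init__(self):
        self._store = {}
        self._read_cache = {}

    def write(self, key, value):
        # Common case: keep the write path cheap and keep the cache warm.
        self._store[key] = value
        self._read_cache[key] = value

    def read(self, key):
        # Common case: serve from the cache when possible.
        if key in self._read_cache:
            return self._read_cache[key]
        value = self._store[key]
        self._read_cache[key] = value
        return value

    def recover(self):
        # Rare case: favor clarity over speed; just rebuild the cache.
        self._read_cache = dict(self._store)
```

Whether 10ms or 20ms is spent inside `recover` is invisible to users; shaving microseconds off `read` is not.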
Third, think parallel. This requirement is beginning to sink in; however, simply being multi-threaded is not enough. How many threads? One per task? One per CPU? Is the data properly partitioned? A scientific application will operate very differently from a web server.
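A sketch of choosing the thread count deliberately rather than by default. Here the work is I/O-bound (simulated with a sleep), so a pool sized to the CPU count is a starting point, not a law; `handle_request` is a hypothetical stand-in for real per-request work, and for CPU-bound pure-Python code a thread pool would not help at all.

```python
import os
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    time.sleep(0.01)  # simulated I/O wait (disk, network, ...)
    return request_id * 2

def serve(requests):
    # One worker per CPU as an initial sizing; measure and adjust.
    workers = os.cpu_count() or 4
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle_request, requests))

print(serve(range(8)))
```

The same questions apply to partitioning: a scientific code might split one large array across workers, while a server assigns each independent request its own task.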
Finally, performance comes with costs and trade-offs. Reliability and security are obviously paramount, but so are the readability of the code and room for future extensions. Taking these elements into account is what distinguishes merely performant code from a truly elegant implementation.