The End of Moore’s Law, Code Bloat, and the True Nature of Efficiency

It has been suggested that improvements in semiconductor and processor technology are approaching their natural limits in terms of how far feature sizes can be reduced. Since quantum computing is not ready for widespread use, and since parallel computing is similarly underutilized by the mainstream, the pace of developing faster hardware seems due for a slowdown at the very least. Up to this point, improvements in hardware seem to have greatly outstripped those in software. Software has become more capable in terms of scope and scale, but at the cost of efficiency. Larger and more complex software systems have taken advantage of ever more rapid improvements in hardware, but if the improvements predicted by Moore’s “Law” slow down or pause, will the process of developing most software run into limits of its own?

There are many aspects to this question. First, there have always been applications that will push any computing hardware to its limits. Simulations are a prime example. Some simulations can always be made more granular in terms of time, space, or the details considered, and so can always be reconfigured to soak up all available computing resources and then beg for more. The interesting thing about simulations is that, while they do a lot of things a lot of times, they tend to do the same things over and over. A good simulation is very modular and uses a small set of building blocks. A good finite element code doesn’t have to be particularly big or complex; it just has to be able to repeat itself quickly. Conversely, there are plenty of smaller applications and processes that will run just fine on almost any hardware. It isn’t optimal that people keep twenty or more programs running on their smartphones, but it can be done. Heck, there are probably more processes than that running in the background of most modern operating systems as it is. A lot more.
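To make that concrete, here is a minimal sketch of what “a small kernel repeated many times” looks like. It is a toy one-dimensional heat-diffusion stepper, not any particular production code, and the grid size, step count, and coefficient are arbitrary illustrative choices.

```python
# Toy illustration: a simulation is a small kernel run over and over.
# 1-D heat diffusion with an explicit finite-difference update.
# All sizes and constants are arbitrary choices for illustration.

def step(u, alpha):
    """One update pass: each interior cell relaxes toward its neighbors."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

def simulate(cells=200, steps=20_000, alpha=0.1):
    u = [0.0] * cells
    u[cells // 2] = 100.0      # a single hot spot in the middle
    for _ in range(steps):      # the same tiny kernel, repeated endlessly
        u = step(u, alpha)
    return u

if __name__ == "__main__":
    print(max(simulate()))
```

The whole thing is a couple of dozen lines; the work comes from repeating the same kernel, and making the model more granular just means more cells and more steps soaking up more hardware.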

Are there applications we haven’t thought about yet that will put a new twist on this question? Are there problems in computing that we haven’t been able to tackle so far, either because they’re too big or because we simply haven’t thought of them yet? Improvements in computing speed, communications, and storage have continually made new things possible that weren’t before. Think video, voice, virtual environments, sensor and control applications, cloud computing, and so on. Certainly there are tons of problems in bioinformatics and theoretical physics that would benefit from more computing power, but practical problems in business would clearly benefit as well.

There are whole classes of problems in business that fall somewhere between a simulation of drug interactions with cellular mechanisms and the calendar app on your iPhone, to be sure. Specialized work will always be done on the high-end applications. Developers of low-end applications are less concerned with code size and speed, since almost anything that works will yield acceptable performance in most situations, and more concerned with the speed and reliability of the development process itself. Enterprise business systems themselves can get very large. They can generate specific optimization problems as voluminous and complex as anything a genetic researcher or theoretical physicist can dream up, but their biggest problems often have to do simply with scale. They have a lot of data, and it has to be processed by a lot of machines working together. That requires a lot of coordination. It can be difficult to search through a system to find all the statuses and activities having to do with an individual transaction, which can itself spawn numerous sub-entities, each with its own processes and states. Completing a system build to test one incremental feature can take many hours, so a lot of work has to be done up front to maximize the chance of getting it right on the first try.
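A deliberately simplified sketch of that bookkeeping problem might look like the following. None of this reflects any real enterprise system; the entity names and statuses are invented purely to show why answering “what is the status of this transaction?” means walking everything it spawned.

```python
# Hypothetical sketch of a transaction that spawns sub-entities, each with
# its own lifecycle. All names and statuses are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class SubEntity:
    name: str              # e.g. a shipment, an invoice line, a credit check
    status: str = "pending"

@dataclass
class Transaction:
    txn_id: str
    status: str = "open"
    children: list = field(default_factory=list)

    def spawn(self, name):
        child = SubEntity(name)
        self.children.append(child)
        return child

    def report(self):
        """Collect every status tied to this transaction, parent and children."""
        return {self.txn_id: self.status,
                **{c.name: c.status for c in self.children}}

txn = Transaction("ORDER-12345")
txn.spawn("credit-check").status = "approved"
txn.spawn("warehouse-pick")
txn.spawn("invoice")
print(txn.report())
```

In a real system those child records live in different services and data stores, which is exactly why the coordination and the searching get expensive at scale.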

During the run-up to Y2K, a lot of people asked why developers couldn’t have treated years as four-digit numbers from the start. A lot of money was being spent to repair or replace systems and code when it seemed like a modicum of forethought could have obviated the need for much of it. One writer observed, however, that the decision to streamline the storage devoted to date information wasn’t just a lack of insight into the problems those designs would eventually cause, or an assumption that many systems wouldn’t be around long enough for it to matter. Instead, the writer calculated the cost of actually storing the information in its entirety, given the systems available at the time, the volume of data stored, and the time value of money, as opposed to storing the streamlined versions. The calculation yielded a very, very large number. One can argue whether the number is exactly correct, but its size was eye-opening, and it would in any case have been larger than the cost of the remediation work done in the late 90s.
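To get a feel for why that kind of calculation comes out so large, here is a back-of-the-envelope version of the same argument. Every figure below is my own illustrative assumption, not the original writer’s number; the point is the shape of the arithmetic, not the totals.

```python
# Back-of-the-envelope illustration of the Y2K storage trade-off.
# Every input is an assumption chosen for illustration only.

records = 1_000_000_000        # records carrying a date field, industry-wide
extra_bytes = 2                 # storing "1965" instead of "65"
cost_per_mb_1975 = 10_000.0     # rough dollars per megabyte of disk, mid-1970s
years = 25                      # roughly 1975 to 2000
discount_rate = 0.08            # time value of money

extra_mb = records * extra_bytes / 1_000_000
upfront_cost = extra_mb * cost_per_mb_1975

# A dollar spent on storage in 1975 is worth far more by 2000 once compounded.
future_value = upfront_cost * (1 + discount_rate) ** years

print(f"Extra storage needed:          {extra_mb:,.0f} MB")
print(f"Up-front cost (1975 dollars):  ${upfront_cost:,.0f}")
print(f"Equivalent cost by 2000:       ${future_value:,.0f}")
```

Even with made-up inputs, a couple of “wasted” bytes per record, priced at the storage costs of the era and compounded over decades, grows into a figure that makes the two-digit year look less like negligence and more like a defensible economy.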

The point is that there are a lot of ways to measure efficiency and a lot of ways to economize. There are a lot of variables to consider. The iron triangle weighs cost, time, and scope, and people make trade-offs among those considerations all the time. A similar set of constraints shows up in the old joke, “You can have it fast, cheap, or good. Pick two.” A corollary would be, “You can have it really fast, really cheap, or really good. Pick one.” Trade-offs are made not only across variables but across degrees of optimization within any individual variable. While cost isn’t always the overriding consideration, in the long run it always is. The question is how to optimize on it.

When I hear people complain that developers aren’t properly mindful of saving every last clock cycle when they code, I have to take it with a grain of salt. Yes, it’s always a good idea to use efficient algorithms and be mindful of the resources you’re consuming. Yes, it’s a good idea to know what’s going on under the hood so you have a better idea of what trade-offs you’re actually making. And yes, both Pareto’s and Sturgeon’s Laws apply to software development as well as they do to many other things. But in the end, things get optimized the way they need to be when they need to be. If the development of software, which has undergone plenty of changes and cycles of its own over the decades, runs hard into limitations imposed by a slowdown in the development of hardware, then people will do what they need to do to figure it out. They probably won’t do it much before then.

If you see the possibility to optimize something before everyone else finds the absolute need to do so, and if you can do it cost-effectively, then that’s an opportunity for you, right?
