The downside of having been involved in software for a long time is that it’s easy to fall into the trap of losing touch with the latest languages and techniques. One drifts into analysis, management, discovery, and so on, and away from the construction of day-to-day code. This is true even if such a practitioner has a very deep understanding of the code’s internal logic, its outputs, and its data structures, and can ably specify pseudocode for algorithms that need to be implemented.
Getting caught up with the latest developments in programming languages and APIs has been interesting, and the logic of why things have gone in this or that direction has been clear. The first thing to understand is that the constraints are different from what they used to be. Memory, storage, and processing speed were once the main limitations. My professor emphasized these points during my freshman programming course, and they were still important some years later when I was working on real-time thermo-hydraulic and thermodynamic simulations. While those constraints never go away entirely (one can always consume all available resources and cry for more, particularly for simulations), the main constraints are now more likely to be complexity and developer productivity. Bigger and more complex systems have to be verified and maintained, and development teams need to be more productive.
The way to accomplish both goals is through greater abstraction. In a sense this is the same process an individual goes through when becoming an expert at something. The practitioner subsumes more and more basic concepts and techniques into “muscle memory”, which is really subconscious “brain memory”, so his or her mind is free to manipulate higher-level concepts. Over the time I’ve been observing, the entire industry has followed the same path. Processors, compilers, and APIs do more of the optimizing (I always loved reading about the innovations brought by each generation of Intel CPUs back when I was devouring close to a dozen computing magazines every month). Languages are restructured to embody more of the concepts of functional programming, which is no more than a consistent method of replacing details with abstractions, as the sketch below illustrates. Development environments include features as simple as syntax highlighting and as complex as various forms of autocompletion and automated refactoring. Development of user interfaces has been simplified and made more flexible and adaptive as well. The point is that developers may not write more lines of code than they used to, but each of those lines is designed to do a lot more.
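To make the functional-programming point concrete, here is a minimal C++ sketch (the variable names and data are invented for illustration) showing the same computation written first with hand-managed iteration and then through a library abstraction:

    #include <cstddef>
    #include <iostream>
    #include <numeric>
    #include <vector>

    int main() {
        std::vector<int> readings{3, 7, 2, 9, 4};

        // Imperative style: the programmer manages the index, the
        // accumulator, and the branching by hand.
        int sumOfSquaresOfEvens = 0;
        for (std::size_t i = 0; i < readings.size(); ++i) {
            if (readings[i] % 2 == 0) {
                sumOfSquaresOfEvens += readings[i] * readings[i];
            }
        }

        // Functional style: the same intent expressed as a fold; the
        // iteration details live inside std::accumulate rather than
        // in the application code.
        int viaAbstraction = std::accumulate(
            readings.begin(), readings.end(), 0,
            [](int acc, int x) { return x % 2 == 0 ? acc + x * x : acc; });

        std::cout << sumOfSquaresOfEvens << " == " << viaAbstraction << '\n';
        return 0;
    }

Neither version is dramatically shorter here, but the second one scales the way the paragraph above suggests: composing further operations doesn’t multiply loop bookkeeping, so each line of code carries more intent.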
That said, it’s not always a good idea simply to accept every abstraction you’re bequeathed, even if they’re all good ones. There is something to be said for understanding what’s going on behind the scenes. Today’s “full stack” web developer is in some ways an analog of yesterday’s developer who programmed down to the hardware and managed the registers, interrupts, stack, and heap directly. It was extremely useful to know what was going on at that level, or close to it, with early IBM PCs and the high-level languages targeted to them. The earliest programmers likely had to know as much about electronic circuits as they did about logic. I always thought my brushes with assembly languages, special memory configurations, binary file structures, low-level networking, and several different high-level languages served me well when working in any particular area. The more background you have, the better off you are.
Consider the allocation of dynamic memory. Programmers used to have to do this entirely by hand, and even when compilers started automating certain heap operations it was still up to the programmer to explicitly free memory that wasn’t going to be used any more. I encountered a language some time ago that forced the programmer to nullify or redirect all pointers to a block of heap memory before it could be deallocated. That was moderately annoying at the time because I was being “forced” by the language to “waste” CPU cycles on activities I felt I was fully in command of. Later I came to appreciate that this constraint forces the programmer to get the details right and, in the big scheme of things, doesn’t impose a meaningful overhead. I mean, how many cycles does it really take to go pointerToThisThingyOverHere = NULL; even if you have to do what feels like a lot of it? Modern languages are increasingly likely to automate the process altogether through garbage collection. The language designers figure that there will always be spare CPU cycles with which to sweep through the heap, figure out what isn’t being used any longer, and get rid of it automatically. Most systems that interface with humans spend at least some of their time waiting for input, so that leaves plenty of room for background processes to work their magic. More dedicated or specialized systems might have to address the problem in different ways.
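For readers who have only ever worked in garbage-collected languages, here is a minimal C++ sketch of the manual discipline described above (the buffer name and contents are invented for illustration):

    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    int main() {
        // Manual heap management: the programmer asks for the block
        // explicitly and must check whether the request succeeded.
        char *buffer = static_cast<char *>(std::malloc(64));
        if (buffer == nullptr) {
            return 1;  // allocation can fail; nothing handles it for you
        }
        std::strcpy(buffer, "short-lived scratch data");
        std::printf("%s\n", buffer);

        // ...and the programmer gives the block back. Forgetting this
        // line leaks the memory; touching buffer afterward is
        // undefined behavior.
        std::free(buffer);

        // Nulling the pointer, much as the language described above
        // required, costs a single store and turns a later accidental
        // use into an obvious, reproducible failure rather than silent
        // corruption. A garbage collector does all of this for you, at
        // the cost of those background sweep cycles.
        buffer = nullptr;

        return 0;
    }

The tradeoff is exactly the one described above: a few explicit instructions and some programmer discipline in exchange for complete control over when the cycles are spent.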
One of my classmates in the course I took to get my Scrum Developer certification is an active manager and Scrum Master in a C# shop (the class was taught in Java in the IntelliJ IDEA environment). He opined that it was perfectly reasonable to ask an interviewee whether he or she understood C’s malloc function (or its equivalent in other languages), even though the interviewee might only have worked in languages like Java or JavaScript, which tend to hide such details. Given my own experiences I thought he had a point. Even if the interviewee didn’t know the answer to that particular question, it would be good if he or she could show an appreciation of some of the other hidden details.
Memory allocation is just one example of the phenomenon of increasing abstraction of which modern practitioners may not always be aware. The same thing is going on in graphics subsystems, both directly (the “full stack” of graphics operations is deep and specialized) and indirectly (offloading parallel operations to GPUs), in communications (digital signal processors and other communications processors), in parallel computing, and in other areas. Every area is quite specialized at every level, so it’s clear that no one person can know all of it.
One can get a lot of work done without knowing what’s going on under the hood and how it got there, but there are times when that knowledge is important, if for no other reason than to know whether some proposed innovation is actually a variation of a previous finding.