Are We Entering a New Era of Computing?

I wrote last Wednesday’s piece to provide some initial context for why developers should be aware of the underlying activities in their target environments. I plan to do a couple of posts discussing how much of the duty cycle (CPU activity) is taken up by the browser, OS, and communication processes when you consider everything that might go on in a client-side application. Web pages with simple applications are one thing (see the basic, pure JavaScript, graphics, and DOM manipulation examples I’ve worked with up to now), but adding in numerous frameworks (jQuery, AngularJS, etc.) starts to add overhead of its own. You can write a really small web app that uses a pile of frameworks and mostly you’ll just pay for it in download time (at least initially), but you can also write a pure JavaScript app that takes everything the browser, OS, and CPU have and asks for more.
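
As a rough illustration of the kind of measurement I have in mind, here is a minimal sketch (the work loop and the numbers are placeholders of my own, not tied to any framework) that uses requestAnimationFrame and performance.now() to estimate what fraction of each ~16.7 ms frame budget a chunk of script work consumes:

```javascript
// Rough sketch: estimate what fraction of the frame budget our script work consumes.
// Assumes a ~60 Hz display, i.e. about 16.7 ms per frame.
const FRAME_BUDGET_MS = 1000 / 60;

function doSomeWork() {
  // Placeholder for the app's per-frame work (graphics, DOM updates, etc.)
  let sum = 0;
  for (let i = 0; i < 100000; i++) {
    sum += Math.sqrt(i);
  }
  return sum;
}

function measureFrame() {
  const start = performance.now();
  doSomeWork();
  const elapsed = performance.now() - start;
  const pctOfBudget = (100 * elapsed / FRAME_BUDGET_MS).toFixed(1);
  console.log(`work took ${elapsed.toFixed(2)} ms (${pctOfBudget}% of frame budget)`);
  requestAnimationFrame(measureFrame); // repeat on the next frame
}

requestAnimationFrame(measureFrame);
```

Of course this only sees the script’s own time; the browser’s rendering, layout, and garbage collection work happens outside of it, which is part of what I want to dig into.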

When I first started writing Level 2, model-predictive, supervisory control systems for steel furnaces, the combination of super-fast numerical techniques and faster processors showed me that I could implement some amazing real-time control solutions that hadn’t been possible previously. That said, depending on the geometries involved, I could set the system up to consume about fifty percent of total CPU resources (memory usage was essentially fixed). Most of the duty cycle was devoted to calculating matrix solutions, so I plan to write a few tests that do the same thing to get a feel for just how much power I have available in the browsers of different platforms. I’m thinking PC, Mac, iPhone, Android, and possibly a Raspberry Pi-class device. Who knows, maybe I’ll see if I can get my old Palm Pre to work. The matrix exercise is also meant to illustrate the automatic code generation process I’ve discussed.
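
To give a sense of what those matrix tests might look like, here is a minimal sketch, assuming a naive Gaussian-elimination solve on a randomly generated system timed with performance.now(); the size and the solver are just stand-ins for whatever the generated code will actually do:

```javascript
// Naive Gaussian elimination with partial pivoting; solves Ax = b in place.
function solve(A, b) {
  const n = b.length;
  for (let k = 0; k < n; k++) {
    // Partial pivoting: pick the row with the largest entry in column k
    let pivot = k;
    for (let i = k + 1; i < n; i++) {
      if (Math.abs(A[i][k]) > Math.abs(A[pivot][k])) pivot = i;
    }
    [A[k], A[pivot]] = [A[pivot], A[k]];
    [b[k], b[pivot]] = [b[pivot], b[k]];
    // Eliminate entries below the pivot
    for (let i = k + 1; i < n; i++) {
      const f = A[i][k] / A[k][k];
      for (let j = k; j < n; j++) A[i][j] -= f * A[k][j];
      b[i] -= f * b[k];
    }
  }
  // Back substitution
  const x = new Array(n);
  for (let i = n - 1; i >= 0; i--) {
    let s = b[i];
    for (let j = i + 1; j < n; j++) s -= A[i][j] * x[j];
    x[i] = s / A[i][i];
  }
  return x;
}

// Time a solve of a random n x n system to gauge available compute.
const n = 200;
const A = Array.from({ length: n }, () => Array.from({ length: n }, Math.random));
const b = Array.from({ length: n }, Math.random);
const t0 = performance.now();
solve(A, b);
console.log(`solved ${n}x${n} system in ${(performance.now() - t0).toFixed(1)} ms`);
```

Running that same loop at a few different sizes on each device would give a crude but comparable measure of how much matrix-crunching headroom each browser actually has.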

My point in thinking about these issues is not only to see where things are going in terms of developer mindfulness of resource consumption as the computing landscape changes, but also to see where things might be going longer term. This article suggests we are just approaching a major turning point, though one more interesting than what I discussed last week.

I learned computing on the big DEC time sharing systems at Carnegie Mellon but also played around with little machines running BASIC. (I once did a homework assignment for my Numerical Methods class on a Radio Shack MC-10 Micro Color Computer that I picked up for all of fifty bucks. I printed the code and results out using a 4-inch-wide TP-10 thermal printer. I still have both items, as well as a 16K RAM expansion module, and as of a few years ago it all still worked. The biggest problem now is finding an analog TV to hook it to…) The more fortuitous development was working with the first generation of IBM PCs starting in my junior year, essentially at the dawn of the PC era. I successfully rode that wave through the early part of my career, working initially with desktop tools and moving on to networked, enterprise, and client-server systems. I used the web heavily but didn’t consciously think about developing for it until very recently, and even then my goal was less to learn everything about web tools and techniques than to learn about the processes and parameters.

I love developing software of all kinds but there are two things I love even more. One is working with software systems that control physical devices in the real world, or at least model, automate, or control complex procedural activities. The other is learning about new systems and processes so they can be characterized, parameterized, reorganized, and streamlined to solve a customer’s problem more efficiently. I can write the solution, I can manage the solution, but I really, really like to do the analysis to find out what solution is needed and iteratively work with the customer to continually review and improve the solution to meet their needs.

This kind of thing can happen in a couple of different contexts. I’ve developed systems through most of my career on a project/contract basis: get one assignment, figure out the requirements, design it, build it, install it, get it accepted, go on to the next project. Even if I was doing the same type of project over and over again and could improve things incrementally each time, the efforts were still treated as individual projects for specific customers. As such, the interaction and feedback were always in a tight loop with one or just a few people. Later in my career I began to work on more of a continuing/program basis. In those cases there was a software tool or framework that was continually expanded, modified, and adapted to whatever type of analysis was called for. Those engagements might have been longer term, but they were still in service of individual customers. A different way to approach things is to release software to an open market where there are many customers (for an OS, productivity tool, web service, or the like). In these cases the feedback comes not from a defined set of individuals but from a diffuse set of users and communities with potentially varied needs and expectations. I worked at one company supporting that kind of customer base.

In truth the taxonomy is not as clean as I’ve described, and the situations I’ve worked in have been so varied that many of them had elements of each category. Some products/projects/programs have many layers of customers. For example, an organization that develops a large business automation framework might have as “customers” the solution developers within the company, external solution providers, the end customers to whom the solutions are provided, and the customers those customers are leveraging the solution to serve.

The linked article suggests that there was a desktop computer wave and a web/mobile wave, each of which ran fifteen years-ish. The network-to-web-based wave kind of straddles those two waves. Some people have observed that the earliest computers were one-off affairs meant to solve single problems in sequence. Then they moved to completing multiple batch jobs in sequence, and then time-sharing systems began to support numerous users. Then came the PC era, which returned the focus to the desktop, followed by the networked era, which enabled collaboration. The web-and-mobile era represents a cyclic return to the era of centralized time-sharing systems, but on a much grander scale and with a concomitant requirement for simplicity from the user’s point of view.

The next wave appears to be the internet of things, which has also been called ubiquitous computing. The early devices tend to be small and standalone as people figure out what to do with them, then they get connected to each other, and then those individual devices and networks will probably coordinate with centralized compute and aggregation nodes (think Waze). The cycle will continue.

The goal, then, is to figure out how to position oneself for these future developments: to identify applications that provide value to people; methods of parallelization, communication, and coordination that will allow this swarm of devices to provide that value in a meaningful way; methods of security that will support individual privacy and dignity; and methods of development and deployment that will help developers provide those solutions while optimizing on the right variables. It is entirely possible to find a niche developing control, analytical, communication, interface, or enterprise systems, but it’s always good to keep the wider picture in mind. Ideas inspired or updated by developments in new areas can inform everyone’s work.

It’s going to be an interesting time.
