IT’s March Towards Mass Customization

IT is constantly evolving, from mainframes to disaggregated components to an integrated set of infrastructure elements working in support of applications. But that description is more about how individual pieces of infrastructure are packaged and less about the role those solutions play. There is a more subtle but perhaps more profound change simultaneously taking place in IT: a shift in how IT architectures are actually being designed.

So what is at the heart of this change?

Single purpose infrastructure

IT was born with the mainframe. Mainframes were basically entire IT ecosystems in a box. They included compute, storage, networking, and applications. But what is most notable about these systems is that everything in them was aimed at a single outcome. That is to say, the mainframe was a composed system designed with a single purpose in mind: delivering a particular application.

In the early days of IT, there was no need for systems to run different types of applications, so the underlying infrastructure could be tuned expressly to the requirements of the single application resident on the system.

The meaningful bit here is that limiting the scope made the infrastructure easier to build. When a service has a single consumer, that service can be hyper-tuned to its supporting infrastructure, and the infrastructure can be designed for very specific, well-bounded outcomes.

General purpose infrastructure

Of course, the issue with single-purpose anything is that you can only use it for a single task. When companies were consuming a very small number of applications, this was manageable. As soon as you move beyond two or three applications, though, having a single-purpose system for each one becomes prohibitively expensive.

Enter general purpose solutions.

If the first era in IT was about the system as a whole, which was in turn defined by the application, the second era represents a pendulum swing in the other direction. In today’s disaggregated, best-of-breed world, the emphasis is almost entirely on the infrastructure. That infrastructure has been decoupled from the applications that run on top of it, with the intent of allowing any application to make use of what ultimately looks like application-agnostic infrastructure.

Properties of general purpose

Truly general purpose infrastructure treats everything in more or less the same way. In a datacenter networking context, this means that bandwidth is applied uniformly across the datacenter with no real knowledge of where specific resources reside or what applications require. And because the network is agnostic to the applications, all traffic is treated in essentially the same way.

Except of course that it isn’t.

We implicitly understand that not all traffic is the same. So what we have done is design an application-agnostic transport and then layered on ways to tweak it to better accommodate applications. Edge policy constructs like QoS and even ACLs are the ultimate admission that we cannot be truly unaware of what applications are trying to do. We have to have some control, or else we run the risk of starving our important applications of precious resources while less important ones consume them.
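
To make that admission concrete, here is a minimal sketch of what edge classification does, written in Python rather than any vendor’s configuration language. The traffic classes, port matches, and DSCP markings below are illustrative assumptions, not real policy:

    # ACL-style matching plus QoS marking at the network edge. Classes,
    # ports, and DSCP values are assumptions for illustration only.
    POLICY = [
        # (class name, match predicate, DSCP marking)
        ("voice",   lambda f: f["dst_port"] in (5060, 5061), 46),  # EF
        ("video",   lambda f: f["dst_port"] == 554,          34),  # AF41
        ("default", lambda f: True,                           0),  # best effort
    ]

    def classify(flow):
        # First matching rule wins, the way an ACL is evaluated top-down.
        for name, match, dscp in POLICY:
            if match(flow):
                return name, dscp

    print(classify({"dst_port": 5060}))  # ('voice', 46)

The point of the sketch is the shape of the mechanism: an agnostic transport underneath, with a rule table bolted on at the edge to smuggle application awareness back in.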

But not all applications are the same

The devil is always in the details, and in this case that devil is the fact that not all applications are the same. They have varying requirements at the compute, storage, and networking levels. Some applications might require more bandwidth, while others are sensitive to latency, jitter, or loss. Some applications have security and compliance requirements (think: HIPAA or PCI), while others are just happy to get anything.

The subtle point here is that applications don’t want a general purpose infrastructure. In fact, it is quite the opposite. Each one has its own requirements, sometimes unique and quirky.
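
One way to picture those quirky requirements is as a per-application profile. The sketch below is a hypothetical schema with a handful of representative fields; real requirements lists would be longer and messier:

    from dataclasses import dataclass

    # A hypothetical per-application requirements profile. Field names and
    # example values are assumptions for illustration.
    @dataclass
    class AppProfile:
        name: str
        min_bandwidth_mbps: int    # throughput-hungry applications care here
        max_latency_ms: float      # latency- and jitter-sensitive ones here
        loss_tolerant: bool
        compliance: tuple          # e.g. ("HIPAA",) or ("PCI",)

    profiles = [
        AppProfile("backup",   1000, 500.0, True,  ()),
        AppProfile("voip",        1,  20.0, False, ()),
        AppProfile("payments",   10, 100.0, False, ("PCI",)),
    ]

Even in this toy form, no single general purpose configuration satisfies all three profiles at once.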

Mass customization

If applications really are at the heart of modern IT, then the goal of building out infrastructure will have to change from providing general purpose connectivity to providing highly specialized, finely tuned connectivity. The days of set-it-and-forget-it networking are likely coming to a close. The next evolution will be customization, but not hand-crafted on an application-by-application basis. To be useful, customized experiences will need to be delivered en masse.

The important bit here is that this customization is not likely to be persistent. Trends like utility computing will require that resources be consumed as needed, with the applications dictating how infrastructure is consumed. In this model, it is impossible to predict how the datacenter will behave with enough precision to be actionable. This means that not only must the infrastructure be customizable on a mass scale, but it must also be capable of rapid change to accommodate shifting conditions.
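
In code terms, mass customization looks less like hand-editing device configurations and more like rendering per-application policy from declared requirements, and re-rendering whenever conditions shift. The sketch below assumes profiles like the ones shown earlier; the queue names and the 50 ms threshold are invented for illustration:

    # Derive per-application policy from declared requirements instead of
    # hand-crafting each one. Queue names and thresholds are assumptions.
    def render_policy(name, min_bandwidth_mbps, max_latency_ms, compliance=()):
        return {
            "app": name,
            "queue": "priority" if max_latency_ms < 50 else "bulk",
            "guaranteed_mbps": min_bandwidth_mbps,
            "isolated_segment": bool(compliance),  # e.g. PCI scope
        }

    apps = [("voip", 1, 20.0, ()), ("backup", 1000, 500.0, ()),
            ("payments", 10, 100.0, ("PCI",))]

    # Re-rendering the whole set is cheap, which is what makes rapid,
    # datacenter-wide change plausible.
    policies = [render_policy(*a) for a in apps]

The design choice worth noticing is that the customization lives in the generator, not in the generated artifacts, so scale and rapid change come along for free.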

The bottom line

IT moved from single purpose to general purpose a couple of decades ago. Since then, we have been adding things to our general purpose infrastructure to make it look and act more purpose built. The natural conclusion of this trend is the rise of customization on a mass scale. The implications of mass customization are profound. It requires things like application abstraction (to tell the infrastructure what should be optimized), orchestration (to drive changes across the various IT elements), and instrumentation (to verify in the affirmative that some condition has been met).
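
Those three requirements form a closed loop: declare intent, drive the change, then measure and verify. A minimal sketch, with every function body a hypothetical stand-in:

    # Abstraction -> orchestration -> instrumentation as a closed loop.
    # All values and function bodies are hypothetical stand-ins.
    def desired_state():
        # Application abstraction: what should be optimized.
        return {"voip_max_latency_ms": 20.0}

    def apply_state(intent):
        # Orchestration: push changes to network, compute, and storage
        # (e.g. via controller APIs in a real system).
        pass

    def measure():
        # Instrumentation: observe what the infrastructure actually did.
        return {"voip_max_latency_ms": 18.0}  # stand-in telemetry

    def converge():
        intent = desired_state()
        apply_state(intent)
        observed = measure()
        # Verify in the affirmative that the condition has been met.
        return observed["voip_max_latency_ms"] <= intent["voip_max_latency_ms"]

    print(converge())  # True when the intent is verifiably satisfied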

The question for datacenter architects is how to make progress toward this end state while still solving today’s problems. One thing is certain, though: continuing to build out general purpose infrastructure for a multi-purpose ecosystem is a sure way to reach obsolescence.

