The Myth of Cloud Insecurity

Part 1 – The False Sense of Physical Security

As is often the case when new paradigms are advanced, cloud computing as a viable method for sourcing information technology resources has met with many criticisms, ranging from doubts about its basic suitability for enterprise applications to warnings that “pay-as-you-go” models harbor unacceptable hidden costs at scale.  But perhaps the most widespread, and the most difficult to refute, is the notion that the cloud model is inherently less secure than the traditional data center model.

It is easy to understand why some take the position that cloud is unsuitable, or at least very difficult to harness, for conducting secure business operations.  Traditional security depends heavily on the fortress concept, one that is ingrained in us as a species: we have a long history of securing physical spaces with brick walls, barbed-wire fences, moats, and castles.  Security practice has long advocated placing IT resources inside highly controlled spaces, with perimeter defenses as the first and sometimes only obstacle to would-be attackers.  Best practice teaches the “onion model,” a direct application of the defense-in-depth concept, in which there are castles within brick walls within barbed-wire fences, creating multiple layers of protection for the crown jewels at the center, as shown in Figure A below.

This model is appealing because it is natural to assume that if we place our servers and disk drives inside a fenced-in facility on gated property with access-controlled doors and locked equipment racks (i.e., a modern data center), then they are at their most secure.  The fact that we have physical control over the infrastructure translates automatically into a sense that the applications and data it contains are protected.  Conversely, when those same applications and data are placed on infrastructure we can’t see, touch, or control at the lowest level, we question just how secure they can really be. That requires more faith in the cloud service provider than most are able to muster.

[Figure A and Figure B: security models – the traditional nested-perimeter (“onion”) model and its cloud-era counterpart]

But the advent of the commercial Internet brought exponential growth in the adoption of networking services, and today Internet connectivity is an absolute “must have” for nearly all computing applications, not the novelty it was 20 years ago.  The degree of required connectivity is such that most organizations can no longer keep up with requested changes to firewalls and access policies. The result is a less agile, less competitive business encumbered by unwieldy nested perimeter-based security systems, as shown in Figure A above. Even when implemented correctly, those traditional security measures often fail because perimeter defenses cannot possibly anticipate all the ways that applications on either side of the Internet demarcation may interact.

The implication is that applications must now be written without assumptions about, or dependencies upon, the security profile of a broader execution environment.  One can no longer simply assume the hosting environment is secure; securing that environment is still quite important, but for different reasons.  More on this line of thinking, and the explanation of Figure B and why it is more desirable in the cloud era, in my next post.
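
To make that concrete, here is a minimal sketch of one such perimeter-independent habit: the application authenticates every request itself rather than trusting whichever network it happens to be deployed on. The signature scheme and key handling are illustrative assumptions only, not a prescription:

```python
import hashlib
import hmac

# Illustrative only: in practice the key would come from a managed secret store.
SHARED_KEY = b"example-shared-secret"


def verify_request(body: bytes, signature_hex: str) -> bool:
    """Authenticate at the application layer: the request must prove itself,
    regardless of which side of any perimeter it originated on."""
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)


def handle(body: bytes, signature_hex: str) -> str:
    # No assumption that "inside the firewall" means "trusted."
    if not verify_request(body, signature_hex):
        raise PermissionError("unauthenticated request")
    return "ok"
```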


The Private Cloud Pendulum

We once had a unified vision of how cloud would be adopted by the average enterprise.  With all the uncertainty around the security, cost, and performance of public cloud, enterprises would naturally transform their private data centers into private clouds. Once successful in that incremental transition, they would be more comfortable extending to the public cloud, resulting in the Holy Grail – a hybrid deployment.

We were only half right.

As events have unfolded, we see that hybrid clouds are indeed the desired outcome. However, the waypoints on that journey are, for a number of cloud adoption profiles, the reverse of what we had predicted. Instead of first stopping at private cloud, many skipped it entirely and went straight to public cloud, in spite of their own previously voiced objections. Why?

For all but the larger enterprises and the most capable mid-market IT organizations, a private cloud has often been too difficult to build and maintain. The technology existed, but it was far from turnkey. In the face of the challenging business and operational transformations that cloud demands, this was a distraction they didn’t need: public cloud was sitting there, gleamingly simple and ready to use, without the operational burdens.

So, for these “I need it to be as easy as possible” cloud adopters, the pendulum largely bypassed private and swung to public. But will it stay there?

The costs of public cloud are tricky to pin down and manage. You are paying a premium for someone else to handle the headaches. For many projects this makes sense. For long-running activities that don’t take advantage of public cloud’s scale and global reach, you will likely pay more than is necessary.

The chickens, as they say, will come home to roost. The true cost of public cloud will become apparent, and the advantages of private cloud more compelling for the mid-market. The pendulum will return, and when it does, we’ll have evolved private cloud technologies to make them suitable for organizations looking for an appliance-like experience instead of building an IT practice around them.

But what if we’re wrong again? Regardless of how the pendulum swings, having choices and management capabilities that span multiple clouds (public, private, whatever) ensures you’ll be able to keep cost and utility in balance. The trick is to invest in tools and technologies that enable that choice: buy and design software that is infrastructure-agnostic, dependent only upon abstracted services with analogs available from many providers. That way, whether it’s now or in the future, when you’re ready for private cloud (or it is ready for you), you’ll be in a position to further expand your selection of cloud targets by adding your own.
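
As a minimal sketch of what “infrastructure-agnostic” can look like in practice, the application below depends only on an abstract storage service, so any provider – public, private, or local – can be slotted in behind it. The interface and class names here are hypothetical, invented for illustration:

```python
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """Abstract storage service; application code depends only on this."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """Local stand-in; a public- or private-cloud provider would be
    just another subclass implementing the same two methods."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


def archive_report(store: ObjectStore, report: bytes) -> None:
    # The application never names a vendor; changing cloud targets means
    # swapping the ObjectStore implementation, not rewriting this code.
    store.put("reports/latest", report)


archive_report(InMemoryStore(), b"quarterly numbers")
```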


The Elusive Butterfly of Policy

I love it when visionaries bridge the gaps in their utopian depictions of The Future of IT with hand-waving explanations.  As one such supposed visionary, I plead guilty.  My most recent transgression: presenting the concept of “policy” in the context of data centers, workloads, etc., as if it were well understood and its supporting technologies mature enough for market adoption.

In spinning our tales of how great life will be when we finally complete this transformation, rather appropriately (given the ambiguities involved) labeled “cloud,” we realize we can’t tell a convincing story without the Policy character.  He’s like the Sheriff in the Wild West – without him enforcing The Law, it’s just too dangerous a place for normal, everyday people.

Why do I believe policy is a gating factor for accelerating cloud adoption?

Although it is the favorite analogy of cloud evangelists, electric service is not the same as compute-as-a-service.  Unlike the power company, which sends indistinguishable electrons into your home or business and eventually into the ground, cloud computing services require that data, and instructions to act upon it, move across the provider boundary.  And data is quite distinguishable, valuable, even dangerous in the wrong hands.

And that’s why policy is not simply “important” – it is essential to the success of cloud computing. Data and data access must be managed in a controlled manner, and cloud consumers will need guarantees to that effect.  Policy is the mechanism by which the degree and type of control is specified.  Policy enforcement ensures those controls are observed.
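
As a minimal sketch of that distinction, assuming a toy rule format of my own invention: the rule below specifies the degree and type of control over a class of data, and enforce() is the enforcement point that ensures the control is actually observed:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class PolicyRule:
    """Specifies the degree and type of control over a class of data."""
    data_class: str             # e.g. "customer-pii"
    allowed_roles: frozenset    # who may act on the data
    allowed_regions: frozenset  # where the data may be handled


def enforce(rule: PolicyRule, role: str, region: str) -> bool:
    """Enforcement: verify that a requested action observes the controls."""
    return role in rule.allowed_roles and region in rule.allowed_regions


# A hypothetical rule: customer PII may be handled only by auditors and
# account owners, and only within EU regions.
pii_rule = PolicyRule(
    data_class="customer-pii",
    allowed_roles=frozenset({"auditor", "account-owner"}),
    allowed_regions=frozenset({"eu-west", "eu-central"}),
)

assert enforce(pii_rule, "auditor", "eu-west")
assert not enforce(pii_rule, "developer", "us-east")
```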

Easy. (Did you feel the rush of air on your cheek?)  Seriously, although much progress has been made in beginning to express and implement policies in IT systems, it remains a largely manual, error-prone process.  Still, some technologies are beginning to emerge that give us a bit of hope we can really solve the problem.

But that’s only the first chapter of a longer story, in which the right kinds of policies must be crafted in order to meet the intended objectives; compliance is the poster-child use case for policy.  In future posts I’ll discuss how policy and automation are mutually dependent, and how together they will help us achieve policy enforcement and compliance objectives in tomorrow’s virtual data centers.


The Rebirth of Automation

I suppose by now it is a fait accompli that cloud computing is going to revolutionize the information technology industry.  It certainly seems on track to do so.  One of the main reasons for its success is the highly dynamic and virtualized environments that most cloud platforms provide.  In these modern “data centers,” every resource – from virtualized processors and storage to network configuration – can be defined, deployed, configured, managed, and maintained through software interaction alone.  No more cables to plug and unplug. No more racks to re-wire just because a server is being re-purposed. RAM can be added to an under-provisioned system in seconds, without a chassis ever being opened. And the list goes on…

It is in this only recently realized, totally virtual data center that one of IT’s oldest workhorses may finally see its full potential realized: automation. It’s the key to achieving maximum efficiency because it provides the following benefits (illustrated in the sketch after this list):

  • Task Compression – This time-honored method of increasing efficiency through abstraction is at the heart of automation’s promise. Processes that would normally require many manual steps are reduced to a single invocation.
  • Speed – Not only are machines good at repeating long lists of steps, they can do it quickly! Automated processes can be made to execute as fast as possible, with no unnecessary delays between steps.
  • Repeatable Successful Outcomes – Even with the best documentation, manual execution is prone to human error, such as missing, misreading, or misinterpreting a step. Automation helps to ensure that the same set of steps is always followed, in the same order, with the same results, every time an operation is performed.
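
To illustrate all three benefits at once, here is a toy runbook sketch; the step functions are hypothetical stand-ins for real provisioning calls. One invocation compresses the whole sequence (task compression), the steps run back-to-back with no idle time (speed), and the order is identical on every run (repeatability):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runbook")


def provision_vm() -> None:
    log.info("provisioning VM")        # stand-in for a real API call


def configure_network() -> None:
    log.info("configuring network")


def deploy_application() -> None:
    log.info("deploying application")


# Many manual steps compressed into one ordered, repeatable sequence.
RUNBOOK = [provision_vm, configure_network, deploy_application]


def run(steps) -> None:
    for step in steps:
        try:
            step()
        except Exception:
            # Fail loudly and consistently rather than drift into an
            # unknown state, as a half-followed manual checklist might.
            log.exception("runbook halted at %s", step.__name__)
            raise


run(RUNBOOK)
```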

So, if automation is so great, why is it only now getting broad traction?

“Run Book Automation” (RBA) has been around for decades, but it has never been an easy-to-use or widely accessible technology.  Sure, it’s been possible to fully (well, nearly fully) automate many of the most mundane, frequently used, or most error-prone manual processes through the brute force of a million different APIs and a few special-purpose proxies and operating system agents.  But those implementations – with many moving parts under the control of as many different companies, none with much incentive to ensure their pieces work “better together” – have proved extremely brittle.  The expense of the software and human resources needed to design, construct, and maintain such systems has made them useful only to the largest of organizations.

A modern incarnation of RBA – “orchestration” – is well positioned to take full advantage of the new totally virtualized cloud platforms, with visual tools that make the creation and maintenance of automated processes more intuitive, and the translation of those high-level intentions to various APIs and endpoints effortless.  Not only that: codified processes, or “orchestrations,” can be packaged up and shared in the community, or sold as products themselves.  Thus, modern orchestration solutions unleash automation from the shackles of traditional RBA and make it available to the masses.  Since cloud computing is in effect “enterprise-grade data centers for the masses,” it is no surprise that orchestration will find its way into that very same market alongside it.
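
Here’s a sketch of that “process as a product” idea, using a made-up step vocabulary: because the orchestration is just data, it can be serialized, shared, or sold, and then executed by any engine that understands the actions it names:

```python
import json

# An orchestration codified as data: it can be saved, shared in a
# community, or sold, independent of the engine that executes it.
orchestration = {
    "name": "web-tier-scale-out",
    "steps": [
        {"action": "clone_vm", "params": {"template": "web-base"}},
        {"action": "join_pool", "params": {"pool": "web-lb"}},
    ],
}

# The engine's (hypothetical) vocabulary of actions.
ACTIONS = {
    "clone_vm": lambda p: print(f"cloning VM from template {p['template']}"),
    "join_pool": lambda p: print(f"adding instance to pool {p['pool']}"),
}


def execute(spec: dict) -> None:
    for step in spec["steps"]:
        ACTIONS[step["action"]](step["params"])


# A JSON round-trip demonstrates that the process really is portable data.
execute(json.loads(json.dumps(orchestration)))
```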

Although orchestration is finally hitting its stride, uncontrolled automation is a recipe for disaster.  That’s where “policy” comes in… a topic for another day.


Are End Users Too Stupid To Self-Serve Workloads?

In one of my former lives I founded a tech support team on a college campus.  Staffed with computer science students, the team was responsible for rolling out and supporting Internet and desktop computing technologies across a population of several thousand people.  As CS students, I suppose they were a bit of an elitist group.  They invariably faced users who were largely unfamiliar with these new tools and simply ignorant of how they worked, which generated support tickets.  When that happened, the team used a special support code to tag the incident:

Code PEBKAC = “Problem Exists Between Keyboard and Chair”

Used less often was:

Code ID-10-T = “Idiot”

Of course, over time they learned the technology and these smug little codes fell into disuse.

Today, I’m seeing a similar attitude in the IT support cultures of some companies considering rolling out cloud technologies in their organizations. Specifically, the concept of self-service is drawing a considerable amount of snickering and eye-rolling from many an IT professional.  The idea that we would actually give a USER the ability to requisition SERVERS through a portal is laughable, and is immediately dismissed.  Why?

The reason invariably given: Users are too stupid to know when they need a server, and why, and they’d just end up creating a big mess that IT would have to clean up later.

But how much of that attitude is grounded in reality, and how much is rooted in an elitism that many IT organizations see being eroded by the user emancipation that cloud, and attendant concepts like self-service, bring?  I suspect the latter.

Keep in mind that “end user” is a broad term.  It could be the marketing professional needing a web site for a promotion, or a software dev who would normally have to wait months for a work order to complete before an environment is available to work in.  The point is: they are customers of the IT organization.

It can’t be denied that IT’s role in deploying workloads for the business is diminishing.  Most users view the IT organization as a barrier to getting work done, not a helper.  When it can be sidestepped, it will be. Unlike those users of many years ago, who had never accessed the Internet or had a desktop PC all to themselves, today’s end users are computer-savvy, and self-service is something they understand from many contexts in their experience, from online shopping to subscribing to Internet services like Skype.  Doing it for the workloads they need to achieve their business objectives is natural and inevitable.

So if you find yourself scoffing at the very idea that an end user should be allowed to (gasp!) self-serve and deploy their own web servers, etc., as needed, take a moment to search your soul: Is it because you REALLY don’t think they’re capable of making a good decision, or because you’re afraid they just won’t need you anymore?



Why We’ll Gladly Pay More to Compute in the Clouds

The idea that public cloud computing is cheaper than traditional forms of IT staging, such as the on-premises data center and co-location, may have had legs in the early days of cloud’s buzz, but the truth has finally been widely recognized: cloud computing isn’t, at face value, cheap – or cheaper than what we’ve been doing up until now.

If that’s the case, then why is cloud catching on?  Isn’t the ultimate goal to lower cost and boost the bottom line, regardless of one’s business?

Moving to the cloud has a much broader impact than simply relocating one’s IT assets to an alternative hosting arrangement.  I think of it in much the same way that the value proposition of virtualization has been slowly, but fully, realized over the years.

In its early days, x86 virtualization was seen purely as a means of improving hardware utilization by increasing the application-to-box ratio.  “Consolidation” was the easy win, and it appeared to lower IT expenses because it reduced the hardware budget.

But time revealed that virtualization has a hidden price in increased management cost, the most famous example being “VM sprawl.”  By the time this became apparent, though, the really valuable capabilities of virtualization had begun to take center stage: live migration, rapid deployment, portability of workloads, dynamic resource allocation. Roll them into one term, “flexibility,” and you see why there is more value in virtualization than just bottom-line IT costs – it enables the overall business in new ways that were not possible or practical before.

Cloud computing is entering that phase of public awareness where its true benefits are being appreciated. The flexibility ascribed to virtualization also applies to cloud, but there are other benefits as well: disaster recovery, capacity on demand, carrier-grade reliability, expense-structured payment, and global reach. The value of these is not reflected in the bottom-line cost of the monthly bill.  It is woven into the business’s new agility and simplicity.

So the old rule holds true: only things that improve the bottom line will survive in the B2B marketplace. Cloud computing obeys that law by bringing new value to the table in ways not possible prior to its advent.  Although we’re still learning how to exploit these new capabilities, and to quantify their considerable untapped benefit, there’s no doubt the value is there, and worth the additional cost.



IaaS Exodus

I fear my thinking of late may be somewhat inconsistent: one part of my brain sees quite clearly that PaaS (platform as a service) is destined to win as the preferred method for developing and staging software in the future.  I firmly believe that.  At the same time, part of me has been assuming that the “migration to cloud” simply means swapping traditional OS/server infrastructure – which I call “legacy IaaS” – for private or public cloud IaaS.  And that, I believe, is not going to be the case in the limit.

Perhaps, as a software guy working for a traditionally hardware-oriented company, I’ve allowed part of me to succumb to fallacious thinking: that people will always want to manage their own infrastructure – install their own operating system, configure it, and then stage software on top of that stack.  But my belief that PaaS wins is completely at odds with that thinking.

There’s no doubt that in the short term, consumers of IT will find it valuable to simply migrate their current legacy IaaS workloads, whether they are physical or virtual, from traditional data centers to IaaS cloud platforms.  It’s relatively straightforward and usually doesn’t require an application rewrite. The trend is accelerating and will continue for some time for various reasons: to control spending, be more agile in responding to market demands, or to take advantage of cloud-specific benefits such as better global reach or multiple staging points across geographies.  But these benefits are inherent to the fundamental nature of “cloud computing” and are not specific to IaaS.

It will become quite clear, quite soon, that IaaS is not the right delivery model on which to build end-user consumed software (SaaS).  Managing one’s own OS and run-time environment stack may give one a sense of control and security, but it isn’t scalable or competitive in the face of PaaS alternatives.  At some tipping point, migrations from traditional IaaS to cloud IaaS will be redirected to PaaS.  More than that, those who have migrated to cloud IaaS will make a final migration to PaaS, leaving IaaS behind for good.

This “IaaS exodus” is inevitable.  Whether it be on-premises or off, private or public, managed or unmanaged – Infrastructure as a Service can’t compete with Platform as a Service for application development and delivery.  Infrastructure will always be there, but it will be running the PaaS – not the apps.



Why PaaS Wins

Working as a technology strategist for a large IT vendor, I’m tasked with thinking about technology from the customer’s perspective.  But customers often ask for things not because those things are what they truly want, but because they have underlying needs they are trying to address in the best way they know how.  The ask is a secondary effect of a larger issue.

I think IaaS and PaaS are good examples of an expressed want versus a solution that addresses the underlying need.  IaaS is of the former kind, because it is an incremental enhancement to a now-outdated paradigm for implementing IT – the general-purpose operating system (GPOS).

GPOSes grew out of an era when hardware was insanely expensive and isolated.  This necessitated the evolution of the lowly “monitor” program into an all-purpose middleware layer that abstracted hardware into easier-to-program-for services and securely, efficiently multiplexed those resources across as many concurrent processes as possible.  Although these advancements optimized resource utilization, they made application programming both easier and more difficult in various respects, and did not initially account for internetworking on the scale of today’s Internet.

We’ve since entered a different era of inexpensive, broadly distributed, well connected hardware.  Backing resources for compute, storage, and network can be so highly abstracted by this new “distributed operating system” that application development can be vastly simplified, and the staging and execution of those applications are no longer constrained by hardware boundaries.  That’s PaaS.
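
As a rough sketch of that simplification – with a platform API invented purely for illustration – a PaaS-era application can shrink to business logic bound to abstract services, with no operating system, server, or network configuration in sight:

```python
# A hypothetical PaaS-style handler: the platform hands the application
# abstracted services (here, a key-value mapping standing in for durable
# storage); there is no OS to install, patch, or configure.
def handle_order(order: dict, store: dict) -> dict:
    store[f"orders/{order['id']}"] = order   # platform-provided storage
    return {"status": "accepted", "id": order["id"]}


# Locally, a plain dict can stand in for the platform's storage service.
if __name__ == "__main__":
    fake_store: dict = {}
    print(handle_order({"id": "42", "sku": "widget"}, fake_store))
```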

But making the jump from old to new, to take advantage of this new distributed world, requires rewriting “legacy” apps. And once that decision is made, one must determine in what language and on what platform – and there’s still looming uncertainty over which ones will survive the coming shakeout. It’s painful and doesn’t happen overnight, but the PaaS transition will happen in time, and here’s why…

IaaS is a hack that makes the transition less painful by providing a waypoint on the journey between old and new.  It’s the old GPOS wearing a fuel-guzzling rocket-pack, allowing legacy apps to take to the clouds, expensively and clumsily.  It doesn’t solve the problem, but it gets you a few steps closer to the ideal.

In the end, I believe IaaS becomes the ultimate salesman for PaaS, so I’m not opposed to promoting it as an intermediate solution.  As organizations move legacy apps to IaaS, they’ll reap only a small fraction of the benefits they could be enjoying if they went all in with PaaS. By the time that realization hits, the PaaS wars will be over, and the inertia to fully embrace it will be easily overcome.


When Will My Data Center Grow Up To Be a “Private Cloud?”

In continuing to further the notion that hybrid cloud deployments consist of any two or more clouds (public/private, on/off-premises, single/multi-tenant), I’ve heard a few voices raised in objection.  The argument goes like this:

  1. A “cloud” is a compute infrastructure with characteristics of elasticity, resource sharing, rapid deployment, end-user empowerment, and many others, depending on whom you ask.
  2. My organization’s data center is, by definition, not a cloud because it doesn’t exhibit all of those “cloudy” characteristics.
  3. Yet, I am in the process of connecting my data center to the public cloud in various ways, even if only through simple SaaS integrations.
  4. Therefore, I am doing hybrid.
  5. Therefore, hybrid must be something other than two or more clouds working together.

At this point there is a smug widening of the eyes and nostrils at having made such a Spock-like argument worthy of causing malicious androids to self-destruct.

My response? I have several:

  1. Your organization’s data center is on a journey to becoming more cloud-like every day. It may lack some of the attributes at the moment, but it will be more capable in the future.  I suspect virtualization, for example, was not in your data center 10 years ago, but it is today. Rapid deployment is a natural outgrowth of that. End-user self-service is just a matter of time.
  2. Terms like hybrid cloud and even cloud computing are simply the latest fashionable ways we talk about distributed computing technology.  Look at the literature from 30 years ago: researchers envisioned a “distributed operating system” with characteristics of location transparency, distributed resource sharing, resource pooling, etc.  We’re still on that journey, and some of the key technologies needed to fulfill that dream have only recently appeared and are rapidly transforming IT.
  3. Hybrid is important because of the “two-or-more” aspect of the definition.  The fact that those endpoints are also “clouds” is not nearly as important.  The degree of “cloudiness” is sure to vary among them. Furthermore, there may be much more profound differences, such as service model (IaaS/PaaS/SaaS) – none of which should exclude them from participation in a hybrid composite cloud.

Perhaps you can construct other comebacks, but the main point is this:

It is more important that we develop flexible hybrid technologies that can connect any two compute endpoints on the Internet, regardless of proximity, tenancy, ownership, management model, or degree of “cloudiness.”  The goal is to build integrated systems that deliver on the full promise of distributed computing in-the-large.

If we begin to think of hybrid as “private-to-public” or “not-a-cloud-to-a-cloud,” we may make assumptions that limit the application of those technologies in other settings where the assumptions do not hold.



The “Who” Attributes of Public and Private Cloud Deployments

As we continue to deal with the ambiguity of “public” and “private” as definitions for cloud deployments, it occurs to me that most of the unknowns created by that ambiguity can be resolved by asking “Who?” questions.  At first this may seem trivial, since “public” and “private” naturally lead to the question “Who can use this cloud?”  But it isn’t quite that simple.

In a previous post I argued that “public” and “private” are overloaded in industry parlance, and that one must ask more specific questions about proximity, ownership, management, and tenancy to get a precise understanding of what someone means when they call a cloud “public” or “private.” At the risk of appearing obsessed with defining these terms, I’m augmenting that list with a fifth question, “scope,” which captures NIST’s intended meaning for “public” and “private.” All five are “Who” questions, and the answers may differ from cloud to cloud. To be specific (a small sketch encoding these attributes follows the list):

  • Proximity – “On- or Off-premises”
    On whose premises is the equipment backing the service installed?
  • Ownership – “User or Other”
    Who paid for the equipment and continues to pay for its upkeep?
  • Management – “Self or Outsourced”
    Who is responsible for making sure it is operational?
  • Tenancy – “Single- or Multi-tenant”
    Of those using this cloud, who else can I potentially “see” when I am using it?
  • Scope – “Public or Private”
    Who outside of my organization or company is allowed to also use it?
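
As a small sketch, here are the five attributes encoded as a data structure, with a hypothetical helper hinting at how attribute mismatches surface the boundary-crossing problems discussed below; the field values and concern strings are illustrative assumptions:

```python
from dataclasses import dataclass
from typing import Literal


@dataclass(frozen=True)
class CloudProfile:
    """One cloud endpoint, described by answers to the five 'Who' questions."""
    proximity: Literal["on-premises", "off-premises"]
    ownership: Literal["user", "other"]
    management: Literal["self", "outsourced"]
    tenancy: Literal["single-tenant", "multi-tenant"]
    scope: Literal["public", "private"]


def boundary_concerns(a: CloudProfile, b: CloudProfile) -> list:
    """Flag the attribute boundaries a hybrid link between a and b must cross."""
    concerns = []
    if a.proximity != b.proximity:
        concerns.append("WAN links and optimization")
    if a.tenancy != b.tenancy:
        concerns.append("tenancy assumptions / compliance review")
    return concerns


# A typical hybrid pairing: a self-managed private data center joined
# to a public, multi-tenant provider.
private_dc = CloudProfile("on-premises", "user", "self", "single-tenant", "private")
public_iaas = CloudProfile("off-premises", "other", "outsourced", "multi-tenant", "public")
print(boundary_concerns(private_dc, public_iaas))
```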

And why is it important to be so precise in defining these terms and questions?  Because the success of hybrid cloud deployments depends upon them.  One must create connections that traverse ownership boundaries in order to compose multiple clouds into cohesive, functioning distributed systems. The nature of the relationship (spatial, legal, business affinities, etc.) between the owners of the different attributes has a direct effect on the problems that will be encountered in crossing a boundary, and on the solutions required to overcome them.  Some are fairly obvious: going from on-premises to off-premises requires WAN links and optimizations.  Others, such as joining a single-tenant cloud to a multi-tenant one, may raise compliance issues when single-tenant assumptions no longer hold in the other cloud.
