by Harsh Karmarkar, Director, Solutions Consultants, Alliances & Channels at Apcera
Enterprises today are looking to hybrid cloud to achieve a range of goals: to cut costs; enable a more flexible workforce; offer better customer service; and achieve greater scale. But in an era of “Big Data”, escalating security concerns, and an ever more fragmented set of technology functions being moved to the cloud, fulfilling those goals requires an IT management approach that offers holistic visibility into all resources being used, both on- and off-premise—and a way to consistently govern their use.
This is where policy comes in. But all too often, enterprises are trying to apply traditional, domain-specific policy approaches to a hybrid IT landscape that is defined by an inordinate amount of complexity—and which stubbornly resists being tamed by cobbled-together point solutions.
Defining Policy
Policy in general has been a catch-all term, and one with many definitions. Policy is seen as covering everything from defining explicit corporate rules for employee interactions with customers, to ensuring compliance with legal and regulatory constraints like HIPAA, to defining basic firewall rules—and everything in between.

As a result, traditional policy approaches too often define rules either in sweeping documents or for a single, narrow domain. The result is that every network utility and access control system, and every subsystem and service, has its own set of implemented policies that may or may not conform to the overarching enterprise business goals, may not talk to each other, and may not be holistically manageable by IT.
Fortunately, there is growing acceptance that existing policy approaches are inadequate, and that the more consistent the language and constructs for the identity and location of both users and data, the easier it is to maintain an overarching view of what the data is, where it is and who is looking at it.
Ultimately, the goal of any holistic policy approach must be to implement the rules that have been put in place by the business itself and by regulatory bodies for governing data and data access. That policy engine should use an automated rules framework to fulfill those rules, by allowing or disallowing any given action at any given time by any given employee on any given system, across both cloud-based and on-prem IT environments.
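To make that concrete, here is a minimal sketch of such a rules framework. The names and structure are illustrative assumptions, not any particular vendor's engine: each requested action is checked against every rule that applies to the actor, the target system and the time of the request.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Request:
    """A single action an actor wants to perform on a system."""
    actor: str          # e.g. an employee or service identity
    action: str         # e.g. "read", "write", "deploy"
    system: str         # e.g. "billing-db/prod"
    timestamp: datetime

@dataclass
class Rule:
    """One business or regulatory rule, expressed as a pair of predicates."""
    description: str
    applies_to: Callable[[Request], bool]  # does this rule cover the request?
    allows: Callable[[Request], bool]      # if it applies, is the request permitted?

def evaluate(request: Request, rules: list[Rule]) -> bool:
    """Allow the request only if every applicable rule permits it."""
    applicable = [r for r in rules if r.applies_to(request)]
    return all(r.allows(request) for r in applicable)

# Hypothetical rule: writes to production databases only during business hours.
business_hours_rule = Rule(
    description="Writes to prod databases only between 08:00 and 18:00 UTC",
    applies_to=lambda req: req.action == "write" and req.system.endswith("/prod"),
    allows=lambda req: 8 <= req.timestamp.hour < 18,
)

request = Request("alice", "write", "billing-db/prod", datetime.now(timezone.utc))
print(evaluate(request, [business_hours_rule]))
```

The point of the structure is that the same evaluation call can be made for any employee, any system and any time, whether the workload sits on-prem or in the cloud.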
Challenges of a Hybrid Architecture
The hybrid cloud has mostly been the playground of development and testing—but that’s beginning to change. Now that many enterprises are moving into hybrid production environments, and the data perimeter is being extended into third-party domains, there’s concern about how to understand what’s being housed where, how to safeguard sensitive data and how to maintain performance in a complex environment, without adding overhead to the process.

Generally speaking, enterprises have been taking two approaches for managing the hybrid IT environment.
For one, IT can clearly separate within their management systems what’s on-premise and what’s not, and build a few links to gain elastic scale and the ability to move workloads around. But more often than not, there’s no unified tooling or governance, so it results in two management structures, a lack of overall visibility and unevenly applied operational rules.
The other approach is to treat all of the data and functions as though they were on-premise. But that gives administrators less control over the remote environment, and if organizations have sensitive information housed remotely, data sovereignty issues can arise.
In both cases, policy can lend value.
Best Practices for Hybrid Implementations
A key place to begin building a policy framework is to ask: what, ultimately, are the business goals that policy should enable? Is it effective resource allocation? Is it achieving certain performance or SLA-related benchmarks? Does the enterprise need a geographic view of, say, software licensing term compliance? Is cyber security the main focus? Or is it all of the above and more?

From there, the policy engine must have a grammar that dovetails with the business’ operational language. For instance, an enterprise may define security levels by color. But to third parties, what’s contained in, say, the purple or orange zones is completely unfamiliar. So policy engines for hybrid architectures have to map the way enterprises internally view their assets and information onto languages and processes that third parties widely accept.
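A purely hypothetical sketch of that translation step: internal color-zone labels are mapped to a classification vocabulary a third-party provider understands before policy is evaluated on its side. The labels and classes below are assumptions for illustration.

```python
# Hypothetical mapping from an enterprise's internal "color zone" labels
# to a classification vocabulary a third-party provider understands.
ZONE_TO_CLASSIFICATION = {
    "purple": "restricted",     # e.g. regulated data (HIPAA, PCI)
    "orange": "confidential",   # internal business data
    "green":  "public",         # freely shareable material
}

def classify_for_provider(internal_zone: str) -> str:
    """Translate an internal zone label; fail closed on unknown labels."""
    try:
        return ZONE_TO_CLASSIFICATION[internal_zone.lower()]
    except KeyError:
        # Unknown zones are treated as the most restrictive class rather
        # than being passed through untranslated.
        return "restricted"

print(classify_for_provider("Purple"))   # -> "restricted"
print(classify_for_provider("magenta"))  # -> "restricted" (fail closed)
```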
Today’s approaches are also often defined by what employees can’t do. But that blacklist approach does not adapt well to evolving enterprise realities. For instance, accessing social media may have been a prohibited activity two years ago—but now tweeting and updating Facebook may be critical for an employee to do his or her job.
IT administrators can instead take a whitelist approach, which explicitly allows each and every approved activity. This ensures that people perform only actions that IT understands and can manage. Often, the evaluation of one policy rule drives the next policy decision within the situation’s specific context. So the idea of identity—a sense of who has the right to do what—becomes critically important.
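A minimal whitelist sketch, assuming a simple identity-to-role mapping (the names here are hypothetical): resolving an identity to its roles is the first policy decision, and the whitelist check that follows depends on it. Anything not explicitly granted is denied.

```python
# Hypothetical identity -> roles mapping; resolving identity is the first
# policy decision, and the whitelist check that follows depends on it.
USER_ROLES = {
    "alice@example.com": {"role:marketing"},
    "bob@example.com":   {"role:dba"},
}

# Actions explicitly approved by IT, per role. Anything absent is denied.
WHITELIST = {
    "role:marketing": {"post-social-media", "read-crm"},
    "role:dba":       {"read-prod-db", "backup-prod-db"},
}

def is_allowed(user: str, action: str) -> bool:
    """Default deny: an action passes only if some role of the user whitelists it."""
    roles = USER_ROLES.get(user, set())
    return any(action in WHITELIST.get(role, set()) for role in roles)

print(is_allowed("alice@example.com", "post-social-media"))  # True
print(is_allowed("alice@example.com", "read-prod-db"))       # False: not whitelisted
```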
Approaching policy this way may take a bit more time up front to set up, but it helps optimize the IT environment in the long run.
Another basic implementation issue has to do with how policy is enforced. Many enterprises use a centralized engine to evaluate policy compliance, which is then enforced in a distributed way, out in a remote cluster. But whenever evaluation is centralized and enforcement is distributed, gaps and inconsistencies in how the rules are applied can creep in.
A better approach is to ensure that every actor within the system is governed locally by the set of policies that specifically affects him or her. The policy engine, for both the evaluation and enforcement of compliance, is distributed to all of the agents in the system, in both on-premise and remote environments.
That means that there’s no queue for a central engine to make decisions. So whether the infrastructure has 10 actors or 10,000, scaling doesn’t result in a bigger drain on the central IT management structure.
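As a rough sketch of that distribution model (the structure and names are assumptions, not a specific product's design): a central store exists only to hand out policies, each agent keeps just the subset scoped to it, and every allow/deny decision is made locally with no round trip to a central decision service.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    scope: str                 # which agents/workloads this policy covers
    allowed_actions: set[str]  # whitelist of actions within that scope

# The full rule set lives in a central store only for distribution...
CENTRAL_POLICY_STORE = [
    Policy(scope="on-prem/finance", allowed_actions={"read", "backup"}),
    Policy(scope="cloud/us-east",   allowed_actions={"read", "deploy", "scale"}),
]

class Agent:
    """A local evaluation and enforcement point running next to a workload."""
    def __init__(self, scope: str, store: list[Policy]):
        # ...but each agent keeps only the policies that apply to its scope.
        self.policies = [p for p in store if p.scope == scope]

    def is_allowed(self, action: str) -> bool:
        """The decision is made here, locally, with no central queue."""
        return any(action in p.allowed_actions for p in self.policies)

cloud_agent = Agent("cloud/us-east", CENTRAL_POLICY_STORE)
print(cloud_agent.is_allowed("scale"))   # True, decided locally
print(cloud_agent.is_allowed("backup"))  # False for this scope
```

Because each agent carries its own decision logic, adding more agents spreads the evaluation work rather than concentrating it.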
This type of implementation is a fundamentally different approach to architecting the policy brain than what we typically see emerging in the hybrid cloud. But for forward-thinking enterprises, taking steps now to accommodate the complexities of unstructured data, multiple user types and a hodgepodge of domains will give them the ability to programmatically control what an app or workload does, without requiring the IT staff to write code or resort to other manual practices. Thus, they will find themselves delivering better customer service, driving efficiencies and safeguarding operations across the board, for now and in the future.
About the Author
Harsh Karmarkar leads the Alliances pre-sales team for Apcera.
About Apcera
Based in San Francisco, California, Apcera has deployed the world's first policy-driven platform for Global 2000 companies. Continuum, Apcera's flagship product, is a PaaS++ that deploys, orchestrates and governs a diverse set of workloads, on-premise and in the cloud. In September 2014, Ericsson purchased a majority interest in Apcera, though Apcera remains an independent company.