Thursday, April 7, 2016

Blueprint: Top 10 Best Practices for Planning and Conducting an Endpoint PoC

by Paul Morville, Founder and VP of Products, Confer

Few things are more disappointing or costly than deploying a product that fails to live up to the vendor’s claims or doesn’t meet the team’s expectations. More often than not, there is a large grey area between what the PowerPoint slides promise and what the product will actually deliver. A well-structured Proof of Concept (PoC) can be extremely useful in turning this grey area into black and white. But these PoCs can be complicated and costly to run, sapping security operations center and security analyst resources that are already spread too thin.

For endpoint security, planning and conducting a good PoC is even more important than usual because security’s reputation is on the line. While improving endpoint security is essential in today’s environment, endpoint deployments can be risky. They are highly visible across the company, and a failed deployment will get the security team into hot water with its end users.

By designing a solid and comprehensive PoC, you can vastly improve your chances of managing the gaggle of vendors vying for your business, making the right decision and, ultimately, ensuring a smooth rollout and a successful project.

Our Top 10 Do’s and Don’ts:

1: Don’t delegate the scoping and planning process

Senior security team members are typically at maximum capacity, so it’s tempting to delegate the task of planning a PoC to a more junior staff member. Don’t. The PoC is the chance to define what the organization wants from an endpoint security solution in terms of technical, operational and business requirements. In forward-thinking organizations, an experienced CISO is engaged in the upfront planning to ensure the requirements are well-defined.

2: Do ask yourself, “Will it flatten the stack?”

When testing a product, ask yourself whether it will help you flatten the endpoint security stack, thereby reducing management cost and complexity. How many items can you check off on your requirement list? How many endpoint agents can you retire?

The PoC should thoroughly evaluate every function the product claims to offer. For example, if the product blocks attacks – what kind? If the product supports incident response, does it give full visibility into the details and impact on the endpoint?

3: Do adopt the mindset of the adversary

The PoC test should serve as a proxy for the determined adversaries the organization faces. By adopting the mindset of the adversary, the CISO can emulate the types of attacker behaviors they are likely to face.

Skilled attackers can easily penetrate most networks, so the test scenarios should not focus solely on breach prevention. It’s also critical to evaluate the level of damage the attackers can do once they are inside the network, and how readily their behavior can be detected and thwarted.

4: Do form Red and Blue Teams

Conducting a PoC that most accurately reflects a real-world scenario in a specific organization requires selecting members of the security staff to mimic the attackers who are constantly trying to compromise employees’ devices and steal valuable data. These employees become the Red Team. On the flip side, staff members chosen to mimic the defenders, those who work to mitigate all threats facing the organization, become the Blue Team. If everyone knows their roles, the PoC will be as close to reality as possible.

5: Do allow those teams to work together

Often, the Red Team launches an attack and then, a month later, writes a report that says, “We got in, and here are the vulnerabilities we found.” The PoC will be far more useful if one or two key members of the Blue Team are sitting alongside the Red Team and interacting with them. The Blue Team can watch how an attack unfolds, analyze how the defenses react, and evaluate what kind of information is generated by the product being tested. In turn, this gives them a better sense of how the product can actually be used, and how it will perform in a real-world environment.

6: Do testing in both the lab and the real world

A typical midsize enterprise will have over 5 million executables in its environment and will see upwards of 5,000 new executables enter that environment every day. Every one of these executables has the potential to generate a false positive, but that’s impossible to simulate in a lab. Therefore, a well-designed PoC will strike a balance between bench-testing live malware in a virtual-lab setting and testing a subset of the real-world production environment under the conditions of an actual attack. An effective PoC should include deployment on at least 20 devices from the general population to provide that real-world perspective.
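
To ground that false-positive math during the real-world phase, it helps to measure how many new executables actually appear on the pilot machines. The short Python sketch below is one illustrative way to do that on a single endpoint: it hashes the executables under a directory tree and reports which hashes were absent from the previous run’s baseline. The scan root, file extensions and baseline file name are assumptions made for the example, not features of any product.

    # new_executables.py - rough count of executables that appeared since the last run.
    # Illustrative sketch only: the scan root, extensions and baseline file are assumptions.
    import hashlib
    import json
    import os

    SCAN_ROOT = r"C:\Program Files"      # directory tree to inventory (assumption)
    BASELINE_FILE = "exe_baseline.json"  # hashes recorded on the previous run
    EXTENSIONS = (".exe", ".dll")        # what counts as an "executable" for this sketch

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan(root):
        hashes = set()
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower().endswith(EXTENSIONS):
                    try:
                        hashes.add(sha256_of(os.path.join(dirpath, name)))
                    except OSError:
                        pass  # skip locked or unreadable files
        return hashes

    if __name__ == "__main__":
        current = scan(SCAN_ROOT)
        if os.path.exists(BASELINE_FILE):
            with open(BASELINE_FILE) as handle:
                baseline = set(json.load(handle))
            print(len(current - baseline), "executables not seen in the previous baseline")
        else:
            print("No baseline yet; recording one now.")
        with open(BASELINE_FILE, "w") as handle:
            json.dump(sorted(current), handle)

Run it once to record a baseline, then daily during the pilot; the day-over-day deltas give a concrete sense of how much new software the product under test will have to judge.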

7: Do use a representative set of attacks

Organizations are most likely to be attacked by the same actors who have attacked them in the past, using methods that were previously successful. The goal, therefore, is not to test against the most obscure or exotic malware, but to focus on threats the organization has already faced. Maintaining a repository of malware samples from past incidents is a good start. Also include malwareless attacks, such as document-based attacks or malicious PowerShell scripts; they are common in today’s enterprise and can be just as damaging as a malware-based attack.
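
A sample repository does not need to be elaborate; a directory of files organized by incident plus a manifest recording each sample’s hash and origin is enough to make PoC runs repeatable. The Python sketch below shows one way to build such an index; the samples/<incident-id>/ layout and the CSV columns are assumptions chosen for the example.

    # build_manifest.py - index a local sample corpus by SHA-256 for repeatable PoC runs.
    # Sketch only: the samples/ directory layout and CSV columns are assumptions.
    import csv
    import hashlib
    import os
    from datetime import date

    SAMPLES_DIR = "samples"           # expected layout: samples/<incident-id>/<file>
    MANIFEST = "sample_manifest.csv"

    def sha256_of(path):
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    with open(MANIFEST, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["sha256", "file_name", "source_incident", "added"])
        for incident in sorted(os.listdir(SAMPLES_DIR)):
            incident_dir = os.path.join(SAMPLES_DIR, incident)
            if not os.path.isdir(incident_dir):
                continue
            for name in sorted(os.listdir(incident_dir)):
                path = os.path.join(incident_dir, name)
                if os.path.isfile(path):
                    writer.writerow([sha256_of(path), name, incident, date.today().isoformat()])

    print("Manifest written to", MANIFEST)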

8: Don’t blindly accept tests from your vendors

If a CISO relies on the vendor to provide malware test samples, it is important to ask how those samples were derived. Vendors sometimes skew PoC results by repackaging known malware so that it evades their competitors’ products but is detected by their own engine (not a big surprise, since they generated it). Ask questions and use a mixture of sources.
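
One practical cross-check is to look up the vendor-supplied hashes against a multi-engine scanning service and see how broadly each sample is already detected; a file that only the sponsoring vendor’s engine flags deserves a hard look. The sketch below uses VirusTotal’s public v2 file-report API as one example of such a service; it assumes you have your own API key and a list of SHA-256 hashes, and the placeholder key and hash must be replaced.

    # vet_samples.py - ask a multi-engine service how widely each vendor-supplied hash is detected.
    # Minimal sketch against VirusTotal's public v2 API; your own API key is assumed.
    import time
    import requests

    API_KEY = "YOUR_VT_API_KEY"  # assumption: replace with your own key
    REPORT_URL = "https://www.virustotal.com/vtapi/v2/file/report"

    hashes = [
        # SHA-256 values taken from the vendor's sample set (placeholder shown here)
        "0000000000000000000000000000000000000000000000000000000000000000",
    ]

    for sample_hash in hashes:
        response = requests.get(REPORT_URL, params={"apikey": API_KEY, "resource": sample_hash})
        report = response.json()
        if report.get("response_code") == 1:
            print(sample_hash[:12], "... detected by", report["positives"], "of", report["total"], "engines")
        else:
            print(sample_hash[:12], "... unknown to the service; possibly repackaged just for this PoC")
        time.sleep(15)  # the public API allows roughly four requests per minute

A sample nobody else has ever seen is not automatically bad, but it should prompt exactly the sourcing questions described above.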

9: Don’t test malware on a live network

At the risk of stating the obvious, it is never wise to test live malware in a production environment. Inexperienced security personnel have actually done this, triggering a full-scale outbreak in the environment. For live malware testing, the best approach is to use a segregated network of virtual machines that are reimaged immediately after each infection, so the test never turns into an actual attack.
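
If the lab runs on ordinary desktop virtualization, the revert step can be scripted so that no analyst ever forgets it. The sketch below assumes VirtualBox, a VM named malware-lab-01 and a snapshot named "clean" taken before any sample was run; all three names are placeholders, and equivalent flows exist for VMware or libvirt.

    # revert_vm.py - power off a detonation VM and restore its clean snapshot after each test.
    # Sketch assuming VirtualBox; the VM and snapshot names are placeholders.
    import subprocess

    VM_NAME = "malware-lab-01"  # assumption: name of the isolated test VM
    SNAPSHOT = "clean"          # assumption: snapshot taken before any sample ran

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=False)  # poweroff fails harmlessly if the VM is already off

    # Hard power-off, restore the known-good snapshot, then bring the VM back up headless.
    run(["VBoxManage", "controlvm", VM_NAME, "poweroff"])
    run(["VBoxManage", "snapshot", VM_NAME, "restore", SNAPSHOT])
    run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"])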

10: Don’t test on a suspect endpoint

When conducting a PoC, it can be tempting to “kill two birds with one stone” by including real devices that are suspected of already having been compromised. This approach is not advised because it presents an incomplete picture. If the attacker has already come and gone, you often have very little to go on. Unless you plan to install the product exclusively post-incident, try to simulate the whole attack lifecycle.

Following these 10 best practices will help test how well a product addresses specific endpoint security requirements in the only environment that truly matters – yours.

About the Author

Paul Morville has been working in information security for more than 15 years. Prior to founding Confer, Paul held numerous roles at Arbor Networks, including VP Product Management and VP Corporate Business Development. Paul was an early employee at Arbor and helped take the company from pre-revenue to more than $100M in annual sales, establishing it as the network security leader in DDoS detection and prevention.

While there, Paul developed and launched Arbor’s flagship enterprise network security product line; established partnerships with ISS/IBM, Cisco and Alcatel-Lucent; managed Arbor’s Security Engineering & Response research team; acquired a company; and ultimately managed Arbor’s sale to Danaher Corporation in 2010.

Prior to entering the security industry, Paul worked for several other startups. He holds an MBA with Distinction from Michigan’s Ross School of Business.

About Confer

Confer offers a fundamentally different approach to endpoint security through a Converged Endpoint Security Platform, an adaptive defense that integrates prevention, detection and incident response for endpoints, servers and cloud workloads. The patented technology disrupts most attacks while collecting a rich history of endpoint behavior to support post-incident response and remediation. Confer automates this approach to secure millions of devices, regardless of where they are, allowing security teams to focus on more important activities.

Rackspace Offers Hosted OpenStack Private Clouds

Rackspace is now offering its fully managed OpenStack services in any data center -- including a private enterprise data center, a third-party data center of the customer's choosing, a Rackspace-supported third-party colocation facility or a Rackspace data center.

Rackspace will fully manage the underlying OpenStack software and hardware, including all compute, network and storage. The company promises "Fanatical Support."

The company said this new approach enables customers to run OpenStack private clouds without the high cost, risk and operational burden of doing it themselves, and to free up money and resources by moving their IT infrastructure from a capital expense to an operating expense model.

“Companies realize they can free up money and resources for more strategic business investments when they turn their IT capital expenses into operating expenses,” said Darrin Hanson, GM and VP of OpenStack Private Cloud at Rackspace. “When OpenStack is consumed as a managed service, businesses can remove non-core operations, reduce software licensing, and minimize infrastructure acquisition and IT operations costs.”

http://www.rackspace.com

Unwired Planet to Sell Patent and Trademark Assets

Unwired Planet, an intellectual property company focused exclusively on the mobile industry, will sell its patent and trademark assets to Optis UP Holdings for $30 million in cash and up to an additional $10 million in cash on the second anniversary of the closing of the transactions.

Unwired Planet claims approximately 2,500 issued and pending US and foreign patents, which include technologies that allow mobile devices to connect to the Internet and enable mobile communications. The portfolio covers key mobile technologies, including baseband mobile communications, mobile browsers, mobile advertising, push notification technology, maps and location-based services, mobile application stores, social networking, mobile gaming, and mobile search.

http://www.unwiredplanet.com/

Intel Acquires YOGITECH for ADAS

Intel is acquiring YOGITECH S.p.A., which specializes in semiconductor functional safety and related standards. Financial terms were not disclosed.

YOGITECH's work focuses on functional safety (including Advanced Driver Assistance Systems or ADAS) of transportation and factory systems. One of the fastest-growing segments in automotive electronics, ADAS makes features like assisted parking possible and paves the way for fully autonomous vehicles in the not-so-distant future.

The YOGITECH team, based in Italy, will join Intel’s Internet of Things Group.

https://newsroom.intel.com/editorials/blog-intel-acquires-yogitech-for-iot-functional-safety/

Electric Imp Raises $21 Million for IoT Platform

Electric Imp, a start-up based in Los Altos, California, with offices in Cambridge, UK, raised $21 million in Series C funding for its IoT platform that securely connects devices to advanced cloud computing resources.

Electric Imp's solution includes fully integrated hardware, OS, security, APIs and cloud services.

London-based Rampart Capital led the funding round alongside company insiders and returning venture capital firm Redpoint Ventures. This brings total funding to $43 million.

"This funding is a natural step in Electric Imp’s ongoing expansion and validates our approach with large commercial and industrial customers including Pitney Bowes and other yet to be announced global enterprises,” said Hugo Fiennes, CEO and co-founder of Electric Imp. "Our company is strategically positioned to maximize the potential of our industry-leading technology platform where proven security and scalability are critical to commercial and industrial enterprises.

“In 2014, we proved the reliability and usability of our scalable platform in the consumer market, and partnered with Murata to design and build our hardware modules, enabling our customers to connect their devices quickly, easily, and securely,” continued Fiennes. “In 2015, we launched our enterprise cloud offerings, which allow customers to build on top of our class-leading platform, accelerating their company-wide IoT strategies. Our continued focus on enterprise services has helped us with key customer wins, and has enabled our customers to get their devices connected in record time without sacrificing security.”

https://electricimp.com

Puppet Refreshes its Brand

Puppet Labs officially shortened its name to "Puppet" as part of a corporate rebranding aimed at the $200 billion software infrastructure market that is emerging as a result of mass migration to the cloud.

“Software powers everything around us, from the devices on our wrists and our walls to the work we do, the fun we have, and everything in between. Modern cars are powered by millions of lines of code, our financial world is entirely mediated by software to enable speed and throughput, and it’s critical to delivery of core functions like medicine, utilities, and food. Nevertheless, most businesses take weeks, months and even years to deliver everything from simple upgrades to the latest innovations, and too much of this software is out of date, insecure, and thus a barrier to progress rather than an enabler of it,” said Luke Kanies, Puppet founder and CEO.

Puppet also today announced new leadership, product updates, integrations, resources and branding.

Sanjay Mirchandani was named president and COO -- the first executive to hold this position at Puppet. He previously served as a senior vice president of VMware.

Project Blueshift and Puppet Enterprise 2016.1 – Blueshift represents Puppet's engagement with leading-edge technologies and their communities — technologies like Docker, Mesos and Kubernetes — and Puppet's commitment to giving organizations the tools to build and operate constantly modern software. The new Puppet Enterprise 2016.1 gives customers direct control of — and real-time visibility into — the changes they need to push out, whether to an app running in a Kubernetes cluster or a fleet of VMs running in AWS.

Atlassian HipChat integration – This new integration makes it possible for DevOps teams to direct change with the Puppet Orchestrator, see change as it occurs, then discuss and collaborate on changes in process — all right in HipChat.

Splunk integration – Proactive monitoring of infrastructure and applications is a key DevOps practice, enabling continuous improvement. The Puppet Enterprise App for Splunk now extends the Splunk platform to Puppet customers to diagnose issues and solve problems faster, so they can deploy critical changes with confidence.

https://puppet.com

Molex Acquires Interconnect Systems

Molex has acquired Interconnect Systems, which specializes in the design and manufacture of high density silicon packaging with advanced interconnect technologies.

Interconnect Systems, which is based in Camarillo, California, delivers advanced packaging and interconnect solutions to top-tier OEMs in a wide range of industries and technology markets, including aerospace & defense, industrial, data storage and networking, telecom, and high performance computing.

Molex said the acquisition enables it to offer a wider range of fully integrated solutions to customers worldwide.

“We are thrilled to join forces with Molex. By combining respective strengths and leveraging their global manufacturing footprint, we can more efficiently and effectively provide customers with advanced technology platforms and top-notch support services, while scaling up to higher volume production,” said Bill Miller, president, ISI.

http://www.molex.com