Endpoints Facing A New Era With Illumination – Charles Leaver

Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver

The traditional perimeter is dissolving quickly. So what happens to the endpoint?

Investment in perimeter security, as defined by firewalls, managed gateways, and intrusion detection/prevention systems (IDS/IPS), is under scrutiny. Returns can no longer justify the cost and complexity of building, maintaining, and validating these aging defenses.

More than that, the paradigm has changed: employees no longer work exclusively in the office. Many people log time from home or while traveling, and neither location sits under the umbrella of a corporate firewall. Instead of keeping the bad guys out, firewalls often have the inverse effect: they keep authorized people from being productive. The irony? They create a safe haven where attackers, once inside, can hide for weeks and then move laterally to critical systems.

So What Exactly Has Changed?

The endpoint has become the last line of defense. With perimeter defenses failing and a "mobile everywhere" workforce, trust must now be enforced at the endpoint. Easier said than done, however.

In the endpoint space, identity & access management (IAM) tools are not a silver bullet. Even innovative companies like Okta and OneLogin, and cloud proxy vendors such as Blue Coat and Zscaler, cannot overcome one simple truth: trust goes beyond simple identification, authentication, and authorization.

Encryption is a second attempt at safeguarding entire libraries and individual assets. In the most recent (2016) Ponemon study on data breaches, encryption saved only 10% of the cost per breached record (from $158 to $142). It isn't the cure-all some make it out to be.

Everything is changing.

Organizations must be prepared for new paradigms and new attack vectors. They still need to grant access to trusted groups and individuals, but they have to do it in a better way.

Critical business systems are now accessed from anywhere, at any time, not just from desks in corporate office buildings. And contractors (the contingent workforce) are on track to make up over half of the total business workforce.

On endpoint devices, the binary is the primary concern. A seemingly benign event, such as an executable crash, could indicate something simple, like the Windows 10 Desktop Window Manager (DWM) restarting. Or it could be a deeper problem, such as a malicious file or the early signs of an attack.
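Triaging a crash event along these lines reduces to scoring a few signals about the binary. A minimal, illustrative sketch; the event fields, the known-benign list, and the score thresholds are assumptions for illustration, not any product's actual detection logic:

```python
# Minimal crash-event triage sketch. The event fields, the known-benign
# process list, and the scoring thresholds are illustrative assumptions,
# not any vendor's actual detection logic.

KNOWN_BENIGN = {"dwm.exe", "explorer.exe"}  # e.g. a Desktop Window Manager restart

def triage_crash(event: dict) -> str:
    """Classify a crash event as 'benign', 'suspicious', or 'investigate'."""
    score = 0
    if event["process"].lower() not in KNOWN_BENIGN:
        score += 1                      # an unfamiliar binary crashed
    if not event.get("signed", False):
        score += 2                      # unsigned executables deserve scrutiny
    if event.get("crashes_last_hour", 0) > 3:
        score += 1                      # repeated crashes hint at exploitation attempts
    if score == 0:
        return "benign"
    return "investigate" if score >= 3 else "suspicious"

# Example: a DWM restart vs. an unsigned binary crashing repeatedly.
routine = triage_crash({"process": "dwm.exe", "signed": True})
alarming = triage_crash({"process": "upd4te.exe", "signed": False,
                         "crashes_last_hour": 5})
```

The point of the sketch is the asymmetry: the same surface event (a crash) lands in very different buckets depending on endpoint context that a perimeter device never sees.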

Trusted access doesn’t solve this vulnerability. According to the Ponemon Institute, between 70% and 90% of all attacks involve human error, social engineering, or other human factors. That demands more than simple IAM; it demands behavioral analysis.

Rather than making the good better, perimeter and identity access vendors have only made the bad faster.

When and Where Does the Good Part of the Story Begin?

Taking a step back: Google (Alphabet) unveiled a perimeter-less network design in late 2014 and has made considerable progress since. Other enterprises, from corporations to governments, have done this too, quietly and less publicly, but BeyondCorp has done it openly and shown its solution to the world. The design approach, endpoint plus (public) cloud displacing the cloistered corporate network, is the essential concept.

This changes the entire conversation about the endpoint, be it a laptop, desktop, workstation, or server: it is no longer subservient to the corporate/enterprise network. The endpoint truly is the last line of defense, and it must be secured, yet it must also report its activity.

Unlike the traditional perimeter security model, BeyondCorp doesn’t gate access to tools and services based on a user’s physical location or the originating network; instead, access policies are based on information about the device, its state, and its associated user. BeyondCorp considers both external and internal networks completely untrusted, and gates access to applications by dynamically asserting and enforcing levels, or “tiers,” of access.
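In code, a tier-based decision of this kind might look like the following minimal sketch. The tier names, device attributes, and policy table are assumptions for illustration, not Google’s actual policy:

```python
# Tier-based access gating sketch in the spirit of BeyondCorp: the decision
# depends on device state and user, never on the originating network.
# Tier names, device attributes, and the policy table are illustrative
# assumptions, not Google's actual access policy.

TIER_ORDER = ["untrusted", "basic", "privileged"]

def device_tier(device: dict) -> str:
    """Assign an access tier from device inventory attributes."""
    if not device.get("managed") or not device.get("disk_encrypted"):
        return "untrusted"
    if device.get("os_patched") and device.get("cert_valid"):
        return "privileged"
    return "basic"

# Minimum tier each application requires, regardless of network location.
APP_POLICY = {"wiki": "basic", "source_code": "privileged"}

def allow(app: str, device: dict) -> bool:
    """Grant access only if the device's tier meets the app's minimum tier."""
    required = APP_POLICY[app]
    return TIER_ORDER.index(device_tier(device)) >= TIER_ORDER.index(required)

managed_laptop = {"managed": True, "disk_encrypted": True,
                  "os_patched": True, "cert_valid": True}
unmanaged_byod = {"managed": False}
```

Note what is absent: no source IP, no VLAN, no VPN flag. That absence is the whole point of the model.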

By itself, this seems innocuous. But in reality it is a radical new design, and an imperfect one. The access criteria have moved from network addresses to device trust levels, and the network is heavily segmented by VLANs, rather than a centralized model with its potential for breaches, hacking, and human-level threats (the “soft chewy center”).

The good part of the story? Breaching the perimeter is very difficult for would-be attackers, and network pivoting becomes nearly impossible once past the reverse proxy (a mechanism commonly exploited by attackers today, which suggests firewalls do a better job of keeping the bad guys in than of letting the good guys out). The inverse of the model also holds: Google’s cloud servers, presumably tightly managed, sit inside the perimeter, while client endpoints are all out in the wild.

Google has made some good refinements to proven security approaches, especially 802.1X and RADIUS, and bundled them as the BeyondCorp architecture, including strong identity and access management (IAM).

Why Is This Important? What Are the Gaps?

Ziften believes in this approach because it emphasizes device trust over network trust. However, Google doesn’t specifically describe a device security agent or stress any form of client-side monitoring (apart from very strict configuration control). There may be reporting and forensics behind the scenes, but every organization should be conscious of this gap, because it’s a matter of when, not if, bad things will happen.

As Google’s BeyondCorp team reports: “Since implementing the initial phases of the Device Inventory Service, we’ve ingested billions of deltas from over 15 data sources, at a typical rate of about three million per day, totaling over 80 terabytes. Keeping historical data is essential in enabling us to understand the end-to-end lifecycle of a given device, track and analyze fleet-wide trends, and perform security audits and forensic investigations.”

This is a costly, data-heavy process with two drawbacks. First, while ultra-high-speed networks (like those at Google, universities, and research organizations) have ample bandwidth for this kind of telemetry without flooding the pipes, more typical enterprise and government networks do not; there it would cause significant user disruption.

Second, machines must have the horsepower to continuously collect and transmit data. While most employees would be delighted to have current developer-class workstations at their disposal, the cost of the hardware and of refreshing it regularly makes this prohibitive.

An Absence of Lateral Visibility

Very few products actually generate “enhanced” netflow, augmenting traditional network visibility with rich, contextual data.

Ziften’s patented ZFlow™ provides network flow details generated from the endpoint, information otherwise obtained through brute force (human labor) or expensive network appliances.
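Conceptually, endpoint-generated flow enrichment joins a conventional 5-tuple flow record with process-level context that only the endpoint can see. A minimal sketch; the field names and join key are assumptions for illustration, not ZFlow’s actual schema:

```python
# Sketch of endpoint-side flow enrichment: join a conventional 5-tuple
# flow record with process context captured on the endpoint. Field names
# and the join key are illustrative assumptions, not ZFlow's actual schema.

# A traditional flow record: who talked to whom, but nothing about why.
flow = {"src": "10.0.0.5", "sport": 49152, "dst": "203.0.113.9",
        "dport": 443, "proto": "tcp"}

# Context only an endpoint agent can attach: which process and user
# owned the local socket when the flow was observed.
endpoint_sockets = {
    ("10.0.0.5", 49152): {"process": "outlook.exe", "user": "alice"},
}

def enrich(flow: dict, sockets: dict) -> dict:
    """Return the flow record merged with endpoint context, if any is known."""
    ctx = sockets.get((flow["src"], flow["sport"]), {})
    return {**flow, **ctx}

enriched = enrich(flow, endpoint_sockets)
```

The enriched record answers the question a bare flow cannot: not just that 10.0.0.5 spoke to an external host on port 443, but which process, running as which user, opened that connection.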

ZFlow serves as connective tissue of sorts, extending and completing the end-to-end network visibility cycle and adding context to on-network, off-network, and cloud servers/endpoints, allowing security teams to make faster, better-informed, and more precise decisions. In essence, Ziften’s services yield labor cost savings plus improved speed-to-discovery and time-to-remediation, with technology standing in for human effort.

For organizations moving to the cloud (as 56% plan to do by 2021, according to IDG Enterprise’s 2015 Cloud Survey), Ziften offers unmatched visibility into cloud servers to better monitor and secure the full infrastructure.

In Google’s environment, only corporate-owned devices (COPE) are allowed, crowding out bring-your-own-device (BYOD). This works for a company like Google that can issue new devices to all personnel: phone, tablet, laptop, and so on. Part of the reason is that identity is vested in the device itself, in addition to the usual user authentication. The device must meet Google requirements, with either a TPM or a software equivalent of one, to hold the X.509 certificate used to validate device identity and to facilitate device-specific traffic encryption. Several agents must run on each endpoint to validate the device assertions called out in the access policy; this is where Ziften would need to partner with the systems management agent vendor, since agent cooperation is likely vital to the process.
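A device-identity check of this kind reduces to comparing a fingerprint of the certificate the device presents against its enrollment record. A minimal sketch using a SHA-256 fingerprint; the inventory layout, device ID, and certificate bytes are illustrative assumptions, not Google’s actual Device Inventory Service:

```python
# Sketch of validating device identity by fingerprinting the presented
# certificate (its DER bytes) against an enrollment inventory. The inventory
# layout, device ID, and certificate bytes are illustrative assumptions.
import hashlib

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate's DER encoding."""
    return hashlib.sha256(cert_der).hexdigest()

# Inventory maps device IDs to the fingerprint of the TPM-held certificate
# recorded at enrollment time.
INVENTORY = {"laptop-0042": fingerprint(b"example-der-bytes")}

def device_is_known(device_id: str, presented_cert: bytes) -> bool:
    """True only if the presented cert matches the enrolled fingerprint."""
    expected = INVENTORY.get(device_id)
    return expected is not None and expected == fingerprint(presented_cert)

genuine = device_is_known("laptop-0042", b"example-der-bytes")
forged = device_is_known("laptop-0042", b"some-other-bytes")
```

In practice the TPM’s job is to make the private key behind that certificate non-exportable, so a matching fingerprint implies possession of the enrolled hardware rather than a copied credential.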

Summary

In summary, Google has built a world-class solution, but its applicability and practicality are limited to organizations like Alphabet.

Ziften brings the same level of operational visibility and security protection to the masses, using a lightweight agent, metadata/network flow monitoring (from the endpoint), and a best-in-class console. For organizations with specialized needs or incumbent tools, Ziften provides both an open REST API and an extension framework (to augment data ingestion and trigger response actions).

This delivers the benefits of the BeyondCorp model to the masses while conserving network bandwidth and endpoint computing resources. Since organizations will be slow to move entirely away from the corporate network, Ziften partners with firewall and SIEM vendors.

Finally, the security landscape is increasingly shifting to managed detection & response (MDR). Managed security service providers (MSSPs) offer basic monitoring and management of firewalls, gateways, and perimeter intrusion detection, but this is insufficient: they lack both the skills and the technology.

Ziften’s solution has been evaluated, integrated, approved, and deployed by a number of the emerging MDR providers, illustrating the readiness and versatility of the Ziften platform to play a crucial role in remediation and incident response.