Charles Leaver – This Is Why Edit Distance Is Used In Detection Part 2

Written by Jesse Sampson and presented by Charles Leaver, CEO of Ziften


In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits it takes to make two text strings match). Now let’s look at how we can use edit distance to hunt for malicious domains, and at how edit distance features can be combined with other domain features to identify suspicious activity.
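As a quick worked example, take the hypothetical lookalike "zifden.com": turning it into "ziften.com" requires a single substitution (d to t), so the edit distance between the two strings is 1. Using the Vertica editDistance function referenced at the end of this post, that check is a one-liner:

SELECT editDistance('zifden.com', 'ziften.com') AS dist;  -- returns 1 (one substitution)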

Here is the Background

What are bad actors playing at with malicious domains? Sometimes it is as simple as using a close spelling of a common domain to trick careless users into viewing advertisements or picking up adware. Legitimate sites are gradually catching on to this technique, sometimes called typosquatting.

Other malicious domains are the product of domain generation algorithms (DGAs), which can be used to do all kinds of nasty things, like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases; let’s see how. First, we’ll exclude common domains, since these are generally safe. Conveniently, a list of common domains also provides a baseline for detecting anomalies; one excellent source is Quantcast. For this discussion, we will stick to domain names and avoid sub-domains (e.g. ziften.com, not www.ziften.com).
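As a sketch of that cleaning step (the table and column names here are illustrative, not from the original analysis), one might lowercase each observed fully qualified domain name, keep only its last two labels, and record the top-level domain for grouping later. Note that this naive split mishandles multi-part TLDs such as .co.uk:

-- raw_dns(fqdn): hypothetical table of domain names observed in the wild
CREATE TABLE candidates AS
SELECT DISTINCT
       LOWER(SPLIT_PART(fqdn, '.', REGEXP_COUNT(fqdn, '\.'))
             || '.' ||
             SPLIT_PART(fqdn, '.', REGEXP_COUNT(fqdn, '\.') + 1)) AS domain,
       LOWER(SPLIT_PART(fqdn, '.', REGEXP_COUNT(fqdn, '\.') + 1)) AS tld
FROM raw_dns
WHERE fqdn LIKE '%.%';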

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its potential neighbors in the same top-level domain (the last part of a domain name – classically .com, .org, etc., though now it can be almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domain names that are one step away from their nearest neighbor, we can easily spot typo-ed domains. By finding domain names far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domains in edit distance space.
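For reference, one common way to normalize is to divide the raw edit distance by the length of the longer string, so the same number of edits counts for less between longer names (Part 1 has the precise definition used in this analysis; this sketch may differ in detail):

-- Normalized edit distance as a fraction of the longer string's length
SELECT editDistance(a, b)::FLOAT / GREATEST(LENGTH(a), LENGTH(b)) AS norm_dist
FROM (SELECT 'zifden.com' AS a, 'ziften.com' AS b) pair;  -- 1 edit / 10 chars = 0.1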

What were the Results?

Let’s take a look at how these results appear in practice. Use caution when browsing to these domains, since they may contain malicious content!

Here are a few possible typos. Typosquatters target well-known domains since there are more chances somebody will visit them. Several of these are flagged as suspicious by our threat feed partners, but there are some false positives as well, with charming names like “wikipedal”.

Here are some strange-looking domain names that sit far from their neighbors.

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: the rank of the nearest neighbor, the distance from that neighbor, and a flag for edit distance 1 from the neighbor, indicating a risk of typo shenanigans. Other features that would pair well with these are lexical features such as word and n-gram distributions, entropy, and string length – and network features like the number of failed DNS requests.
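A sketch of how those three features might be assembled, assuming a hypothetical pairwise table pairs(domain, neighbor, neighbor_rank, dist) built along the lines of the query in the next section (neighbor_rank standing in for the neighbor’s popularity rank on the baseline list):

-- One row per candidate domain: its nearest neighbor plus the three model features
SELECT domain,
       neighbor,
       neighbor_rank,                                         -- rank of nearest neighbor
       dist AS nn_distance,                                   -- distance from neighbor
       CASE WHEN dist = 1 THEN 1 ELSE 0 END AS possible_typo  -- edit distance 1 flag
FROM (
    SELECT p.*,
           ROW_NUMBER() OVER (PARTITION BY domain
                              ORDER BY dist, neighbor_rank) AS rn
    FROM pairs p
) ranked
WHERE rn = 1;  -- keep only the closest neighbor for each domain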

Simple Code that you can Play Around with

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).
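A minimal sketch along those lines, assuming the candidates table from the cleaning step above and a hypothetical common_domains(domain, tld, rank) table loaded from the Quantcast list:

-- For each observed domain, the edit distance to its nearest common neighbor
SELECT c.domain,
       MIN(editDistance(c.domain, b.domain)) AS nn_distance
FROM candidates c
JOIN common_domains b
  ON  c.tld = b.tld              -- compare only within the same top-level domain
  AND c.domain <> b.domain       -- a domain is not its own neighbor
GROUP BY c.domain
ORDER BY nn_distance DESC;       -- far-from-neighbor anomalies first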

Charles Leaver – Manage Your Environment Properly Or Suffer Security Problems And Vice Versa

Written by Charles Leaver, CEO of Ziften


If your enterprise computing environment is not properly managed, there is no way it can be fully secured. And you can’t effectively manage those complex enterprise systems unless you are confident that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to begin. Should you start with security? Or should you start with system management? That’s the wrong way to frame it. Think of it instead like a Reese’s Peanut Butter Cup: it’s not chocolate first, and it’s not peanut butter first. Instead, the two are blended together – and treated as a single delicious treat.

Many organizations – most organizations, I would argue – are structured with an IT management team reporting to a CIO, and a security management team reporting to a CISO. The CIO’s team and the CISO’s team barely know each other, speak to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team’s radar.

That’s bad, because both the IT and security teams are forced to make assumptions. The IT team assumes that everything is secure unless someone tells them otherwise. For instance, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and applications are up to date, that patches have been applied, and so on.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be valid.

And again, you can’t have a secure environment unless that environment is properly managed – and you can’t manage that environment unless it’s secure. Or to put it another way: an insecure environment makes anything you do in the IT organization suspect and unreliable, and means that you can’t know whether the information you are seeing is accurate or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How do you bridge that gap? It sounds simple, but it can be challenging: make sure there is an umbrella covering both the IT and security teams. Both IT and security report to the same person or organization somewhere up the chain. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let’s say it’s the CFO.

If the business doesn’t have a secure environment, and there’s a breach, the value of the brand and the company can be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren’t well managed, the business can’t work efficiently, and the value drops. As we’ve discussed, if it’s not well managed, it can’t be secured, and if it’s not secure, it can’t be well managed.

The fiduciary responsibility of senior executives (like the CFO) is to protect the value of organizational assets, and that means making sure IT and security talk to each other, understand each other’s priorities, and, where possible, see the same reports and data – filtered and presented so as to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in designing our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that could undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our organization’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software, and users. We can’t operate at peak performance, and with full fiduciary responsibility, otherwise.

Charles Leaver – Remote Connections Are Increasing So Constant Visibility Of The Endpoint Is Crucial

Written by Roark Pollock and presented by Charles Leaver, CEO of Ziften


A survey recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their time in 2016. Gallup, which has been surveying telecommuting trends in the USA for almost a decade, continues to see more employees working outside of traditional offices, and an increasing number of them doing so for more days of the week. Naturally, the number of connected devices the average employee uses has jumped as well, which helps drive the convenience of, and preference for, working away from the office.

This freedom certainly makes for happier, and hopefully more productive, employees, but the challenges these trends present for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work no matter where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operations teams blind to potential issues and threats.

The mainstreaming of these trends makes it much harder for IT and security teams to restrict what was once considered higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to comprehensively track user, device, application, and network activity, spot anomalies and inappropriate actions, and enforce the appropriate response or fix, regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based applications and assets, and keep backup USB or network-attached storage (NAS) drives at home, further amplifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity, which no longer necessarily terminates in the corporate network. Offline activity presents the most extreme example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of little use when a device is operating offline. Deploying a proper endpoint agent is critical to ensure the capture of all important security and system data.

As an example of the kinds of offline activity that can be caught, a customer was recently able to track, flag, and report unusual behavior on a company laptop: a senior executive moved huge quantities of endpoint data to an unauthorized USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuing to monitor the device, applications, and user behavior even while the endpoint was disconnected gave the customer visibility they never had before.

Does your organization have continuous monitoring and visibility when employee endpoints are not connected? If so, how do you achieve it?

Charles Leaver – When Machine Learning Is Introduced These Consequences Will Follow

Written by Roark Pollock and presented by Charles Leaver, CEO of Ziften


If you are a student of history, you will notice many examples of serious unintended consequences accompanying the introduction of new technology. It often surprises people that new technologies can be put to dubious uses in addition to the positive purposes for which they are brought to market, but it happens all the time.

For instance: train robbers using dynamite (“Think ya used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to hide malware from security controls has become more common, precisely because the legitimate use of SSL has made the technique more effective.

Since new technology is so often appropriated by bad actors, we have no reason to believe this won’t be true of the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are likely a few ways in which attackers could use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life because of adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic in order to “poison” the machine learning model being built from that traffic. The goal of the attacker would be to trick the defender’s machine learning tool into misclassifying traffic, or into generating such a high level of false positives that the defenders would dial back the fidelity of the alerts.

Machine learning will likely also be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to hone their social engineering attacks (e.g., spear phishing). Automating the effort it takes to tailor a social engineering attack to its target is especially troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.

Expect the kinds of breaches that deliver ransomware payloads to rise sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a silver bullet. It must be understood that attackers are actively working on evasion techniques against machine learning based detection solutions, while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further intensifying the need for automated incident response capabilities.