Charles Leaver – Don’t Struggle With The WannaCry Ransomware Issue – Ziften Can Help

Written By Michael Vaughn And Presented By Charles Leaver Ziften CEO

 

Answers To Your Concerns About WannaCry Ransomware

The WannaCry ransomware attack has infected more than 300,000 computers in 150 countries so far by exploiting vulnerabilities in Microsoft’s Windows operating system.
In this brief video, Chief Data Scientist Dr. Al Hartmann and I talk about the nature of the attack, as well as how Ziften can help organizations protect themselves against the vulnerability known as “EternalBlue.”

As discussed in the video, the problem with the Server Message Block (SMB) file-sharing service is that it ships with most Windows operating systems and is found in most environments. However, we make it simple to determine which systems in your environment have or haven’t been patched yet. Importantly, Ziften Zenith can also remotely disable the SMB file-sharing service entirely, buying organizations valuable time to make sure those machines are properly patched.
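
For administrators who want to check individual machines by hand while a broader rollout is underway, a minimal sketch along these lines can help (this is an illustrative script, not Ziften’s tooling; it assumes a Windows endpoint with PowerShell available, and the MS17-010 KB number varies by OS version):

    import subprocess

    def run_ps(command):
        """Run a PowerShell command and return its trimmed textual output."""
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True)
        return result.stdout.strip()

    # Is the legacy SMBv1 protocol still enabled on this machine?
    smb1_enabled = run_ps("(Get-SmbServerConfiguration).EnableSMB1Protocol")

    # Look for one of the MS17-010 hotfixes (KB4012212 applies to Windows 7 /
    # Server 2008 R2; other OS versions use different KB numbers).
    patch = run_ps("Get-HotFix -Id KB4012212 -ErrorAction SilentlyContinue")

    # If the machine is unpatched, disabling SMBv1 buys time until patching.
    if smb1_enabled == "True" and not patch:
        run_ps("Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force")
        print("SMBv1 disabled pending MS17-010 patching")
    else:
        print("SMBv1 enabled:", smb1_enabled, "| MS17-010 hotfix found:", bool(patch))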

If you’re curious about Ziften Zenith, our 20-minute demo includes a consultation with our specialists on how we can help your organization avoid the worst digital disaster to strike the internet in years.

Charles Leaver – 10 Ways To Assess Next Gen Endpoint Security Services

Written By Roark Pollock And Presented By Charles Leaver CEO Ziften

 

The Endpoint Security Buyer’s Guide

The most common entry point for an advanced persistent attack or a breach is the endpoint. Endpoints are certainly the entry point for most ransomware and social engineering attacks. The use of endpoint protection tools has long been considered a best practice for securing endpoints. Unfortunately, those tools aren’t keeping up with today’s threat environment. Advanced threats, and truth be told, even less advanced threats, are frequently more than adequate for tricking the average employee into clicking something they shouldn’t. So organizations are evaluating a wide variety of next generation endpoint security (NGES) solutions.

With this in mind, here are ten tips to consider if you’re evaluating NGES solutions.

Tip 1: Start with the end first

Don’t let the tail wag the dog. A threat mitigation strategy should always start by assessing problems and then looking for potential solutions to those problems. But all too often we get captivated by a “shiny” new technology (e.g., the latest silver bullet) and we wind up trying to shoehorn that technology into our environment without fully examining whether it solves an understood and well-defined problem. So what problems are you trying to solve?

– Is your current endpoint security tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements mandating constant endpoint monitoring?
– Are you trying to reduce the time and costs of incident response?

Define the problems to address, and then you’ll have a yardstick for success.

Tip 2: Know your audience. Who will be using the tool?

Understanding the problem that has to be solved is a crucial first step toward understanding who owns the problem and who would (operationally) own the solution. Every operational team has its strengths, weaknesses, preferences and biases. Define who will need to use the solution, and who else could benefit from its use. It could be:

– Security team,
– IT operations,
– The governance, risk & compliance (GRC) group,
– Helpdesk or end user support team,
– Or perhaps the server team, or a cloud operations team?

Tip 3: Know what you mean by endpoint

Another commonly neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in far more varieties than before.

Sure, we want to protect desktops and laptops, but what about mobile devices (e.g. smartphones and tablets), virtual endpoints, cloud based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, naturally, come in numerous flavors, so platform support has to be addressed too (e.g. Windows only, Mac OS X, Linux, etc.). Also, consider support for endpoints even when they are working remotely or offline. What are your needs and what are “nice to haves”?

Tip 4: Start with a foundation of all-the-time visibility

Continuous visibility is a foundational capability for addressing a host of security and operational management issues on the endpoint. The old saying holds true: you can’t manage what you can’t see or measure. Further, you can’t secure what you can’t properly manage. So it all starts with continuous, all-the-time visibility.

Visibility is foundational to Security and Management

And think about what visibility means. Enterprises need a single source of truth that, at a minimum, monitors, stores, and analyzes the following (a simple sketch of such a record follows the list):

– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and usage patterns
– Binary data – characteristics of installed binaries
– Process data – tracking details and statistics
– Network connectivity data – statistics and internal behavior of network activity on the host
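
To make that concrete, here is a minimal sketch of what a single endpoint telemetry record covering those six data types might look like (the field names are illustrative, not Ziften’s actual schema):

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Dict, List

    @dataclass
    class EndpointRecord:
        """One observation from one endpoint, spanning the six data types above."""
        timestamp: datetime
        hostname: str
        system: Dict[str, str]    # events, logs, hardware state, file system details
        user: Dict[str, str]      # activity log entries and behavior patterns
        applications: List[str]   # installed applications and usage counters
        binaries: List[str]       # attributes/hashes of installed binaries
        processes: List[Dict]     # per-process tracking details and statistics
        network: List[Dict]       # per-connection statistics for this host

    record = EndpointRecord(
        timestamp=datetime.utcnow(),
        hostname="laptop-042",
        system={"os": "Windows 10", "last_boot": "2017-05-12T08:03:00Z"},
        user={"logged_in": "jsmith", "session": "console"},
        applications=["Chrome 58.0", "Office 2016"],
        binaries=["chrome.exe sha256:ab12f3"],
        processes=[{"pid": 4312, "image": "powershell.exe", "cpu_pct": 1.2}],
        network=[{"dst": "52.1.2.3", "dport": 443, "bytes_out": 18234}],
    )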

Tip 5: Know where your visibility data will be stored

Endpoint visibility data can be stored and analyzed on premises, in the cloud, or in some combination of both. There are advantages to each. The right approach varies, but is generally dictated by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.

Know if your organization needs on-premise data retention

Know whether your company allows cloud based data retention and analysis, or whether you are constrained to on-premise options only. At Ziften, 20-30% of our clients keep data on-premise purely for regulatory reasons. However, if it is legally an option, the cloud can offer cost benefits (among others).

Tip 6: Know what is on your network

Understanding the problem you are trying to solve requires understanding the assets on the network. We have found that as many as 30% of the endpoints we initially discover on clients’ networks are unmanaged or unknown devices. This obviously creates a big blind spot. Minimizing this blind spot is a critical best practice. In fact, SANS Critical Security Controls 1 and 2 are to perform an inventory of authorized and unauthorized devices and software connected to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and utilization, and carry out ongoing, continuous discovery.

Tip 7: Know where you are exposed

After determining which devices you need to watch, you need to make sure they are running in up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends enabling continuous vulnerability assessment and remediation of these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it’s even better if they can help enforce that posture.

Also look for solutions that deliver continuous vulnerability assessment and remediation.

Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a substantial number of security issues and removes a lot of back-end pressure on the IT and security operations teams.

Tip 8: Cultivate continuous detection and response

A crucial end goal for many NGES solutions is supporting continuous device state monitoring to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.

Look for NGES solutions that offer all-the-time, continuous threat detection that leverages a network of global threat intelligence and multiple detection methods (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize detected threats and/or issues and provide workflows with contextual system, application, user, and network data. This can help automate the appropriate response or next actions. Lastly, understand all the response actions that each solution supports – and look for a solution that provides remote access that is as close as possible to “sitting at the endpoint keyboard”.

Tip 9: Think about forensic data collection

In addition to incident response, organizations need to be prepared to address the need for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be essential to any investigation. So look for solutions that maintain historical data that allows you to:

– Trace lateral threat movement through the network over time,
– Pinpoint data exfiltration attempts,
– Identify the source of breaches, and
– Determine appropriate remediation actions.

Tip 10: Take down the walls

IBM’s security team, which supports an excellent ecosystem of security partners, estimates that the average enterprise has 135 security tools in place and is working with 40 security vendors. IBM customers certainly skew toward large enterprises, but it’s a common refrain (complaint) from organizations of all sizes that security products don’t integrate well enough.

And the complaint is not simply that security products don’t play well with other security products, but also that they don’t always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations have to consider these (and other) integration points along with the vendor’s willingness to share raw data, not just metadata, through an API.

Bonus Tip 11: Plan for customizations

Here’s a bonus tip. Assume that you’ll want to customize that shiny new NGES solution soon after you get it. No solution will meet all of your requirements right out of the box, in default configurations. Find out how the solution supports the following (a simple illustration follows the list):

– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this then that) functionality.
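
As a purely generic illustration of the IFTTT idea (this does not reflect any particular vendor’s API; the event shape and rule are hypothetical), a rule is just a condition paired with an action over collected endpoint data:

    # Hypothetical event shape: {"host": ..., "user": ..., "parent": ..., "image": ...}
    def office_spawns_shell(event):
        """Condition: an Office process launching PowerShell, a common phishing tell."""
        return (event["parent"].lower() in {"winword.exe", "excel.exe"}
                and event["image"].lower() == "powershell.exe")

    def alert_security_team(event):
        """Action: in practice this might open a ticket or call a webhook."""
        print(f"ALERT on {event['host']}: {event['parent']} spawned {event['image']}")

    RULES = [(office_spawns_shell, alert_security_team)]

    def apply_rules(event):
        """If this (condition) then that (action)."""
        for condition, action in RULES:
            if condition(event):
                action(event)

    apply_rules({"host": "laptop-042", "user": "jsmith",
                 "parent": "WINWORD.EXE", "image": "powershell.exe"})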

You know you’ll want new paint or new wheels on that NGES solution soon – so make sure it will support your future customization projects easily enough.

Look for support for simple customizations in your NGES solution

Follow most of these tips and you’ll undoubtedly avoid many of the common mistakes that plague others in their evaluations of NGES solutions.

Charles Leaver – Protection From End To End Is Best Done By Ziften

Written By Ziften CEO Charles Leaver

 

Do you want to manage and protect your endpoints, your data center, your network and the cloud? Ziften has the right solution for you. We gather data, and allow you to correlate and use that data to make decisions – and stay in control of your enterprise.

The information that we receive from everything on the network can make a real-world difference. Consider the suggestion that the 2016 U.S. elections were influenced by hackers in another country. If that’s the case, hackers can do almost anything – and the idea that we’ll accept that as the status quo is simply ridiculous.

At Ziften, we believe the best way to fight those threats is with greater visibility than you have ever had. That visibility crosses the whole enterprise, and connects all the significant players together. On the back end, that’s physical and virtual servers in the cloud and in the data center. That’s infrastructure and applications and containers. On the other side, it’s laptops and desktops, no matter where and how they are connected.

End to end – that’s the thinking behind everything at Ziften. From endpoint to cloud, all the way from a browser to a DNS server. We tie all that together with all the other components to offer your business a total solution.

We also capture and store real-time data for up to 12 months to let you know what’s happening on the network right now, and to offer historical trend analysis and warnings if something changes.

That lets you spot IT faults and security problems right away, and also ferret out the root causes by looking back in time to uncover where a breach or fault may have first occurred. Active forensics are an absolute necessity in this business: after all, the place where a fault or breach triggered an alarm may not be where the issue started – or where a hacker is operating.

Ziften provides your security and IT teams with the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? Found. Rogue devices? Discovered. Off-network penetration? Detected. Obsolete firmware? Unpatched applications? All discovered. We’ll not just help you find the problem, we’ll help you fix it, and make sure it stays fixed.

End-to-end IT and security management. Real-time and historical active forensics. In the cloud, offline and on-site. Incident detection, containment and response. We’ve got it all covered. That’s what makes Ziften so much better.

Charles Leaver – Monitoring Of Activities In The Cloud Is Now Possible With Enhanced NetFlow

Written by Roark Pollock and Presented by Ziften CEO Charles Leaver

 

According to Gartner, the public cloud services market exceeded $208 billion last year (2016). This represented about a 17% increase year over year. Pretty good considering the ongoing concerns most cloud customers still have regarding data security. Another particularly interesting Gartner finding is the common practice by cloud customers of contracting services from multiple public cloud providers.

According to Gartner, “most businesses are currently utilizing a combination of cloud services from various cloud service providers”. While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create extra complexity in monitoring activity across an organization’s increasingly fragmented IT landscape.

While some providers offer better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations have to understand and resolve the visibility issues associated with moving to the cloud, regardless of the cloud provider or providers they work with.

Unfortunately, the ability to track application and user activity, and network communications, from each VM or endpoint in the cloud is limited.

Regardless of where computing resources reside, companies must answer the question, “Which users, machines, and applications are communicating with each other?” Organizations need visibility across the infrastructure so that they can:

  • Quickly identify and prioritize problems
  • Speed root cause identification and analysis
  • Lower the mean time to resolve issues for end users
  • Quickly identify and eliminate security threats, reducing overall dwell times.

Conversely, poor visibility or poor access to visibility data can decrease the effectiveness of existing management and security tools.

Companies that are used to the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be dissatisfied with their public cloud options.

What has been missing is a simple, ubiquitous, and elegant solution like NetFlow for public cloud infrastructure.

NetFlow, of course, has had roughly twenty years to become a de facto standard for network visibility. A typical deployment involves the monitoring of traffic and aggregation of flows at network choke points, the collection and storage of flow data from multiple collection points, and the analysis of this flow data.

Flows include a basic set of source and destination IP addresses plus port and protocol information that is typically collected from a router or switch. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
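
As a rough illustration (the fields are simplified; real NetFlow v5/v9 and IPFIX templates carry more), a flow record boils down to the classic 5-tuple plus counters:

    from dataclasses import dataclass

    @dataclass
    class FlowRecord:
        """Simplified NetFlow-style record: the 5-tuple plus basic counters."""
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: int   # e.g. 6 = TCP, 17 = UDP
        packets: int
        bytes: int

    flow = FlowRecord("10.0.0.5", "93.184.216.34", 52114, 443, 6,
                      packets=42, bytes=18650)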

Most IT staffs, especially networking and some security teams, are extremely comfortable with the technology.

But NetFlow was designed to solve what has become a rather limited problem, in the sense that it only collects network information and does so at a limited number of possible locations.

To make better use of NetFlow, two essential changes are needed.

NetFlow to the edge: First, we have to expand the useful deployment scenarios for NetFlow. Instead of only collecting NetFlow at network choke points, let’s broaden flow collection to the network edge (cloud, servers and clients). This would considerably expand the overall view that any NetFlow analytics provide.

This would let organizations augment and take advantage of existing NetFlow analytics tools to eliminate the ever-increasing blind spot around public cloud activity.

Rich, contextual NetFlow: Second, we have to use NetFlow for more than basic network visibility.

Instead, let’s use an extended version of NetFlow that includes data on the user, device, application, and binary responsible for each monitored network connection. That would allow us to rapidly tie every network connection back to its source.
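
Building on the simple flow record sketched earlier, a context-enriched record might look like the following (the added fields are illustrative; this is not ZFlow’s actual schema):

    from dataclasses import dataclass

    @dataclass
    class ContextFlowRecord:
        """A flow record enriched with endpoint context for attribution."""
        # Standard flow fields: the 5-tuple plus a byte counter
        src_ip: str
        dst_ip: str
        src_port: int
        dst_port: int
        protocol: int
        bytes: int
        # Added endpoint context: who and what were behind the connection
        hostname: str        # device that originated the connection
        username: str        # logged-in user that owned the process
        process: str         # application, e.g. "chrome.exe"
        binary_sha256: str   # hash of the binary, for reputation lookups

    flow = ContextFlowRecord("10.0.0.5", "93.184.216.34", 52114, 443, 6, 18650,
                             hostname="laptop-042", username="jsmith",
                             process="chrome.exe", binary_sha256="ab12f3")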

In fact, these two modifications to NetFlow are precisely what Ziften has accomplished with ZFlow. ZFlow offers an expanded version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. Over and above standard NetFlow / Internet Protocol Flow Information eXport (IPFIX) visibility of the network, ZFlow provides greater visibility with the addition of information on the user, device, application and binary for each network connection.

Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, removing traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.

Charles Leaver – This Is Why Edit Distance Is Used In Detection Part 2

Written By Jesse Sampson And Presented By Charles Leaver CEO Ziften

 

In the first post about edit distance, we looked at hunting for malicious executables with edit distance (i.e., the number of character edits it takes to make two text strings match). Now let’s look at how we can use edit distance to hunt for malicious domains, and how edit distance features can be combined with other domain features to identify suspicious activity.

Here is the Background

What are bad actors up to with malicious domains? It could be merely using a close misspelling of a common domain to trick careless users into viewing ads or picking up adware. Legitimate sites are gradually catching on to this technique, sometimes called typosquatting.

Other malicious domains are the result of domain generation algorithms (DGAs), which can be used for all sorts of wicked things like evading countermeasures that block known compromised sites, or overwhelming domain name servers in a distributed denial-of-service (DDoS) attack. Older variants use randomly generated strings, while more advanced ones add tricks like injecting common words, further confusing defenders.

Edit distance can help with both use cases: let’s see how. First, we’ll exclude common domains, since these are generally safe. And a list of common domains provides a baseline for detecting anomalies. One excellent source is Quantcast. For this discussion, we will stick to domain names and avoid subdomains (e.g. ziften.com, not www.ziften.com).

After data cleaning, we compare each candidate domain name (input data observed in the wild by Ziften) to its possible neighbors in the same top-level domain (the tail end of a domain name – classically .com, .org, etc., and now almost anything). The basic task is to find the nearest neighbor in terms of edit distance. By finding domain names that are one edit away from their nearest neighbor, we can easily spot typo’d domain names. By finding domain names far from their nearest neighbor (the normalized edit distance we introduced in Part 1 is useful here), we can also find anomalous domain names in the edit distance space.
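
Here is a minimal sketch of that workflow (the short whitelist stands in for a real top-domains list such as Quantcast’s, and the thresholds are illustrative only):

    def edit_distance(a, b):
        """Levenshtein distance: minimum single-character edits to turn a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    COMMON = ["google", "wikipedia", "amazon", "facebook", "youtube"]  # stand-in list

    def nearest_neighbor(name):
        """Return (closest common domain, raw distance, normalized distance)."""
        best = min(COMMON, key=lambda c: edit_distance(name, c))
        dist = edit_distance(name, best)
        norm = dist / max(len(name), len(best))   # normalized distance from Part 1
        return best, dist, norm

    for candidate in ["gooogle", "xkqzvbtw"]:
        neighbor, dist, norm = nearest_neighbor(candidate)
        label = "possible typo" if dist == 1 else ("anomalous" if norm > 0.6 else "ok")
        print(candidate, "->", neighbor, dist, round(norm, 2), label)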

What were the Results?

Let’s take a look at how these results appear in practice. Use caution when browsing to these domains, since they might contain malicious content!

Here are a few possible typos. Typosquatters target well-known domains since there are more chances somebody will visit. Several of these are suspicious according to our threat feed partners, but there are some false positives as well, with charming names like “wikipedal”.

Here are some weird looking domain names far from their neighbors.

So now we have produced two useful edit distance metrics for hunting. Not only that, we have three features to potentially add to a machine learning model: rank of the nearest neighbor, distance from the nearest neighbor, and an edit distance of 1 from the neighbor, indicating a risk of typo shenanigans. Other features that might pair well with these are other lexical features such as word and n-gram distributions, entropy, and the length of the string – and network features like the number of failed DNS requests.

Simple Code that you can Play Around with

Here is a simplified version of the code to play with! It was developed on HP Vertica, but this SQL should work on most modern databases. Note that the Vertica editDistance function may differ in other implementations (e.g. levenshtein in Postgres or UTL_MATCH.EDIT_DISTANCE in Oracle).

Charles Leaver – Manage Your Environment Properly Or Suffer Security Problems And Vice Versa

Written by Charles Leaver Ziften CEO

 

If your enterprise computing environment is not properly managed, there is no way it can be fully secured. And you can’t effectively manage those complex enterprise systems unless there’s confidence that they are secure.

Some might call this a chicken-and-egg situation, where you don’t know where to begin. Should you start with security? Or should you start with systems management? That’s the wrong approach. Think of it instead like Reese’s Peanut Butter Cups: it’s not chocolate first. It’s not peanut butter first. Instead, both are blended together – and treated as a single delicious treat.

Many organizations, I would argue most organizations, are structured with an IT management department reporting to a CIO, and a security management team reporting to a CISO. The CIO team and the CISO team don’t know each other, talk to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue or an alert for one team flies completely under the other team’s radar.

That’s bad, because both the IT and security teams have to make assumptions. The IT team believes that everything is secure unless someone tells them otherwise. For instance, they assume that devices and applications have not been compromised, users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, operating systems and applications are fully up to date, patches have been applied, and so on.

Since the CIO and CISO teams aren’t talking to each other, don’t understand each other’s roles and priorities, and aren’t using the same tools, those assumptions may not be valid.

And again, you cannot have a secure environment unless that environment is properly managed – and you cannot manage that environment unless it’s secure. Or to put it another way: an insecure environment makes anything you do in the IT organization suspect and unreliable, and means that you can’t know whether the information you are seeing is correct or manipulated. It may all be fake news.

How to Bridge the IT / Security Gap

How to bridge that gap? It sounds easy but it can be challenging: make sure that there is an umbrella covering both the IT and security teams. Both IT and security report to the same individual or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument here, let’s say it’s the CFO.

If the business doesn’t have a secure environment and there’s a breach, the value of the brand and the company can be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren’t well managed, the business can’t work efficiently, and the value drops. As we have discussed, if it’s not well managed, it can’t be secured, and if it’s not secured, it can’t be well managed.

The fiduciary responsibility of senior executives (like the CFO) is to protect the value of organizational assets, and that means making sure IT and security talk to each other, understand each other’s priorities, and, where possible, see the same reports and data – filtered and presented to be meaningful to their specific areas of responsibility.

That’s the thinking we adopted in the design of our Zenith platform. It’s not a security management tool with IT capabilities, and it’s not an IT management tool with security capabilities. No, it’s a Peanut Butter Cup, designed equally around chocolate and peanut butter. To be less confectionery about it, Zenith is an umbrella that gives IT teams what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.

We need to ensure that our organization’s IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software and users. We can’t operate at peak performance, and with full fiduciary responsibility, otherwise.

Charles Leaver – Remote Connections Are Increasing So Constant Visibility Of The Endpoint Is Crucial

Written By Roark Pollock And Presented By Charles Leaver Ziften CEO

 

A survey recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their work time in 2016. Gallup, which has been surveying telecommuting trends in the USA for almost a decade, continues to see more employees working outside of conventional offices, and an increasing number of them doing so for more days of the week. And, naturally, the number of connected devices that the average employee uses has jumped too, which helps drive the convenience of and preference for working away from the office.

This freedom certainly makes for happier, and it is hoped more productive, employees, but the issues these trends present for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work no matter where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operational teams blind to potential issues and threats.

The mainstreaming of these trends makes it much more difficult for IT and security teams to restrict what was previously deemed higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams need to be able to thoroughly track user, device, application, and network activity, find anomalies and inappropriate actions, and enforce appropriate action or fixes regardless of whether an endpoint is locally connected, remotely connected, or disconnected.

In addition, the fact that many employees now routinely access cloud-based applications and assets, and have backup USB or network attached storage (NAS) drives at home, further amplifies the need for endpoint visibility. Endpoint controls often provide the one and only record of remote activity that no longer necessarily terminates in the corporate network. Offline activity presents the most extreme example of the need for constant endpoint monitoring. Clearly, network controls and network monitoring are of little use when a device is operating offline. The installation of a proper endpoint agent is crucial to guarantee the capture of all important security and system data.

As an example of the kinds of offline activity that can be found, a client was recently able to track, flag, and report unusual behavior on a company laptop. A high-level executive moved huge quantities of endpoint data to an unauthorized USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see this unusual action and follow up appropriately. Continuing to monitor the device, applications, and user behavior even when the endpoint was disconnected gave the client visibility they never had before.
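
A rough sketch of the agent-side pattern at work in that example (purely illustrative, not Ziften’s implementation): events are spooled locally regardless of connectivity and shipped once the endpoint is back online.

    import json, os, time

    BUFFER_PATH = "offline_events.jsonl"   # hypothetical local spool file

    def record_event(event):
        """Always append to the local spool, whether the endpoint is online or not."""
        with open(BUFFER_PATH, "a") as spool:
            spool.write(json.dumps(event) + "\n")

    def flush_when_online(is_online, upload):
        """When connectivity returns, ship the spooled events and clear the spool."""
        if not is_online() or not os.path.exists(BUFFER_PATH):
            return
        with open(BUFFER_PATH) as spool:
            events = [json.loads(line) for line in spool]
        upload(events)                     # e.g. POST to the analytics back end
        os.remove(BUFFER_PATH)

    # The offline USB copy described above is simply one more spooled event:
    record_event({"ts": time.time(), "type": "file_copy",
                  "dest": "removable_drive", "bytes": 4_800_000_000})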

Does your organization have continuous tracking and visibility when employee endpoints are not connected? If so, how do you achieve this?

Charles Leaver – When Machine Learning Is Introduced These Consequences Will Follow

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

If you are a student of history, you will notice numerous examples of serious unintended consequences when new technology has been introduced. It often surprises people that new technologies can be put to dubious uses in addition to the positive purposes for which they are brought to market, but it happens all the time.

For instance, train robbers using dynamite (“You think you used enough dynamite there, Butch?”) or spammers using email. More recently, the use of SSL to conceal malware from security controls has become more common simply because the legitimate use of SSL has made this technique more effective.

Since new technology is typically appropriated by bad actors, we have no reason to believe this will not hold true for the new generation of machine learning tools that have reached the market.

To what degree will these tools be misused? There are likely a few ways in which attackers could use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life due to adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in reducing the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic with the intention of “poisoning” the machine learning model being built from that traffic. The goal of the attacker would be to trick the defender’s machine learning tool into misclassifying traffic, or to create such a high level of false positives that the defenders would dial back the fidelity of the alerts.
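
Here is a deliberately simplified sketch of that poisoning idea, assuming a toy anomaly detector that flags traffic above mean plus three standard deviations (all numbers are invented for illustration):

    import statistics

    def threshold(samples):
        """Toy anomaly detector: flag anything above mean + 3 standard deviations."""
        return statistics.mean(samples) + 3 * statistics.stdev(samples)

    # Baseline model trained on normal traffic volumes (requests per minute).
    normal = [100, 110, 95, 105, 102, 98, 108, 101]
    print("clean threshold:", round(threshold(normal)))        # about 118

    # The attacker floods the network with benign-looking high-volume traffic
    # during the training window, inflating what the model considers "normal".
    poisoned = normal + [900, 950, 1000, 980]
    print("poisoned threshold:", round(threshold(poisoned)))   # far higher

    # A later real attack at 600 requests/minute now slips under the threshold.
    attack = 600
    print("flagged by clean model:", attack > threshold(normal))       # True
    print("flagged by poisoned model:", attack > threshold(poisoned))  # False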

Machine learning will likely also be used as an attack tool. For example, some researchers predict that attackers will use machine learning techniques to sharpen their social engineering attacks (e.g., spear phishing). The automation of the effort it takes to tailor a social engineering attack is especially troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent financial incentive for hackers to adopt the techniques.

Expect the kinds of breaches that deliver ransomware payloads to rise sharply in 2017.

The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and increase operational tempo. While the technology will increasingly become a standard component of defense in depth strategies, it is not a magic bullet. It should be understood that attackers are actively working on evasion techniques around machine learning based detection solutions while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further sharpening the need for automated incident response capabilities.

Charles Leaver – Monitoring Certain Commands Can Reveal Threats

Written By Josh Harriman And Presented By Charles Leaver Ziften CEO

 

Repeating a theme when it comes to computer security is never a bad thing. As advanced as some cyber attacks can be, you really have to watch for and understand the use of common, readily available tools in your environment. These tools are typically used by your IT staff, would probably be whitelisted for use, and can be missed by security teams mining through all the legitimate applications that ‘might’ be run on an endpoint.

Once somebody has breached your network – which can be done in a variety of ways, and is another blog post for another day – signs of these programs/tools running in your environment should be examined to make sure they are being used properly.

A few of these tools/commands and their functions:

Netstat – Information on the current network connections on the system. This can be used to identify other systems within the network.

PowerShell – Built-in Windows command line utility that can carry out a variety of actions, such as gathering key information about the system, killing processes, adding or deleting files, and so on.

WMI – Another powerful built-in Windows utility. It can move files around and gather key system information.

Route Print – Command to view the local routing table.

Net – Adding domains/groups/users/accounts.

RDP (Remote Desktop Protocol) – Program to access systems remotely.

AT – Scheduled tasks.

Looking for activity from these tools can consume a lot of time and can sometimes be frustrating, but it is essential for figuring out who might be shuffling around in your environment. And not just what is happening in real time, but also in the past, to see the path someone may have taken through the network. It’s frequently not ‘patient zero’ that is the target; once attackers get a foothold, they can use these commands and tools to start their reconnaissance and finally move to a high value asset. It’s that lateral movement that you want to find.

You need the ability to collect the information discussed above and the means to sort through it to discover, alert on, and investigate this data. You can use Windows Events to monitor various changes on a device and then filter that down.
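
For example, here is a minimal sketch of that kind of filtering over exported process-creation events (the CSV columns and the approved-admin accounts are hypothetical; in practice the data would come from Windows event logs or an endpoint agent):

    import csv

    # Common administrative tools worth reviewing when run by unexpected accounts.
    WATCHED = {"netstat.exe", "powershell.exe", "wmic.exe", "route.exe",
               "net.exe", "mstsc.exe", "at.exe", "schtasks.exe"}
    APPROVED_ADMINS = {"it-svc", "patch-svc"}    # hypothetical IT service accounts

    def suspicious_rows(path):
        """Yield process-creation rows for watched tools run outside approved accounts."""
        with open(path, newline="") as f:
            # expected columns: timestamp, host, user, image, cmdline
            for row in csv.DictReader(f):
                image = row["image"].lower().rsplit("\\", 1)[-1]
                if image in WATCHED and row["user"].lower() not in APPROVED_ADMINS:
                    yield row

    for hit in suspicious_rows("process_events.csv"):
        print(hit["timestamp"], hit["host"], hit["user"], hit["image"], hit["cmdline"])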

Looking at some screenshots below from our Ziften console, you can see a quick difference between what our IT team used to push out changes in the network, versus someone running a very similar command themselves. This may be similar to what you would find when someone did that remotely, say via an RDP session.

An interesting side note in these screenshots is that in all cases the Process Status is ‘Terminated’. You would not see this detail during a live investigation or if you were not continuously collecting the data. But since we are gathering all the information continuously, you have this historical data to look at. If you were seeing the Status as ‘Running’, this could indicate that someone is live on that system right now.

This only scratches the surface of what you should be collecting and how to analyze what is right for your environment, which naturally will be different from that of others. But it’s a start. Malicious actors with intent to do you harm will normally look for the path of least resistance. Why try to build new and interesting tools when much of what they need is already there and ready to go?

Charles Leaver – Why Forensic Analysis And Incident Response Are Both Essential

Written By Roark Pollock And Presented By Ziften CEO Charles Leaver

 

There might be a joke somewhere about the forensic analyst who was late to the incident response party. There is the seed of a joke in the idea at least, but of course you have to understand the differences between incident response and forensic analysis to appreciate the potential for humor.

Incident response and forensic analysis are related disciplines that can leverage similar tools and related data sets but also have some crucial differences. There are four especially important differences between incident response and forensic analysis:

– Goals.
– Data requirements.
– Team skills.
– Benefits.

The difference in the goals of incident response and forensic analysis is possibly the most essential. Incident response is focused on determining a fast (i.e., near real-time) reaction to an immediate threat or issue. For example, a house is on fire, and the firefighters that show up to put that fire out are doing incident response. Forensic analysis is usually performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For instance, a fire investigator might examine the remains of that house fire to determine the overall damage to the property, the cause of the fire, and whether the source was such that other houses are also at risk. To put it simply, incident response is focused on containment of a threat or issue, while forensic analysis is focused on a complete understanding and thorough remediation of a breach.

A second major difference between the disciplines is the data resources required to achieve their objectives. Incident response teams typically only need short-term data sources, frequently no more than a month or so, while forensic analysis teams typically require much longer-lived logs and files. Remember that the average dwell time of a successful attack is somewhere between 150 and 300 days.

While there is commonality in the staff skills of forensic analysis and incident response teams, and in fact incident response is often considered a subset of the broader forensic discipline, there are important differences in job requirements. Both types of work need strong log analysis and malware analysis skills. Incident response requires the ability to rapidly isolate an infected device and to establish methods to remediate or quarantine the device. Interactions tend to be with other security and operations staff members. Forensic analysis typically requires interactions with a much broader set of departments, including legal, compliance, operations and HR.

Not surprisingly, the perceived benefits of these activities also differ.

The ability to eliminate a threat on one device in near real time is a major factor in keeping breaches isolated and limited in impact. Incident response, and proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response’s less glamorous relative. Nevertheless, the benefits of this work are undeniable. A thorough forensic investigation enables the removal of all threats through careful analysis of an entire attack chain of events. And that is nothing to laugh about.

Do your endpoint security processes make provision for both immediate incident response and long-term historical forensic analysis?