Charles Leaver – Robust Endpoint Monitoring Cannot Be Achieved With Narrow Indicators Of Compromise

Presented By Charles Leaver And Written By Dr Al Hartmann Of Ziften Inc.


The Breadth Of The Indicator – Broad Versus Narrow

A detailed report of a cyber attack will typically supply indicators of compromise. Usually these are narrow in scope, referencing a particular attack group as observed in a particular attack on an organization over a limited time period. Generally these narrow indicators are specific artifacts of an observed attack that may constitute conclusive evidence of compromise on their own. For that attack they offer high specificity, but often at the expense of low sensitivity to similar attacks that use other artifacts.

Because narrow indicators offer such restricted scope, they exist by the billions in continually expanding databases of malware signatures, suspicious network addresses, malicious registry keys, file and packet content snippets, filepaths, intrusion detection rules and so on. The continuous endpoint monitoring solution provided by Ziften aggregates a number of these third party databases and threat feeds into the Ziften Knowledge Cloud, to benefit from known artifact detection. These detection elements can be applied in real time as well as retrospectively. Retrospective application is essential given the short-lived nature of these artifacts, as attackers continually obfuscate the details of their attacks to frustrate this narrow IoC detection approach. This is why a continuous monitoring solution should archive monitoring results for a long period (relative to industry-reported average attacker dwell times), to provide a sufficient lookback horizon.
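The lookback idea above can be sketched in code. The following is a minimal, hypothetical illustration (not Ziften's actual implementation): archived endpoint observations are re-checked whenever fresh narrow IoCs arrive, limited to a configurable lookback horizon.

```python
# Hypothetical sketch of retrospective IoC matching; ArchivedEvent and
# match_retrospective are illustrative names, not Ziften APIs.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ArchivedEvent:
    timestamp: datetime     # when the artifact was observed on the endpoint
    endpoint: str           # which endpoint reported it
    artifact: str           # e.g. a file hash, contacted domain, or registry key

def match_retrospective(archive, fresh_iocs, lookback_days=365):
    """Re-check archived observations against newly learned indicators,
    limited to a lookback horizon sized to reported attacker dwell times."""
    horizon = datetime.utcnow() - timedelta(days=lookback_days)
    iocs = set(fresh_iocs)
    return [e for e in archive if e.timestamp >= horizon and e.artifact in iocs]
```

The design point is that a short horizon silently discards evidence of long-dwell intrusions, which is why the archive period should exceed reported average dwell times.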

Narrow IoCs have significant detection value, but they are largely ineffective at detecting new attacks by experienced hackers. New attack code can be pre-tested against typical enterprise security products in laboratory environments to confirm non-reuse of detectable artifacts. Security solutions that operate merely as black/white classifiers, delivering a definite verdict of malicious or benign, suffer from this weakness: the approach is easily evaded. The protected organization is likely to remain thoroughly compromised for months or years before any detectable artifacts can be identified (after extensive investigation) for the particular attack instance.

In contrast to the ease with which attack artifacts can be obscured by common hacker toolkits, the particular tactics and techniques – the modus operandi – used by attackers have persisted over many years. Common techniques such as weaponized websites and documents, new service installation, vulnerability exploitation, module injection, sensitive folder and registry area modification, newly scheduled tasks, memory and drive corruption, credentials compromise, malicious scripting and many others are broadly typical. The right combination of system logging and monitoring can detect much of this attack activity when properly paired with security analytics that prioritize the highest-risk observations. This entirely removes the attacker's opportunity to pre-test the evasiveness of malicious code, because the risk quantification is not black and white, but nuanced shades of gray. In particular, all endpoint risk is varying and relative, across any network/user environment and time period, and that environment (and its temporal dynamics) cannot be duplicated in any laboratory. The fundamental attacker concealment tactic is foiled.

In future posts we will examine Ziften endpoint risk analysis in greater detail, as well as the vital relationship between endpoint security and endpoint management. “You can’t protect what you don’t manage, you can’t manage what you don’t measure, you can’t measure what you do not track.” Organizations get breached because they have less oversight and control of their endpoint environment than the cyber attackers have. Keep an eye out for future posts…


Carbanak Part Three: How Indicators Of Compromise Would Have Been Flagged By Ziften Continuous Endpoint Monitoring – Charles Leaver

Presented By Charles Leaver And Written By Dr Al Hartmann


Part 3 in a 3 part series


Below are excerpts of the Indicators of Compromise (IoCs) from the technical reports on the Anunak/Carbanak APT attacks, with commentary on how each would be detected by the Ziften continuous endpoint monitoring solution. The Ziften solution focuses on generic indicators of compromise that have remained consistent over years of hacker attacks and cyber security experience. These generic IoCs can be identified for any operating system, including Linux, OS X and Windows. Specific indicators of compromise also exist, pointing to C2 infrastructure or particular attack code instances, but these are short-lived and rarely reused in fresh attacks. There are billions of these artifacts in the cyber security world, with thousands added each day. Generic IoCs are embedded in the Ziften security analytics for the supported operating systems, while specific IoCs reach the Ziften Knowledge Cloud through subscriptions to a number of industry threat feeds and watchlists that aggregate them. Both have value and help triangulate attack activity.

1. Exposed vulnerabilities

Excerpt: All observed cases used spear phishing emails with Microsoft Word 97–2003 (.doc) files or CPL files attached. The .doc files exploit both Microsoft Office (CVE-2012-0158 and CVE-2013-3906) and Microsoft Word (CVE-2014-1761) vulnerabilities.

Comment: Not strictly an IoC, but critical exposed vulnerabilities are a major hacker exploitation vector and a large red flag that raises the risk score (and the SIEM priority) for the endpoint, especially if other indicators are also present. Such vulnerabilities are signs of lax patch management and vulnerability lifecycle management, which weakens the overall cyber defense posture.

2. Suspect Locations

Excerpt: Command and Control (C2) servers located in China have been identified in this campaign.

Remark: The geolocation of endpoint network touches, scored by location, contributes to the risk score that raises the SIEM priority. There are valid reasons for contact with Chinese servers, and some companies may have installations located in China, but this should be verified with spatial and temporal anomaly checks. IP address and domain details should be included with any resulting SIEM alarm so that SOC triage can be performed quickly.
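As a toy illustration of how a geolocation factor might feed a risk score (the country codes, weights, and cap below are illustrative assumptions, not Ziften's scoring model):

```python
# Hypothetical geolocation contribution to an endpoint risk score.
# The weights table and the 1.0 cap are illustrative assumptions.
SUSPECT_LOCATION_WEIGHTS = {"CN": 0.4, "RU": 0.4, "UA": 0.3}

def location_risk(base_score, contact_countries, known_good=frozenset()):
    """Raise the risk score for network touches to suspect geolocations,
    skipping countries the organization has vetted as expected."""
    score = base_score
    for country in contact_countries:
        if country in known_good:
            continue  # e.g. the company has legitimate installations there
        score += SUSPECT_LOCATION_WEIGHTS.get(country, 0.0)
    return min(score, 1.0)
```

Note the `known_good` set: it captures the spatial vetting mentioned above, so expected foreign installations do not inflate the score.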

3. New Binaries

Excerpt: Once the remote code execution vulnerability is successfully exploited, it installs Carbanak on the victim’s system.

Comment: Any new binary is inherently suspicious, but not all of them warrant an alert. Image metadata should be analyzed for a recognizable pattern, for instance a brand new app, or a new version of an existing app from an existing vendor on a plausible filepath for that vendor. Hackers will try to spoof whitelisted apps, so signing data can be compared, along with file size, filepath and similar attributes, to filter out the obvious impostors.
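A minimal sketch of that filtering logic, with hypothetical field names (`signer`, `path`, `vendor_dir`) standing in for whatever metadata the monitoring agent actually records:

```python
# Hypothetical new-binary vetting: a new image is less alarming when its
# signer matches a known vendor and it lands on that vendor's usual filepath.
def looks_like_vendor_update(new_img, known_imgs):
    for known in known_imgs:
        if (new_img["signer"] == known["signer"]
                and new_img["path"].lower().startswith(known["vendor_dir"].lower())):
            return True
    return False
```

Anything that fails this filter is not automatically malicious; it simply keeps its elevated risk score for analyst triage.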

4. Uncommon Or Sensitive Filepaths

Excerpt: Carbanak copies itself into “%system32%\com” with the name “svchost.exe” and the file attributes: system, hidden and read-only.

Comment: Any write into the System32 filepath is suspicious, as it is a sensitive system folder, so it is immediately subject to anomaly scrutiny. A classic anomaly would be svchost.exe, a critical system process image, in the unusual location of the com subdirectory.
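The check itself is mechanical: a well-known system image name running from an unexpected directory. A hypothetical sketch (the expected-path table is an illustrative assumption):

```python
# Hypothetical sensitive-path anomaly check: flag critical Windows process
# image names observed outside their canonical directories.
EXPECTED_DIRS = {
    "svchost.exe": r"c:\windows\system32",
    "lsass.exe":   r"c:\windows\system32",
}

def path_anomaly(image_path):
    """True when a well-known system binary name runs from an odd place."""
    directory, _, name = image_path.lower().rpartition("\\")
    expected = EXPECTED_DIRS.get(name)
    return expected is not None and directory != expected
```

This catches exactly the Carbanak trick quoted above: svchost.exe in a com subdirectory is flagged, while the legitimate System32 copy is not.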

5. New Autostarts Or Services

Excerpt: To make sure that Carbanak has autorun privileges the malware creates a new service.

Comment: A new autostart or service is common with malware and is always checked by the analytics. Anything of low prevalence would be suspicious. If checking the image hash against industry watchlists shows it is unknown to most of the antivirus engines, that raises suspicion further.
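Prevalence across the monitored endpoint population is the key signal here. A hypothetical sketch (the threshold value is an illustrative assumption):

```python
# Hypothetical prevalence check for a newly observed service image hash.
def service_prevalence(service_hash, endpoint_services):
    """Fraction of monitored endpoints on which this hash appears.
    endpoint_services maps endpoint name -> set of service image hashes."""
    hosts = sum(1 for svcs in endpoint_services.values() if service_hash in svcs)
    return hosts / max(len(endpoint_services), 1)

def is_suspicious_service(service_hash, endpoint_services, threshold=0.05):
    """Low-prevalence services merit scrutiny; fleet-wide ones usually do not."""
    return service_prevalence(service_hash, endpoint_services) < threshold
```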

6. Low Prevalence File In High Prevalence Folder

Excerpt: Carbanak creates a file with a random name and a .bin extension in %COMMON_APPDATA%\Mozilla where it stores commands to be executed.

Remark: This is a classic example of “one of these things is not like the other” that is easy for the security analytics to check in a continuous monitoring environment. This IoC is entirely generic; it has nothing to do with which filename or which directory is created. Even though the technical security report lists it as a specific IoC, it is trivially generalized beyond Carbanak to future attacks.
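The generic form of the check – a file seen on almost no hosts inside a folder seen on many – can be sketched as follows. The observation tuple shape and both thresholds are illustrative assumptions:

```python
# Hypothetical "odd one out" prevalence check: low-prevalence files inside
# high-prevalence folders stand out across the endpoint population.
def odd_one_out(observations, min_folder_hosts=10, max_file_hosts=1):
    """observations: iterable of (folder, filename, host) tuples.
    Returns (folder, filename) keys for rare files in common folders."""
    folder_hosts, file_hosts = {}, {}
    for folder, filename, host in observations:
        folder_hosts.setdefault(folder, set()).add(host)
        file_hosts.setdefault((folder, filename), set()).add(host)
    return [key for key, hosts in file_hosts.items()
            if len(hosts) <= max_file_hosts
            and len(folder_hosts[key[0]]) >= min_folder_hosts]
```

Note that the logic never mentions Carbanak, Mozilla, or .bin files; it is the prevalence contrast that does the work, which is exactly why the indicator generalizes.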

7. Suspect Signer

Excerpt: In order to render the malware less suspicious, the latest Carbanak samples are digitally signed.

Comment: Any suspect signer will be treated as suspicious. In one case a signer supplied only an anonymous gmail address, which does not inspire confidence, so the risk score for that image rises. In other cases no email address is offered at all. Signers can easily be listed and a Pareto analysis performed to separate the more trusted from the less trusted signers. A less trusted signer appearing in a more sensitive directory is highly suspicious.
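A hypothetical sketch of that Pareto-style signer analysis (the `trusted_top_n` cutoff and the sensitive-directory list are illustrative assumptions):

```python
# Hypothetical signer trust ranking: rank signers by prevalence, then treat a
# rare signer writing into a sensitive directory as highly suspect.
from collections import Counter

def signer_ranking(signed_images):
    """signed_images: iterable of (signer, path). Most prevalent signer first."""
    counts = Counter(signer for signer, _path in signed_images)
    return [signer for signer, _n in counts.most_common()]

def suspicious_signer(signer, path, ranking, sensitive_dirs, trusted_top_n=2):
    rare = signer not in ranking[:trusted_top_n]
    in_sensitive = any(path.lower().startswith(d.lower()) for d in sensitive_dirs)
    return rare and in_sensitive
```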

8. Remote Administration Tools

Excerpt: There seems to be a preference for the Ammyy Admin remote administration tool for remote control. It is believed that the attackers used this remote administration tool because it is typically whitelisted in the victims’ environments as a result of being used regularly by administrators.

Remark: Remote administration tools (RATs) always raise suspicion, even when whitelisted by the organization. Anomaly checking would determine whether each new remote admin tool is consistent temporally and spatially. RATs are subject to abuse, and hackers prefer to use an organization’s own RATs precisely to avoid detection, so they should not be given a pass merely because they are whitelisted.

9. Patterns Of Remote Login

Excerpt: Logs for these tools indicate that they were accessed from two different IPs, probably used by the attackers, and located in Ukraine and France.

Comment: Remote logins are always suspect, since all external attackers are by definition remote. They also feature heavily in insider attacks, as the insider does not want the activity attributed to his own system. Remote addresses and time pattern anomalies would be inspected, and this should expose low prevalence usage (relative to peer systems) plus any suspect locations.
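A minimal sketch of the peer-relative prevalence part of that check (the tuple shape and fraction threshold are illustrative assumptions):

```python
# Hypothetical rare remote-login source detection, relative to peer systems.
def rare_login_sources(logins, peer_count, max_fraction=0.1):
    """logins: iterable of (source_ip, target_host) pairs. Returns source IPs
    seen on at most max_fraction of peers, sorted for stable output."""
    seen = {}
    for ip, host in logins:
        seen.setdefault(ip, set()).add(host)
    return sorted(ip for ip, hosts in seen.items()
                  if len(hosts) / peer_count <= max_fraction)
```

In practice the flagged IPs would then be joined against a geolocation feed, so a rare source in an unexpected country scores highest.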

10. Atypical IT Tools

Excerpt: We have also found traces of various tools used by the attackers inside the victim’s network to gain control of additional systems, such as Metasploit, PsExec or Mimikatz.

Remark: As sensitive applications, IT tools should always be checked for anomalies, since many hackers subvert them for malicious purposes. Metasploit might legitimately be used by a penetration tester or vulnerability researcher, but such instances are rare. This is a prime example where an unusual-observation report, vetted by security personnel, would lead to corrective action. It also highlights the problem that blanket whitelisting does nothing to help identify suspicious activity.


Charles Leaver – Part Two Of The Carbanak Case Study And Why Continuous Endpoint Monitoring Is So Effective

Presented By Charles Leaver And Written By Dr Al Hartmann

Part 2 in a 3 part series


Continuous Endpoint Monitoring Is Really Effective


Convicting and blocking malicious code before it can compromise an endpoint is great. However, this approach is largely ineffective against cyber attacks that have been pre-tested to evade this kind of security. The real problem is that these covert attacks are conducted by skilled human hackers, while conventional endpoint defense is an automated process that relies mostly on basic antivirus technology. Human intelligence is more innovative and flexible than machine intelligence and will continue to outclass automated machine defenses. This echoes the lesson of the Turing test: automated defenses are attempting to match wits with a knowledgeable human attacker. At present, artificial intelligence and machine learning are not sophisticated enough to fully automate cyber defense, so the human hacker prevails while those attacked are left counting their losses. We are not living in a science fiction world where machines out-think human beings, so you should not believe that a security software suite will automatically take care of all your problems and prevent every attack and data loss.

The only real way to stop a determined human attacker is with an equally determined human cyber defender. To engage your IT Security Operations Center (SOC) staff in this way, they must have complete visibility of network and endpoint operations. This kind of visibility will not be achieved with standard endpoint antivirus suites; they are designed to stay silent unless convicting and quarantining malware. This standard approach leaves the endpoints opaque to security personnel, and hackers use that opacity to hide their attacks. The opacity extends backwards and forwards in time – your security personnel have no idea what was running across your endpoint population previously, or right now, or what to expect in the future. If thorough security personnel find clues that require a forensic lookback to reveal hacker activity, your antivirus suite will be unable to assist. It took no action at the time, so no events will have been recorded.

In contrast, continuous endpoint monitoring is always working – offering real time visibility into endpoint operations, supplying forensic lookbacks to act on newly emerging evidence of attack and detect indications earlier, and establishing a baseline of normal operating patterns so that it knows what to expect and can flag any future abnormalities. Beyond mere visibility, continuous endpoint monitoring provides informed visibility, applying behavioral analytics to identify operations that appear unusual. Anomalies are continuously analyzed and aggregated by the analytics and reported to SOC personnel through the company’s security information and event management (SIEM) system, flagging the most worrying suspicious anomalies for security staff attention and action. Continuous endpoint monitoring augments and scales human intelligence; it does not replace it. It is a bit like the old Sesame Street game, “One of these things is not like the other.”
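As a toy illustration of the aggregation step described above (the score map and `top_n` cutoff are illustrative assumptions, not the Ziften analytics):

```python
# Hypothetical anomaly aggregation: sum per-endpoint anomaly scores and
# forward only the highest-risk endpoints to the SIEM queue.
def flag_for_siem(anomaly_scores, top_n=3):
    """anomaly_scores: {endpoint: [scores]}. Returns the endpoints with the
    highest aggregate anomaly, most anomalous first."""
    totals = {ep: sum(scores) for ep, scores in anomaly_scores.items()}
    return sorted(totals, key=totals.get, reverse=True)[:top_n]
```

The point of the cutoff is triage economics: SOC attention is the scarce resource, so only the most anomalous endpoints surface as alerts.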

A child can play this game. It is simple because most items (high prevalence) resemble each other, while one or a small number (low prevalence) differ and stand out. The dissimilar actions taken by cyber criminals have been quite consistent across decades of hacking. The Carbanak technical reports that listed the indicators of compromise are good examples of this and are discussed below. When continuous endpoint monitoring security analytics are in place and surface these patterns, it is simple to recognize something suspicious or unusual. Cyber security personnel can perform rapid triage on these abnormal patterns, quickly reaching a yes/no/maybe verdict that distinguishes unusual-but-known-good activities from malicious activities, or from activities that require additional tracking and deeper forensic investigation to resolve.

There is no way for a hacker to pre-test attacks against this defense. Continuous endpoint monitoring security has a non-deterministic risk analytics component (that flags suspect activity) together with a non-deterministic human component (that performs alert triage). Depending on current activities, the endpoint population mix and the experience of the cyber security personnel, developing attack activity may or may not be discovered. This is the nature of cyber warfare, and there are no guarantees. But if your cyber defenders are equipped with continuous endpoint monitoring analytics and visibility, they will have an unfair advantage.


Charles Leaver – Part One Carbanak Case Study And Continuous Endpoint Monitoring

Presented By Charles Leaver And Written By Dr Al Hartmann


Part 1 in a 3 part series


Carbanak APT Background Details

A billion dollar bank raid, targeting more than a hundred banks across the world by a group of unidentified cyber criminals, has been in the news. The attacks on the banks started in early 2014 and have been expanding across the globe. Most of the victims suffered devastating infiltrations lasting several months and spanning numerous endpoints before experiencing financial loss. The majority of the victims had security procedures in place, including network and endpoint security systems, but these provided little warning of or defense against the attacks.

A number of security companies have produced technical reports about the incidents, codenamed either Carbanak or Anunak, and these reports listed the indicators of compromise that were observed. The companies include:

Fox-IT of Holland
Group-IB of Russia
Kaspersky Lab of Russia

This post will serve as a case study for the cyber attacks and investigate:

1. Why were traditional endpoint security and network security unable to detect and defend against the attacks?
2. Why would continuous endpoint monitoring (as provided by the Ziften solution) have given early warning of the endpoint attacks and then triggered a response to prevent data loss?

Traditional Endpoint And Network Security Is Inadequate

Built on a legacy security design that relies too heavily on blocking and prevention, traditional endpoint and network security does not deliver a balanced strategy of blocking, prevention, detection and response. It would not be difficult for any cyber criminal to pre-test an attack against a limited number of standard endpoint and network security solutions to be sure the attack would not be detected. Many of the hackers in fact researched the security products in place at the victim organizations and became skilled at breaking through unnoticed. The cyber criminals understood that most of these security products only react to known events and otherwise do nothing. This means that normal endpoint operation remains largely opaque to IT security personnel, so malicious activity stays masked (having already been pre-tested by the hackers to evade detection). After an initial breach, the attack can extend to users with greater privileges and to more sensitive endpoints. This can easily be achieved through credential theft, where no malware is required, and conventional IT tools (whitelisted by the victim organization) can be driven by attacker-created scripts. No detectable malware need be present at the endpoints, and no alarms are raised. Standard endpoint security software is simply too reliant on looking for malware.

Traditional network security can be manipulated in a similar way. Hackers test their network activities first to avoid being identified by commonly distributed IDS/IPS rules, and they carefully observe normal endpoint operation (on endpoints that have been compromised) so they can hide their network activity within normal transaction periods and normal traffic patterns. A new command and control infrastructure is created that is not registered on network address blacklists, at either the IP or domain level. There is very little here to give the cyber criminals away. Nevertheless, more astute network behavioral analysis, particularly when correlated with endpoint context (discussed later in this series), can be far more effective.

It is not time to give up hope, however. Would continuous endpoint monitoring (as supplied by Ziften) have provided early warning of the endpoint hacking, to begin stopping the attacks and prevent data loss? Find out in part 2.