Written by Charles Leaver, Ziften CEO
If your business computing environment is not properly managed, it cannot be fully secured. And you cannot efficiently manage those complex business systems unless you are confident they are secure.
Some might call this a chicken-and-egg situation, where you have no idea where to begin. Should you start with security? Or should you start with managing your systems? That's the wrong way to think about it. Consider this instead like Reese's Peanut Butter Cups: it's not chocolate first, and it's not peanut butter first. Both are mixed together – and treated as a single scrumptious treat.
Many organizations – I would argue most organizations – are structured with an IT management department reporting to a CIO, and a security management group reporting to a CISO. The CIO team and the CISO team often don't know each other, speak to each other only when absolutely necessary, have separate budgets, certainly have separate priorities, read different reports, and use different management platforms. On a day-to-day basis, what constitutes a task, an issue, or an alert for one team flies completely under the other team's radar.
That's bad, because both the IT and security teams have to make assumptions. The IT team assumes that everything is secure unless someone notifies them otherwise. For instance, they assume that devices and applications have not been compromised, that users have not escalated their privileges, and so on. Likewise, the security team assumes that the servers, desktops, and mobile devices are working properly, that operating systems and applications are fully up to date, that patches have been applied, and so on.
Since the CIO and CISO teams aren't talking to each other, don't understand each other's roles and priorities, and aren't using the same tools, those assumptions may not be valid.
And again, you cannot have a secure environment unless that environment is properly managed – and you cannot manage that environment unless it's secure. Or to put it another way: an insecure environment makes anything you do in the IT organization suspect, and means that you can't know whether the information you are seeing is correct or manipulated. It may all be fake news.
How to Bridge the IT / Security Gap
How to bridge that gap? It sounds simple, but it can be challenging: make sure there is an umbrella covering both the IT and security teams. Both IT and security should report to the same person or organization somewhere. It might be the CIO, it might be the CFO, it might be the CEO. For the sake of argument, let's say it's the CFO.
If the business doesn't have a secure environment and there's a breach, the value of the brand and the company can be reduced to nothing. Similarly, if the users, devices, infrastructure, applications, and data aren't well managed, the business can't operate efficiently, and the value drops. As we have discussed: if it's not well managed, it can't be secured, and if it's not secured, it can't be well managed.
The fiduciary responsibility of senior executives (like the CFO) is to protect the value of organizational assets, and that means making sure IT and security talk to each other, understand each other's priorities, and, where possible, can see the same reports and data – filtered and presented meaningfully for their specific areas of responsibility.
That's the thinking we adopted in the design of our Zenith platform. It's not a security management tool with IT capabilities, and it's not an IT management tool with security capabilities. No, it's a Peanut Butter Cup, built equally around chocolate and peanut butter. To be less confectionery about it: Zenith is an umbrella that gives IT teams exactly what they need to do their jobs, and gives security teams what they need as well – without coverage gaps that might undermine assumptions about the state of enterprise security and IT management.
We need to ensure that our organization's IT infrastructure is built on a secure foundation – and that our security is implemented on a well-managed base of hardware, infrastructure, software applications, and users. We can't run at peak performance, and with full fiduciary responsibility, otherwise.
Written by Roark Pollock and presented by Charles Leaver, Ziften CEO
A survey recently completed by Gallup found that 43% of employed Americans worked remotely for at least some of their work time in 2016. Gallup, which has been surveying telecommuting trends in the USA for almost a decade, continues to see more employees working outside of traditional offices, and an increasing number of them doing so for more days of the week. And naturally, the number of connected devices the average employee uses has jumped as well, which helps drive the convenience of, and preference for, working away from the office.
This freedom certainly makes for happier – and, it is hoped, more productive – employees, but the issues these trends present for both security and systems operations teams should not be dismissed. IT asset discovery, IT systems management, and threat detection and response functions all benefit from real-time and historical visibility into user, device, application, and network connection activity. And to be truly effective, endpoint visibility and monitoring must work no matter where the user and device are operating, be it on the network (local), off the network but connected (remote), or disconnected (offline). Current remote working trends are increasingly leaving security and operational teams blind to potential issues and threats.
The mainstreaming of these trends makes it much more difficult for IT and security teams to restrict what was previously deemed higher-risk user behavior, such as working from a coffee shop. But that ship has sailed, and today systems management and security teams must be able to thoroughly track user, device, application, and network activity, find anomalies and inappropriate actions, and enforce appropriate action or fixes regardless of whether an endpoint is locally connected, remotely connected, or disconnected.
In addition, the fact that many employees now routinely access cloud-based applications and assets, and have backup USB or network-attached storage (NAS) drives at home, further amplifies the need for endpoint visibility. Endpoint controls often provide the only record of remote activity that no longer necessarily terminates in the corporate network. Offline activity presents the most extreme example of the need for continuous endpoint monitoring. Clearly, network controls and network monitoring are of little use when a device is operating offline. Installing a proper endpoint agent is crucial to ensure the capture of all important security and system data.
As an example of the kinds of offline activity that can be discovered, a customer was recently able to track, flag, and report unusual behavior on a corporate laptop: a high-level executive transferred huge quantities of endpoint data to an unauthorized USB drive while the device was offline. Because the endpoint agent was able to collect this behavioral data during the offline period, the customer was able to see the unusual action and follow up appropriately. Continuing to monitor device, application, and user behavior even when the endpoint was disconnected gave the customer visibility they never had before.
Does your organization have continuous monitoring and visibility when employee endpoints are not connected? If so, how do you achieve it?
Written by Roark Pollock and presented by Ziften CEO Charles Leaver
If you are a student of history, you will notice many examples of serious unintended consequences when new technology is introduced. It often surprises people that new technologies can be put to dubious uses in addition to the positive purposes for which they were brought to market, but it happens all the time.
For instance, train robbers using dynamite ("You think you used enough dynamite there, Butch?") or spammers using email. More recently, the use of SSL to conceal malware from security controls has become more common precisely because the legitimate use of SSL has made the technique more effective.
Since new technology is so often appropriated by bad actors, we have no reason to believe this won't be true of the new generation of machine learning tools that have reached the market.
To what degree will these tools be misused? There are probably a few ways in which attackers could use machine learning to their advantage. At a minimum, malware authors will test their new malware against the new class of advanced threat protection products in a quest to modify their code so that it is less likely to be flagged as malicious. The effectiveness of protective security controls always has a half-life due to adversarial learning. An understanding of machine learning defenses will help attackers become more proactive in degrading the effectiveness of machine learning based defenses. An example would be an attacker flooding a network with fake traffic with the intention of "poisoning" the machine learning model being built from that traffic. The attacker's goal would be to trick the defender's machine learning tool into misclassifying traffic, or to generate such a high level of false positives that the defenders dial back the fidelity of the alerts.
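To make the poisoning idea concrete, here is a toy sketch – not any real product's model – of how injected training traffic can shift a naive statistical detector's threshold until a genuine exfiltration burst looks normal. The traffic rates and the mean-plus-three-sigma rule are illustrative assumptions:

```python
from statistics import mean, stdev

def fit_threshold(rates):
    """Fit a naive anomaly threshold: mean + 3 standard deviations
    of the bytes-per-second rates seen in the training window."""
    return mean(rates) + 3 * stdev(rates)

# Legitimate baseline traffic the detector trains on.
normal = [100, 110, 95, 105, 98, 102] * 10
clean_threshold = fit_threshold(normal)

# An attacker floods the training window with fake high-rate traffic,
# "poisoning" the model so the learned threshold creeps upward.
poisoned = normal + [400, 450, 500] * 20
poisoned_threshold = fit_threshold(poisoned)

burst = 500  # a later exfiltration burst
print(burst > clean_threshold)     # True: the clean model flags it
print(burst > poisoned_threshold)  # False: the poisoned model misses it
```

The same dynamic applies to far more sophisticated models: whoever controls the training data influences what the model considers "normal."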
Machine learning will likely also be used as an attack tool. For example, some researchers forecast that attackers will use machine learning techniques to sharpen their social engineering attacks (e.g., spear phishing). Automating the effort it takes to tailor a social engineering attack is especially troubling given the effectiveness of spear phishing. The ability to automate mass customization of these attacks is a potent economic incentive for attackers to adopt the techniques.
Expect breaches that deliver ransomware payloads to rise sharply in 2017.
The need to automate tasks is a major driver of investment decisions for both attackers and defenders. Machine learning promises to automate detection and response and to increase operational tempo. While the technology will increasingly become a standard component of defense-in-depth strategies, it is not a silver bullet. It must be understood that attackers are actively working on evasion approaches around machine learning based detection solutions while also using machine learning for their own attack purposes. This arms race will require defenders to increasingly perform incident response at machine speed, further sharpening the need for automated incident response capabilities.
Written by Josh Harriman and presented by Charles Leaver, Ziften CEO
Repetition of a theme in computer security is never a bad thing. As sophisticated as some cyber attacks can be, you really have to watch for and understand the use of common, readily available tools in your environment. These tools are typically used by your IT staff, would probably be whitelisted, and can be missed by security teams mining through all of the legitimate applications that "might" be executed on an endpoint.
Once someone has breached your network – which can be done in a variety of ways, and is another blog post for another day – signs of these programs/tools running in your environment should be examined to ensure proper usage.
A few tools/commands and their capabilities:
Netstat – Details current connections on the system. Can be used to identify other systems within the network.
PowerShell – Built-in Windows command-line shell that can perform a variety of actions, such as gathering key information about the system, killing processes, and adding or deleting files.
WMI – Another powerful built-in Windows utility. Can move files around and gather key system information.
Route Print – Command to view the local routing table.
Net – Enumerates domains, groups, users, and accounts.
RDP (Remote Desktop Protocol) – Program to access systems remotely.
AT – Schedules tasks.
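As a hedged illustration of putting this list to work, a simple filter over process-creation events (such as those from Windows Event ID 4688) can flag watched admin tools launched by unexpected users. The event shape and binary names here are assumptions, not any particular product's schema:

```python
# Watchlist built from the tools above; adjust names to your environment.
WATCHLIST = {"netstat.exe", "powershell.exe", "wmic.exe",
             "route.exe", "net.exe", "mstsc.exe", "at.exe"}

def flag_tool_usage(events, authorized_users=frozenset()):
    """Return process-creation events where a watched admin tool
    was launched by a user outside the authorized set."""
    flagged = []
    for ev in events:
        # Keep just the binary name from a full Windows path.
        name = ev.get("process", "").lower().rpartition("\\")[2]
        if name in WATCHLIST and ev.get("user") not in authorized_users:
            flagged.append(ev)
    return flagged

events = [
    {"user": "it-admin", "process": r"C:\Windows\System32\net.exe"},
    {"user": "jdoe",     "process": r"C:\Windows\System32\wmic.exe"},
    {"user": "jdoe",     "process": r"C:\Windows\explorer.exe"},
]
print(flag_tool_usage(events, authorized_users={"it-admin"}))
# Only jdoe's wmic.exe launch is flagged.
```

In practice you would add time windows and parent-process context, but even this simple aggregation surfaces "living off the land" activity that an application whitelist alone would pass.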
Looking for activity from these tools can consume a lot of time and sometimes be frustrating, but it is essential to identifying who might be shuffling around in your environment. And not just what is happening in real time, but also in the past, to see the path someone may have taken through the network. It's often not "patient zero" that is the target; once attackers gain a foothold, they can use these commands and tools to begin their reconnaissance and finally move to a high-value asset. It's that lateral movement you want to find.
You need the ability to collect the information discussed above, and the means to sort through it to discover, alert on, and investigate this data. You can use Windows Events to monitor various changes on a device and then filter that down.
Looking at the screenshots below from our Ziften console, you can see a quick difference between what our IT team used to push out changes in the network, versus someone running a very similar command themselves. This may be similar to what you would find when someone did it remotely, say via an RDP session.
An interesting side note in these screenshots is that in all cases the Process Status is "Terminated". You would not see this detail during a live investigation, or if you were not continuously collecting the data. But since we are gathering all of the data continuously, you have this historical record to examine. If you were seeing the status as "Running", it could indicate that someone is live on that system right now.
This only scratches the surface of what you should be collecting and how to determine what is right for your environment, which of course will differ from that of others. But it's a start. Malicious actors intent on doing you harm will usually look for the path of least resistance. Why try to create new and interesting tools when much of what they need is already there and ready to go?
Written by Roark Pollock and presented by Ziften CEO Charles Leaver
There might be a joke somewhere about the forensic analyst who was late to the incident response party. There is the seed of a joke in the idea, at least – but of course, you have to understand the differences between incident response and forensic analysis to appreciate the potential for humor.
Incident response and forensic analysis are related disciplines that can leverage similar tools and related data sets, but they also have some crucial differences. There are four especially important differences between incident response and forensic analysis:
– Goals.
– Data requirements.
– Team skills.
– Perceived benefits.
The difference in the goals of incident response and forensic analysis is perhaps the most important. Incident response is focused on determining a fast (i.e., near real-time) response to an immediate threat or issue. For example, a house is on fire, and the firefighters who show up to put the fire out are doing incident response. Forensic analysis is usually performed as part of a scheduled compliance, legal discovery, or law enforcement investigation. For example, a fire investigator might examine the remains of that house fire to determine the overall damage to the property, the cause of the fire, and whether the source was such that other houses are also at risk. In other words, incident response is focused on containment of a threat or issue, while forensic analysis is focused on a complete understanding and thorough remediation of a breach.
A second major difference between the disciplines is the data resources required to achieve their goals. Incident response teams typically only need short-term data sources, often no more than a month or so, while forensic analysis teams typically need much longer-lived logs and files. Remember that the average dwell time of a successful attack is somewhere between 150 and 300 days.
While there is commonality in the personnel skills of forensic analysis and incident response teams – and in fact incident response is often considered a subset of the broader forensic discipline – there are important differences in job requirements. Both types of work require strong log analysis and malware analysis skills. Incident response requires the ability to quickly isolate an infected device and to establish methods to remediate or quarantine it. Communications tend to be with other security and operations team members. Forensic analysis typically requires interaction with a much broader set of departments, including legal, compliance, operations, and HR.
Not surprisingly, the perceived benefits of these activities also differ.
The ability to eliminate a threat on one device in near real time is a major factor in keeping breaches isolated and limited in impact. Incident response, along with proactive threat hunting, is the first line of defense in security operations. Forensic analysis is incident response's less glamorous relative. Nevertheless, the benefits of this work are undeniable. A thorough forensic investigation enables the removal of all threats through careful analysis of the entire attack chain of events. And that is nothing to laugh about.
Do your endpoint security processes provide for both immediate incident response and long-term historical forensic analysis?
Written by Jesse Sampson and presented by Charles Leaver, CEO Ziften
Why do attackers use the same techniques over and over? The simple answer is that they continue to work. For example, Cisco's 2017 Cybersecurity Report tells us that after years of decline, spam email with malicious attachments is on the rise again. In that traditional attack vector, malware authors typically conceal their activities by using a filename similar to a common system process.
There is not necessarily a connection between a file's name and its contents: anyone who has tried to hide sensitive information by giving it a boring name like "taxes", or changed the extension of a file attachment to get around email rules, knows this principle. Malware creators understand it as well, and will often name malware to resemble common system processes. For example, "iexplore.exe" is Internet Explorer, but "iexplorer.exe" with an extra "r" could be anything. It's easy even for experts to overlook this small difference.
The opposite problem – known .exe files running in unusual places – is simple to address using string functions and SQL set operations.
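In Python terms (rather than SQL), that check amounts to comparing each observed path against a whitelist of expected directories. The directory mappings below are illustrative assumptions, not a complete list:

```python
# Where these well-known Windows binaries normally live (illustrative only).
EXPECTED_DIRS = {
    "svchost.exe":  "c:\\windows\\system32",
    "explorer.exe": "c:\\windows",
}

def flag_unusual_locations(observed_paths):
    """Return (name, path) pairs where a known binary name is seen
    running outside its expected directory."""
    hits = []
    for path in observed_paths:
        directory, _, name = path.lower().rpartition("\\")
        expected = EXPECTED_DIRS.get(name)
        if expected is not None and directory != expected:
            hits.append((name, path))
    return hits

print(flag_unusual_locations([
    r"C:\Windows\System32\svchost.exe",    # expected location: not flagged
    r"C:\Users\jdoe\AppData\svchost.exe",  # unusual location: flagged
]))
```

In SQL the same logic is a join between observed paths and the whitelist table, keeping rows where the directories disagree.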
What about the other scenario, finding close matches to the executable name? Most people start their hunt for near string matches by sorting data and visually scanning for discrepancies. This usually works well for a small data set, perhaps even a single system. Finding these patterns at scale, however, requires an algorithmic approach. One established technique for "fuzzy matching" is edit distance.
What's the best way to compute edit distance? For Ziften, our technology stack includes HP Vertica, which makes this task easy. The web is full of data scientists and data engineers singing Vertica's praises, so suffice it to say that Vertica makes it simple to build custom functions that take full advantage of its power – from C++ power tools to statistical modeling scalpels in R and Java.
This Git repo is maintained by Vertica enthusiasts working in industry. It's not a supported offering, but the Vertica team is certainly aware of it, and is thinking every day about how to make Vertica more useful for data scientists – a great space to watch. Best of all, it contains a function to compute edit distance! There are also other natural language processing tools there, like word stemmers and tokenizers.
Using edit distance on the top executable paths, we can quickly find the closest match to each of our top hits. This is an interesting data set: we can sort by distance to find the closest matches across the whole dataset, or sort by frequency of the top path to see the nearest match to each of our commonly used processes. This data can also appear on contextual "report card" pages to show, e.g., the top five nearest strings for a given path. Below is a toy example to give a sense of usage, based on real data ZiftenLabs observed in a customer environment.
Setting a threshold of 0.2 seems to produce good results in our experience, but the point is that it can be tuned to fit specific use cases. Did we find any malware? We notice that "teamviewer_.exe" (should be just "teamviewer.exe"), "iexplorer.exe" (should be "iexplore.exe"), and "cvshost.exe" (should be "svchost.exe", unless perhaps you work for CVS pharmacy…) all look odd. Since we're already in our database, it's also trivial to pull the associated MD5 hashes, Ziften suspicion scores, and other attributes for a deeper dive.
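Outside of Vertica, the same fuzzy match can be sketched in a few lines of Python. The filenames come from the toy example above, and the 0.2 cutoff is applied to a normalized distance (edit distance divided by the longer string's length):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def normalized_distance(a: str, b: str) -> float:
    return edit_distance(a, b) / max(len(a), len(b))

known = ["iexplore.exe", "svchost.exe", "teamviewer.exe"]
observed = ["iexplorer.exe", "cvshost.exe", "teamviewer_.exe"]

for name in observed:
    best = min(known, key=lambda k: normalized_distance(name, k))
    d = normalized_distance(name, best)
    if 0 < d <= 0.2:  # close to, but not exactly, a known name: suspicious
        print(f"{name} looks like {best} (distance {d:.2f})")
```

The "close but not equal" condition is the key: a distance of zero is a legitimate process, while a small positive distance is exactly the lookalike pattern malware authors rely on.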
In this particular real-world environment, it turned out that teamviewer_.exe and iexplorer.exe were portable applications, not known malware. We helped the customer with further investigation of the user and system where we observed the portable applications, since use of portable apps on a USB drive can be evidence of suspicious activity. The more troubling find was cvshost.exe. Ziften's intelligence feeds indicate that this is a suspicious file. Searching for the MD5 hash of this file on VirusTotal confirms the Ziften data, indicating that this is a potentially serious Trojan that may be a component of a botnet or doing something even more malicious. Once the malware was discovered, however, it was easy to resolve the issue – and make sure it stays resolved – using Ziften's ability to kill and persistently block processes by MD5 hash.
Even as we develop advanced predictive analytics to detect malicious patterns, it is important that we continue to improve our ability to hunt for known patterns and old tricks. Just because new threats emerge does not mean the old ones go away!
If you liked this post, keep watching this space for the second part of this series, where we will apply this approach to hostnames to find malware droppers and other malicious sites.
Written by Roark Pollock and presented by Ziften CEO Charles Leaver
In the very recent past, everyone knew what you meant if you brought up the subject of an endpoint. If someone wanted to sell you an endpoint security product, you knew exactly what devices that software was going to protect. But when I hear someone casually talk about endpoints today, The Princess Bride's Inigo Montoya comes to mind: "You keep using that word. I do not think it means what you think it means." Today an endpoint could be almost any type of device.
In fact, endpoints are so varied these days that people have taken to calling them "things." According to Gartner, at the end of 2016 there were more than six billion "things" connected to the internet. The consulting firm predicts this number will grow to twenty-one billion by 2020. The business uses of these things will be both generic (e.g., connected light bulbs and HVAC systems) and industry-specific (e.g., oil well safety monitoring). For the IT and security teams responsible for connecting and protecting endpoints, however, this is just half of the new challenge. The embrace of virtualization technology has redefined what an endpoint is, even in environments where these teams have traditionally operated.
The past ten years have seen an enormous change in the way end users access information. Physical devices continue to become more mobile, with many knowledge workers now doing the majority of their computing and communication on laptops and mobile phones. More importantly, everyone is becoming a knowledge worker. Today, better instrumentation and monitoring have enabled levels of data collection and analysis that can make the insertion of information technology into almost any job profitable.
At the same time, more traditional IT assets, particularly servers, are being virtualized to remove some of the long-standing constraints of having those assets tied to physical devices.
Together, these two trends will affect security teams in important ways. The totality of "endpoints" will include billions of long-lived and insecure IoT endpoints, as well as billions of virtual endpoint instances that will be scaled up and down on demand and moved to different physical locations as needed.
Organizations will have very different concerns with these two general kinds of endpoints. Over their lifetimes, IoT devices will need to be secured against a host of threats, some of which have yet to be conceived. Monitoring and securing these devices will require sophisticated detection capabilities. On the positive side, it will be possible to maintain distinct log data to enable forensic investigation.
Virtual endpoints, on the other hand, present their own important concerns. The ability to move their physical location makes it much more challenging to ensure proper security policies stay attached to the endpoint. And the practice of reimaging virtual endpoints can make forensic investigation difficult, as important data is usually lost when a new image is applied.
So no matter what word or phrase is used to describe your endpoints – endpoint, system, client device, user device, mobile phone, server, virtual machine, container, cloud workload, IoT device, etc. – it is important to understand precisely what someone means when they use the term.
Written by Dr. Al Hartmann and presented by Charles Leaver, CEO Ziften
If Prevention Has Failed, Detection Is Essential
The final scene in the well-known Vietnam War film Platoon depicts a North Vietnamese Army regiment in a surprise nighttime attack breaching the concertina wire perimeter of an American Army battalion, overrunning it, and slaughtering the stunned defenders. The desperate company commander, grasping their dire defensive predicament, orders his air support to strike his own position: "For the record, it's my call – dump everything you've got left on my position!" Moments later the battlefield is immolated in a napalm hellscape.
Although a physical conflict, this highlights two aspects of cyber security: (1) you have to deal with inevitable perimeter breaches, and (2) it can be absolute hell if you do not detect early and respond forcefully. MITRE Corporation has been leading the call for rebalancing cyber security priorities to put due emphasis on detecting breaches in the network interior rather than simply concentrating on penetration prevention at the network perimeter. Without defense in depth, the latter produces a flawed "tootsie pop" defense – hard, crunchy shell, soft chewy center. Writing in a MITRE blog, "We could see that it wouldn't be a question of if your network would be breached but when it would be breached," explains Gary Gagnon, MITRE's senior vice president, director of cybersecurity, and chief security officer. "Today, organizations are asking 'How long have the intruders been inside? How far have they gone?'"
Some call this the "assumed breach" approach to cybersecurity, or as posted to Twitter by F-Secure's Chief Research Officer:
Q: How many of the Fortune 500 are compromised? A: 500.
This is based on the likelihood that any sufficiently complex cyber environment harbors an existing compromise, and that Fortune 500 businesses are of magnificently complex scale.
Shift the Burden of Perfect Execution from the Defenders to the Attackers
The traditional cyber security viewpoint, derived from the legacy perimeter defense model, has been that the attacker only has to be right once, while the defender has to be right every time. A sufficiently resourced and persistent attacker will eventually achieve penetration. And time to successful penetration decreases with the increasing size and complexity of the target enterprise.
A perimeter- or prevention-reliant cyber defense model essentially requires perfect execution by the defender, while granting success to any sufficiently sustained attack – a recipe for certain cyber catastrophe. For example, one leading cyber security red team reports successful enterprise penetration in under three hours in more than 90% of their customer engagements – and these white hats are restricted to ethical methods. Your enterprise's black hat attackers are not so constrained.
To be viable, a cyber defense strategy must turn the tables on the attackers, shifting to them the unachievable burden of perfect execution. That is the rationale for a strong detection capability that continuously monitors endpoint and network behavior for any anomalous signs or observed attacker footprints inside the perimeter. The more sensitive the detection capability, the more care and stealth the attackers must exercise in executing their kill chain sequence, and the more time, labor, and skill they must invest. The defenders need only observe a single attacker misstep to uncover their tracks and unravel the attack kill chain. Now the defenders become the hunters, the attackers the hunted.
The MITRE ATT&CK Model
MITRE provides a detailed taxonomy of attacker footprints, covering the post-compromise segment of the kill chain, known by the acronym ATT&CK, for Adversarial Tactics, Techniques, and Common Knowledge. ATT&CK project team leader Blake Strom says, "We decided to focus on the post-compromise period [the part of the kill chain outlined in orange below], not only because of the strong likelihood of a breach and the scarcity of actionable information, but also because of the many opportunities and intervention points available for effective defensive action that do not necessarily rely on prior knowledge of adversary tools."
As shown in the MITRE figure above, the ATT&CK model provides additional granularity on the post-compromise stages of the attack kill chain, breaking them out into ten tactic categories. Each tactic category is further detailed into a list of techniques an adversary could employ in carrying out that tactic. The January 2017 update of the ATT&CK matrix lists 127 techniques across its ten tactic categories. For example, Registry Run Keys / Start Folder is a technique in the Persistence category, Brute Force is a technique in the Credential Access category, and Command-Line Interface is a technique in the Execution category.
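The tactic-to-technique structure is easy to picture as a simple mapping. The excerpt below contains only the three examples named above – a tiny illustrative slice, not the full 127-technique matrix:

```python
# Tiny excerpt of the ATT&CK tactic -> technique mapping described above;
# the full January 2017 matrix lists 127 techniques across ten tactics.
ATTACK_EXCERPT = {
    "Persistence":       ["Registry Run Keys / Start Folder"],
    "Credential Access": ["Brute Force"],
    "Execution":         ["Command-Line Interface"],
}

def tactics_for(technique: str):
    """Return all tactic categories that list the given technique."""
    return [tactic for tactic, techniques in ATTACK_EXCERPT.items()
            if technique in techniques]

print(tactics_for("Brute Force"))  # ['Credential Access']
```

Mapping observed endpoint behavior onto a structure like this is what lets a detection pipeline report not just "something odd happened" but which stage of the kill chain it likely represents.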
Leveraging Endpoint Detection and Response (EDR) in the ATT&CK Model
Endpoint Detection and Response (EDR) products, such as Ziften's, provide critical visibility into attacker use of techniques listed in the ATT&CK model. For example, Registry Run Keys / Start Folder technique usage is reported, as is Command-Line Interface usage, because both involve readily observable endpoint behavior. Brute Force usage in the Credential Access category should be blocked by design in any authentication architecture, and should be visible from the resulting account lockout. But even here, an EDR product can report events such as failed login attempts, where an attacker may take a few guesses while staying under the account lockout threshold.
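A minimal sketch of that idea – the event shape is an assumption, and a real EDR pipeline would window by time – is to aggregate failed-login events per account and flag accounts whose long-run total is suspicious even though no single burst tripped the lockout:

```python
from collections import Counter

def slow_brute_force_candidates(failed_logins, suspicious_total=10):
    """Flag accounts whose cumulative failed logins across the whole
    log exceed a threshold, even if each burst stayed under the
    per-window account lockout limit."""
    totals = Counter(ev["account"] for ev in failed_logins)
    return sorted(acct for acct, n in totals.items() if n >= suspicious_total)

# Three failures a day for four days never trips a 5-try lockout,
# but the 12-failure total stands out in the historical record.
log = [{"account": "svc-backup"}] * 12 + [{"account": "jdoe"}] * 2
print(slow_brute_force_candidates(log))  # ['svc-backup']
```

This is exactly the kind of detection that requires continuously collected historical data: each individual event looks benign, and only the aggregate reveals the technique.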
For attentive defenders, any technique use might be the attack giveaway that unravels the whole kill chain. EDR solutions compete on their technique observation, reporting, and alerting capabilities, as well as on their analytics capacity to perform more of the attack pattern detection and kill chain reconstruction in support of the security analysts staffing the enterprise SOC. Here at Ziften we will describe more EDR solution capabilities supporting the ATT&CK post-compromise detection model in future blogs in this series.
Written By Michael Vaughan And Presented By Charles Leaver Ziften CEO
Security, network and operational teams require more customized solutions in 2017
Many of us have attended security conferences over the years, but none carry the same high level of excitement as RSA – where the world talks security. Of all the conferences I have attended and worked, nothing comes close to the passion for new technology people showed this past week in downtown San Francisco.
After taking a few days to absorb the dozens of conversations about the requirements and constraints of existing security tech, I have been able to synthesize a single theme among attendees: people want personalized solutions that match their environment and will work across numerous internal groups.
When I use the term “people,” I mean everyone in attendance, regardless of technological segment. Operational experts, security professionals, network veterans, and even user behavior analysts stopped by the Ziften booth and shared their stories with us.
Everybody seemed more prepared than ever to discuss their needs and wants for their environment. These attendees had their own set of goals they wanted to achieve within their department, and they were hungry for answers. Because the Ziften Zenith service provides such broad visibility into enterprise devices, it’s no surprise that our booth stayed crowded with people eager to learn more about a new, refreshingly simple endpoint security technology.
Attendees came with complaints about a myriad of enterprise-centric security concerns and sought deeper insight into what’s really happening on their network and on devices traveling in and out of the workplace.
End users of old-school security solutions are on the hunt for newer, more essential software.
If I could pick just one of the frequent questions I received at RSA to share, it’s this one:
“What exactly is endpoint discovery?”
1) Endpoint discovery: Ziften exposes a historical view of unmanaged devices which have been connected to other enterprise endpoints at some point in time. Ziften enables users to discover known and unknown entities which are active or have been interacting with recognized endpoints.
a. Unmanaged Asset Discovery: Ziften uses our extension platform to reveal these unknown entities running on the network.
b. Extensions: These are custom-fit solutions tailored to the user’s specific wants and needs. The Ziften Zenith agent can run the designated extension on a single occasion, on a schedule, or on a continuous basis.
Usually, after the above description came the real reason they were attending:
People are searching for a vast array of solutions for numerous departments, including executives. This is where working at Ziften makes addressing this question a treat.
Only a portion of the RSA attendees are security experts. I spoke with plenty of network, operations, and endpoint management staff, vice presidents, general managers, and channel partners.
They clearly all use and understand the need for quality security software, but seemingly find the translation to business value missing among security vendors.
NetworkWorld’s Charles Araujo phrased the problem quite well in a post last week:
Organizations must also rationalize security data in a business context and manage it holistically as part of the overall IT and business operating model. A group of vendors is also attempting to tackle this challenge …
Ziften was among just three companies mentioned.
After listening to the wants and needs of people from numerous business-critical backgrounds and explaining the capabilities of Ziften’s Extension platform, I typically described how Ziften could tailor an extension to solve their need, or gave them a brief demo of an extension that would let them overcome an obstacle.
2) Extension Platform: Customized, actionable solutions.
a. SKO Silos: Extensions based on fit and requirement (operations, network, endpoint, etc.).
b. Customized Requests: Need something you don’t see? We can fix that for you.
3) Enhanced Forensics:
a. Security: Risk Management, Threat Assessment, Vulnerabilities, Suspicious Metadata.
b. Operations: Compliance, License Justification, Unmanaged Assets.
c. Network: Ingress/Egress IP movement, Domains, Volume metadata.
4) Visibility within the network – not just what goes in and out.
a. ZFlow: Finally see the network traffic inside your enterprise.
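One simple way to picture the endpoint discovery described above is as a diff between devices observed talking to managed endpoints and the known asset inventory. This Python sketch is purely illustrative – the data shapes and addresses are assumptions, not how Zenith works internally:

```python
# Sketch: unmanaged asset discovery as a diff between peers observed
# talking to managed endpoints and the known asset inventory.
# Inventory contents and observation format are illustrative assumptions.
managed_inventory = {"10.0.0.5", "10.0.0.6", "10.0.0.7"}

# (managed_endpoint, peer) connection observations, e.g. from agent netflow
observations = [
    ("10.0.0.5", "10.0.0.6"),       # known peer
    ("10.0.0.5", "10.0.0.99"),      # unknown peer
    ("10.0.0.7", "192.168.1.42"),   # unknown peer
]

def discover_unmanaged(inventory, observations):
    """Return peers seen talking to managed endpoints but absent from inventory."""
    peers = {peer for _, peer in observations}
    return sorted(peers - inventory)

print(discover_unmanaged(managed_inventory, observations))
# ['10.0.0.99', '192.168.1.42']
```

The historical angle matters: keeping every past observation, not just the current snapshot, is what lets you find devices that connected once and vanished.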
Needless to say, everyone I spoke with in our booth quickly grasped the vital importance of having a tool such as Ziften Zenith running across their enterprise.
Forbes writer Jason Bloomberg said it best when he recently described the future of enterprise security software and how all signs point toward Ziften leading the way:
Perhaps the broadest disruption: vendors are improving their ability to understand how bad actors behave, and can thus take actions to prevent, detect, or mitigate their malicious activities. In particular, today’s vendors understand the ‘Cyber Kill Chain’ – the steps a skilled, patient hacker (known in the biz as an advanced persistent threat, or APT) will take to accomplish his or her nefarious goals.
The product of U.S. defense contractor Lockheed Martin, the Cyber Kill Chain consists of seven links: reconnaissance, weaponization, delivery, exploitation, installation, establishing command and control, and actions on objectives.
Today’s more innovative vendors target one or more of these links, with the goal of preventing, detecting, or mitigating the attack. Five vendors at RSA stood out in this category.
Ziften uses an agent-based approach to tracking the behavior of users, devices, applications, and network components, both in real time and across historical data.
In real time, analysts use Ziften for threat identification and prevention, while they use the historical data to uncover steps in the kill chain for mitigation and forensic purposes.
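As a sketch of how historical data supports that kill chain reconstruction, the following Python fragment orders the events observed for each host along the kill chain stages. Event names and structure are illustrative assumptions, not Ziften’s data model:

```python
# Sketch: reconstruct kill-chain progress per host from historical endpoint
# events. Stage names follow the Lockheed Martin chain; the event format
# is an illustrative assumption.
KILL_CHAIN = ["reconnaissance", "weaponization", "delivery", "exploitation",
              "installation", "command_and_control", "actions_on_objectives"]

def kill_chain_progress(events):
    """Return, per host, the kill-chain stages observed, in chain order."""
    seen = {}
    for host, stage in events:
        seen.setdefault(host, set()).add(stage)
    return {host: [s for s in KILL_CHAIN if s in stages]
            for host, stages in seen.items()}

events = [("host-a", "delivery"), ("host-a", "installation"),
          ("host-b", "exploitation")]
print(kill_chain_progress(events))
# {'host-a': ['delivery', 'installation'], 'host-b': ['exploitation']}
```

A host showing several consecutive stages is a far stronger signal than any single event, which is why retaining the history matters for forensics.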
Written By Dr Al Hartmann And Presented By Ziften CEO Charles Leaver
Get Back to Basics With Hygiene and Avoid Serious Problems
As a kid you were taught that brushing your teeth properly and flossing will avoid the need for costly crowns and root canal procedures. Basic hygiene is far easier and far less expensive than neglect and disease. The same lesson applies in the world of enterprise IT – we can run a sound operation with proper endpoint and network hygiene, or we can deal with mounting security issues and dreadful data breaches as lax hygiene exacts its burdensome toll.
Operational and Security Issues Overlap
Endpoint Detection and Response (EDR) tools like those we develop here at Ziften provide analytic insight into system operation across the enterprise endpoint population. They also offer endpoint-derived network operation insights that substantially expand on wire visibility alone and extend into cloud and virtual environments. These insights benefit both operations and security teams in significant ways, given the substantial overlap between operational and security concerns:
On the security side, EDR tools offer vital situational awareness for incident response. On the operational side, EDR tools provide essential endpoint visibility for operational control. Critical situational awareness requires a baseline understanding of endpoint population operating norms, and that understanding facilitates proper operational control.
Another way to express these interdependencies is:
You cannot secure what you don’t manage.
You cannot manage what you don’t measure.
You cannot measure what you don’t track.
Managing, measuring, and tracking have as much to do with the security function as with the operational role – don’t try to split the baby. Management implies adherence to policy; that adherence must be measured; and operational measurements constitute a time series that must be tracked. A few sparse measurements of a critical dynamic time series lack interpretive context.
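The point about tracked time series can be illustrated with a small Python sketch: a measurement only becomes meaningful against a tracked baseline. The metric and the 3-sigma rule here are illustrative assumptions:

```python
from statistics import mean, stdev

# Sketch of why sparse measurements lack context: a tracked time series
# gives each new measurement a baseline to be judged against.
def out_of_norm(history, latest, sigmas=3.0):
    """Flag a measurement that deviates from the tracked baseline."""
    if len(history) < 2:
        return False            # too sparse: no interpretive context yet
    mu, sd = mean(history), stdev(history)
    return sd > 0 and abs(latest - mu) > sigmas * sd

cpu_history = [12.0, 14.5, 13.2, 12.8, 13.9, 14.1]   # % CPU over time
print(out_of_norm(cpu_history, 13.5))  # False: within norms
print(out_of_norm(cpu_history, 95.0))  # True: investigate
```

With only one or two data points the function refuses to judge at all, which is exactly the failure mode of sparse measurement the text describes.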
Tight security does not make up for lax management, nor does tight management make up for ineffective security. [Read that again for emphasis.] Mission execution imbalances here result in unsustainable inefficiencies and scale challenges that inevitably lead to major security breaches and operational deficiencies.
Significant overlaps between operational and security concerns include:
Configuration hardening and standard images
Application control and cloud management
Network management and segmentation
Data security and file encryption
Asset management and device restore
Mobile device management
Backup and data restore
Patch and vulnerability management
Consistent employee training for cyber awareness
For instance, asset management and device restore, along with backup and data restoration, are likely operational group responsibilities, but they become major security problems when ransomware sweeps the enterprise, bricking all devices (not just the usual endpoints, but any network-attached devices such as printers, badge readers, security cameras, network routers, medical imaging devices, industrial control systems, etc.). What would your enterprise response time be to reflash and refresh all device images from scratch and restore their data? Or is your contingency plan to promptly load the attackers’ Bitcoin wallets and hope they have not exfiltrated your data for further extortion and monetization? And why would you offload your data restoration duty to a criminal syndicate, blindly trusting their perfect data restoration integrity – it makes absolutely no sense. Operational control responsibility rests with the enterprise, not with the attackers, and should not be shirked – shoulder your duty!
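A quick back-of-envelope in Python shows why that response-time question deserves a real answer before the incident; every figure here is an illustrative assumption, not a benchmark:

```python
# Back-of-envelope sketch of enterprise re-image response time after
# a ransomware event. All figures are illustrative assumptions.
devices = 10_000
minutes_per_device = 45          # reflash image + restore data
parallel_technicians = 20        # devices being re-imaged concurrently

total_hours = devices * minutes_per_device / parallel_technicians / 60
print(f"~{total_hours:.0f} hours of round-the-clock work")  # ~375 hours
```

Run against your own device count and staffing, numbers like these make the case for automated imaging and tested restores long before ransomware forces the issue.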
For another example, standard image building using best-practice configuration hardening is clearly a joint responsibility of operations and security staff. In contrast to ineffective signature-based endpoint protection platforms (EPP), which all big enterprise breach victims have long had in place, configuration hardening works, so bake it in and continually refresh it. Likewise, consider the needs of enterprise staff whose job function requires opening unsolicited email attachments, such as resumes, invoices, legal notices, or other needed documents. This should be done in a cloistered virtual sandbox environment, not on your production endpoints. Security personnel will make these decisions, but operations personnel will be imaging the endpoints and supporting the employees. These are shared responsibilities.
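As a sketch of what “baking in” configuration hardening might look like operationally, this Python fragment checks an endpoint’s settings against a hardening baseline and reports the gaps. Setting names and values are illustrative assumptions, not a published benchmark:

```python
# Sketch: checking an endpoint's configuration against a hardening baseline.
# Setting names and values are illustrative assumptions.
baseline = {
    "autorun_disabled": True,
    "macro_execution": "blocked",
    "smbv1_enabled": False,
}

def hardening_gaps(endpoint_config):
    """Return baseline settings the endpoint fails to meet, as (found, required)."""
    return {k: (endpoint_config.get(k), v)
            for k, v in baseline.items()
            if endpoint_config.get(k) != v}

endpoint = {"autorun_disabled": True, "macro_execution": "allowed"}
print(hardening_gaps(endpoint))
# {'macro_execution': ('allowed', 'blocked'), 'smbv1_enabled': (None, False)}
```

Running a check like this continuously, rather than only at image build time, is what keeps hardening from silently drifting.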
Use a safe environment to detonate. Don’t use production endpoints for opening unsolicited but necessary email documents, like resumes, invoices, legal notices, etc.
Focus Limited Security Resources on the Tasks Only They Can Perform
Most big enterprises are challenged to fully staff all their security roles. Left unaddressed, deficiencies in operational effectiveness will burn out security staff so quickly that security roles will constantly be understaffed. There will not be enough fingers on your security team to jam into the growing holes in the security dike that lax or inattentive endpoint, network, or database management produces. And it will be less difficult to staff operational roles than to staff security roles with talented analysts.
Offload routine formulaic activities to operations staff. Concentrate limited security resources on the jobs only they can perform:
Staffing of the Security Operations Center (SOC)
Preventative penetration screening and red teaming
Reactive event response and forensics
Proactive attack hunting (both insider and external)
Security oversight of overlapping operational roles (ensure a current security mindset)
Security policy development and stakeholder buy-in
Security architecture/tools/methodology design, selection, and development
Implement disciplined operations management and focus limited security resources on critical security roles. Then your enterprise may avoid letting operational concerns fester into security problems.