Written by Michael Vaughn and presented by Ziften CEO Charles Leaver
Answers To Your Concerns About WannaCry Ransomware
The WannaCry ransomware attack has infected more than 300,000 computer systems in 150 countries to date by exploiting vulnerabilities in Microsoft’s Windows operating system.
In this brief video, Chief Data Scientist Dr. Al Hartmann and I discuss the nature of the attack, as well as how Ziften can help organizations protect themselves from the vulnerability known as “EternalBlue.”
As discussed in the video, the problem with the Server Message Block (SMB) file-sharing service is that it is enabled on many Windows operating systems and found in most environments. However, we make it simple to determine which systems in your environment have or haven’t been patched. Importantly, Ziften Zenith can also remotely disable the SMB file-sharing service entirely, giving organizations valuable time to ensure those machines are properly patched.
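As a rough illustration of the patch-audit step, the sketch below flags hosts whose installed-hotfix list contains none of the MS17-010 updates. The hostnames and inventory format are made up, and the KB IDs shown are only an example subset; verify the complete KB list for each Windows version against Microsoft’s security bulletin.

```python
# Sketch: flag Windows hosts still exposed to EternalBlue (MS17-010),
# given a per-host set of installed hotfix KB IDs from your endpoint
# inventory. KB IDs below are an illustrative subset, not the full list.
MS17_010_KBS = {"KB4012212", "KB4012215", "KB4012598"}  # example subset

def unpatched_hosts(inventory):
    """inventory: dict of hostname -> set of installed KB IDs."""
    return sorted(
        host for host, kbs in inventory.items()
        if not (MS17_010_KBS & set(kbs))  # no MS17-010 update present
    )

inventory = {
    "hr-laptop-01": {"KB4012212", "KB3125574"},
    "legacy-xp-kiosk": {"KB2979570"},   # no MS17-010 patch installed
    "build-server": {"KB4012598"},
}
print(unpatched_hosts(inventory))  # hosts needing urgent patching
```

In practice the inventory would come from your endpoint agent rather than a hand-built dictionary, but the diff logic is the same.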
If you’re curious about Ziften Zenith, our 20-minute demo includes a consultation with our specialists on how we can help your organization avoid the worst digital disaster to strike the internet in years.
Written by Roark Pollock and presented by Ziften CEO Charles Leaver
The Endpoint Security Purchaser’s Guide
The most common entry point for an advanced persistent threat or a breach is the endpoint. Endpoints are certainly the entry point for most ransomware and social engineering attacks. The use of endpoint protection tools has long been considered a best practice for securing endpoints. Unfortunately, those tools aren’t keeping up with today’s threat environment. Advanced threats, and truth be told, even less sophisticated threats, are often more than adequate for tricking the average employee into clicking something they shouldn’t. So organizations are evaluating a wide variety of next generation endpoint security (NGES) solutions.
With this in mind, here are ten tips to consider if you’re evaluating NGES solutions.
Tip 1: Start with the end first
Don’t let the tail wag the dog. A threat reduction strategy should always start by assessing problems and then looking for potential solutions to those problems. But all too often we get captivated by a “shiny” new technology (e.g., the latest silver bullet) and end up trying to shoehorn that technology into our environments without fully evaluating whether it solves an understood and well-defined problem. So what problems are you trying to solve?
– Is your current endpoint security tool failing to stop threats?
– Do you need better visibility into activities at the endpoint?
– Are compliance requirements mandating constant endpoint monitoring?
– Are you trying to reduce the time and costs of incident response?
Define the problems to address, and then you’ll have a measuring stick for success.
Tip 2: Know your audience. Who will be using the tool?
Understanding the problem that has to be solved is a crucial first step in understanding who owns the problem and who would (operationally) own the solution. Every functional team has its strengths, weaknesses, preferences, and biases. Define who will need to use the solution, and who else could benefit from its use. It could be:
– Security team,
– IT operations,
– The governance, risk & compliance (GRC) group,
– Helpdesk or end user support team,
– Or perhaps the server team, or a cloud operations team?
Tip 3: Know exactly what you mean by endpoint
Another commonly neglected early step in defining the problem is defining the endpoint. Yes, we all used to know what we meant when we said endpoint, but today endpoints come in many more varieties than before.
Sure, we want to protect desktops and laptops, but what about mobile devices (e.g., smartphones and tablets), virtual endpoints, cloud-based endpoints, or Internet of Things (IoT) devices? And what about your servers? All these devices, of course, come in numerous flavors, so platform support has to be addressed as well (e.g., Windows only, Mac OS X, Linux, etc.). Also consider support for endpoints when they are working remotely or offline. What are your needs, and what are “nice to haves”?
Tip 4: Start with a foundation of all-the-time visibility
Continuous visibility is a foundational capability for addressing a host of security and operational management problems on the endpoint. The old adage holds true: you can’t manage what you can’t see or measure. Further, you can’t secure what you can’t properly manage. So it should start with continuous, all-the-time visibility.
Visibility is foundational to security and management
And think about what visibility means. Enterprises need a single source of truth that, at a minimum, monitors, stores, and analyzes the following:
– System data – events, logs, hardware state, and file system details
– User data – activity logs and behavior patterns
– Application data – attributes of installed apps and use patterns
– Binary data – characteristics of installed binaries
– Process data – tracking details and statistics
– Network connectivity data – statistics and internal behavior of network activity on the host
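To make the six categories above concrete, here is a minimal sketch of what a single endpoint telemetry record might look like. The field names and layout are illustrative assumptions, not Ziften’s actual schema.

```python
# Sketch of a minimal endpoint telemetry record covering the six data
# categories listed above. Field names are hypothetical, not a real schema.
from dataclasses import dataclass, field

@dataclass
class EndpointRecord:
    hostname: str
    system_events: list = field(default_factory=list)   # logs, hardware state
    user_activity: list = field(default_factory=list)   # logons, behavior patterns
    applications: dict = field(default_factory=dict)    # app -> usage stats
    binaries: dict = field(default_factory=dict)        # path -> hash / signer
    processes: list = field(default_factory=list)       # pid, parent, cmdline
    connections: list = field(default_factory=list)     # 5-tuples + counters

rec = EndpointRecord(hostname="sales-laptop-07")
rec.processes.append({"pid": 4312, "image": "outlook.exe", "parent": 612})
rec.connections.append(("10.0.0.7", 52110, "203.0.113.9", 443, "tcp"))
print(len(rec.connections))  # one monitored connection so far
```

A real agent would stream these records continuously to the analytics back end rather than build them by hand.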
Tip 5: Plan where to store your visibility data
Endpoint visibility data can be stored and analyzed on premises, in the cloud, or in some combination of both. There are advantages to each. The right approach varies, and is generally driven by regulatory requirements, internal privacy policies, the endpoints being monitored, and overall cost considerations.
Know if your organization requires on-premises data retention
Know whether your organization allows cloud-based data retention and analysis or whether you are constrained to on-premises options only. At Ziften, 20-30% of our clients keep data on premises purely for regulatory reasons. However, if legally an option, the cloud can offer cost benefits (among others).
Tip 6: Know what is on your network
Understanding the problem you are trying to solve requires understanding the assets on the network. We have found that as many as 30% of the endpoints we initially discover on clients’ networks are unmanaged or unknown devices. This obviously creates a big blind spot. Minimizing this blind spot is a critical best practice. In fact, SANS Critical Security Controls 1 and 2 call for an inventory of authorized and unauthorized devices and software connected to your network. So look for NGES solutions that can fingerprint all connected devices, track software inventory and utilization, and perform ongoing continuous discovery.
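The blind-spot measurement itself is just a set difference: compare what discovery sees on the wire against the managed-asset inventory. The sketch below assumes device identifiers are MAC addresses (made up here); any stable device fingerprint would work the same way.

```python
# Sketch: surface unmanaged devices by diffing discovered devices
# against the managed-asset inventory. MAC addresses are fabricated.
discovered = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f", "aa:bb:cc:00:11:22"}
managed    = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

unmanaged = discovered - managed                       # the blind spot
blind_spot_pct = 100 * len(unmanaged) / len(discovered)
print(sorted(unmanaged), f"{blind_spot_pct:.0f}% unmanaged")
```

With continuous discovery this comparison runs on every scan, so new rogue devices show up as soon as they join the network.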
Tip 7: Know where you are exposed
After determining which devices you need to watch, you need to make sure they are running in up-to-date configurations. SANS Critical Security Control 3 recommends secure configuration monitoring for laptops, workstations, and servers. SANS Critical Security Control 4 recommends continuous vulnerability assessment and remediation for these devices. So look for NGES solutions that provide continuous monitoring of the state or posture of each device, and it’s even better if they can help enforce that posture.
Also look for solutions that deliver continuous vulnerability assessment and remediation.
Keeping your overall endpoint environment hardened and free of critical vulnerabilities prevents a substantial number of security incidents and removes a lot of back-end pressure on the IT and security operations teams.
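Posture monitoring of this kind boils down to diffing each device’s reported configuration against a secure baseline. The sketch below shows the idea; the setting names and values are hypothetical examples, not a real benchmark.

```python
# Sketch: compare a device's reported configuration against a secure
# baseline (in the spirit of SANS Control 3). Keys are illustrative.
BASELINE = {"firewall": "on", "smbv1": "disabled", "autorun": "disabled"}

def posture_drift(device_config):
    """Return the settings that deviate from the secure baseline."""
    return {k: device_config.get(k) for k, v in BASELINE.items()
            if device_config.get(k) != v}

cfg = {"firewall": "on", "smbv1": "enabled", "autorun": "disabled"}
print(posture_drift(cfg))  # -> {'smbv1': 'enabled'}
```

An enforcement-capable tool would go one step further and push the drifted settings back to the baseline values automatically.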
Tip 8: Cultivate continuous detection and response
A key end goal for many NGES solutions is supporting continuous device state monitoring to enable effective threat or incident response. SANS Critical Security Control 19 recommends robust incident response and management as a best practice.
Look for NGES solutions that offer all-the-time or continuous threat detection, leveraging a network of global threat intelligence and multiple detection techniques (e.g., signature, behavioral, machine learning, etc.). And look for incident response solutions that help prioritize identified threats and/or issues and provide workflow with contextual system, application, user, and network data. This can help automate the appropriate response or next steps. Finally, understand all the response actions each solution supports, and look for a solution that provides remote access that is as close as possible to “sitting at the endpoint keyboard.”
Tip 9: Consider forensics data collection
In addition to incident response, organizations need to be prepared to address the need for forensic or historical data analysis. SANS Critical Security Control 6 recommends the maintenance, monitoring, and analysis of all audit logs. Forensic analysis can take many forms, but a foundation of historical endpoint monitoring data will be essential to any investigation. So look for solutions that maintain historical data that enables:
– Tracing lateral threat movement through the network over time,
– Pinpointing data exfiltration attempts,
– Identifying the source of breaches, and
– Determining appropriate remediation actions.
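The first of those forensic tasks, tracing lateral movement, is a good example of why retained connection history matters. A minimal sketch, assuming historical records of the form (source host, destination host, timestamp): walk outward from the known-compromised host, only following connections made after that host could have been reached.

```python
# Sketch: trace possible lateral movement from a known-compromised host
# using retained historical connection records (src, dst, timestamp).
# Only follows connections made at or after the host was first reached.
from collections import deque

def trace_spread(connections, patient_zero, t0):
    reached = {patient_zero: t0}          # host -> earliest plausible time
    queue = deque([patient_zero])
    while queue:
        host = queue.popleft()
        for src, dst, ts in connections:
            if src == host and ts >= reached[host] and dst not in reached:
                reached[dst] = ts
                queue.append(dst)
    return reached

conns = [("A", "B", 10), ("B", "C", 20), ("C", "A", 5), ("D", "B", 1)]
print(trace_spread(conns, "A", 0))  # C->A at t=5 is ignored: too early
```

Real investigations layer process and user context on top of this graph walk, but without the historical connection data there is nothing to walk.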
Tip 10: Tear down the walls
IBM’s security team, which supports an impressive ecosystem of security partners, estimates that the average enterprise has 135 security tools in place and is working with 40 security vendors. IBM customers certainly skew toward large enterprises, but it’s a common refrain (complaint) from organizations of all sizes that security products don’t integrate well enough.
And the complaint is not just that security solutions don’t play well with other security solutions, but also that they don’t always integrate well with system management, patch management, CMDB, NetFlow analytics, ticketing systems, and orchestration tools. Organizations have to consider these (and other) integration points, along with the vendor’s willingness to share raw data, not just metadata, through an API.
Bonus Tip 11: Plan for customizations
Here’s a bonus tip. Assume that you’ll want to customize that shiny new NGES solution soon after you get it. No solution will meet all of your requirements right out of the box, in default configurations. Find out how the solution supports:
– Custom data collection,
– Alerting and reporting with custom data,
– Custom scripting, or
– IFTTT (if this, then that) functionality.
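To show what IFTTT-style customization means in practice, here is a toy rule engine that evaluates (condition, action) pairs against endpoint events. The event fields, rules, and response strings are all hypothetical.

```python
# Sketch of IFTTT-style customization: simple (condition, action) rules
# evaluated against endpoint events. Fields and actions are made up.
def make_engine(rules):
    def handle(event):
        # Fire the action of every rule whose condition matches the event.
        return [action(event) for cond, action in rules if cond(event)]
    return handle

rules = [
    (lambda e: e.get("process") == "powershell.exe"
               and e.get("parent") == "winword.exe",
     lambda e: f"alert: Office spawned PowerShell on {e['host']}"),
    (lambda e: e.get("port") == 445 and e.get("direction") == "inbound",
     lambda e: f"ticket: inbound SMB to {e['host']}"),
]
handle = make_engine(rules)
print(handle({"host": "fin-pc-3", "process": "powershell.exe",
              "parent": "winword.exe"}))
```

A production tool would persist rules, throttle noisy ones, and route actions to ticketing or orchestration systems, but the trigger-and-respond shape is the same.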
You know you’ll want new paint or new wheels on that NGES solution soon, so make sure it will support your future customization projects easily enough.
Look for support for simple customizations in your NGES solution
Follow most of these tips and you’ll undoubtedly avoid many of the common mistakes that plague others in their evaluations of NGES solutions.
Do you want to manage and protect your endpoints, your data center, your network, and the cloud? Ziften has the right solution for you. We collect data, and we allow you to correlate and use that data to make decisions, and to stay in control of your enterprise.
The data that we receive from everyone on the network can make a real-world difference. Consider the inference that the U.S. elections in 2016 were influenced by hackers in another country. If that’s the case, hackers can do almost anything, and the idea that we’ll accept that as the status quo is simply ridiculous.
At Ziften, we believe the best way to fight those threats is with greater visibility than you have ever had. That visibility crosses the whole enterprise and connects all the major players together. On the back end, that’s real and virtual servers in the cloud and in the data center. That’s infrastructure, applications, and containers. On the other side, it’s laptops and desktops, no matter where and how they are connected.
End to end – that’s the thinking behind everything at Ziften. From endpoint to cloud, all the way from a browser to a DNS server. We tie it all together, with all the other components, to offer your business a complete solution.
We also capture and store real-time data for up to 12 months to let you know what’s happening on the network right now, and to provide historical trend analysis and warnings if something changes.
That lets you spot IT faults and security problems immediately, and also ferret out their root causes by looking back in time to uncover where a breach or fault may have first occurred. Active forensics are an absolute necessity in this business: after all, where a fault or breach triggered an alarm may not be where the problem started – or where a hacker is operating.
Ziften provides your security and IT teams with the visibility to understand your current security posture and identify where improvements are needed. Non-compliant endpoints? They will be found. Rogue devices? They will be discovered. Off-network penetration? It will be detected. Obsolete firmware? Unpatched applications? All discovered. We’ll not just help you find the problem, we’ll help you fix it, and make sure it stays fixed.
End-to-end IT and security management. Real-time and historical active forensics. In the cloud, offline, and onsite. Incident detection, containment, and response. We’ve got it all covered. That’s what makes Ziften so much better.
Written by Roark Pollock and Presented by Ziften CEO Charles Leaver
According to Gartner, the public cloud services market exceeded $208 billion last year (2016). This represented about a 17% increase year over year. Pretty good considering the ongoing concerns most cloud customers still have regarding data security. Another particularly intriguing Gartner finding is the common practice by cloud customers of contracting services with multiple public cloud providers.
According to Gartner, “most businesses are currently utilizing a combination of cloud services from various cloud service providers.” While the business rationale for using multiple vendors is sound (e.g., avoiding vendor lock-in), the practice does create additional complexity in monitoring activity across an organization’s increasingly fragmented IT landscape.
While some providers support better visibility than others (for example, AWS CloudTrail can monitor API calls across the AWS infrastructure), organizations need to understand and address the visibility challenges associated with moving to the cloud regardless of the cloud provider or providers they work with.
Unfortunately, the ability to track application and user activity, and networking communications, from each VM or endpoint in the cloud is limited.
Regardless of where computing resources reside, companies must answer the question, “Which users, machines, and applications are communicating with each other?” Organizations need visibility across the infrastructure so that they can:
– Quickly identify and prioritize problems,
– Speed root cause analysis and identification,
– Lower the mean time to resolve issues for end users, and
– Quickly identify and eliminate security threats, reducing overall dwell times.
Conversely, poor visibility, or poor access to visibility data, can decrease the effectiveness of existing management and security tools.
Companies that are used to the maturity, ease, and relatively low cost of monitoring physical data centers are apt to be disappointed with their public cloud options.
What has been missing is a simple, ubiquitous, and mature solution like NetFlow for public cloud infrastructure.
NetFlow, of course, has had twenty years or so to become a de facto standard for network visibility. A typical deployment involves the monitoring of traffic and aggregation of flows at network choke points, the collection and storage of flow data from multiple collection points, and the analysis of that flow data.
Flows include a basic set of source and destination IP addresses, plus port and protocol information, typically gathered from a router or switch. NetFlow data is relatively inexpensive and easy to collect, provides nearly ubiquitous network visibility, and enables actionable analysis for both network monitoring and performance management applications.
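The core of flow generation is simply aggregating packets by the classic 5-tuple. A minimal sketch, with fabricated addresses and sizes:

```python
# Sketch: aggregate packet samples into NetFlow-style flow records keyed
# by the 5-tuple (src IP, dst IP, src port, dst port, protocol).
from collections import defaultdict

def aggregate(packets):
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for src, dst, sport, dport, proto, size in packets:
        f = flows[(src, dst, sport, dport, proto)]
        f["packets"] += 1
        f["bytes"] += size
    return dict(flows)

pkts = [
    ("10.0.0.5", "93.184.216.34", 51000, 443, "tcp", 1500),
    ("10.0.0.5", "93.184.216.34", 51000, 443, "tcp", 900),
]
flows = aggregate(pkts)
print(flows[("10.0.0.5", "93.184.216.34", 51000, 443, "tcp")])
```

Real exporters also track timestamps, TCP flags, and flow expiry, but the counters-per-5-tuple shape is what NetFlow analysis tools consume.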
Most IT staffs, especially networking and some security teams, are very comfortable with the technology.
But NetFlow was designed to solve what has become a rather limited problem, in the sense that it only gathers network data, and does so at a limited number of possible locations.
To make better use of NetFlow, two key changes are necessary.
NetFlow to the edge: First, we need to expand the practical deployment scenarios for NetFlow. Instead of only collecting NetFlow at network choke points, let’s broaden flow collection to the network edge (cloud, servers, and clients). This would considerably expand the overall view that any NetFlow analytics provide.
It would allow organizations to augment and leverage existing NetFlow analytics tools to eliminate the ever-increasing blind spot of visibility into public cloud activity.
Rich, contextual NetFlow: Second, we need to use NetFlow for more than basic network visibility. Instead, let’s use an extended version of NetFlow that includes data on the user, device, application, and binary responsible for each monitored network connection. That would allow us to quickly tie every network connection back to its source.
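Conceptually, this enrichment is a join between a bare flow record and the endpoint’s process table. The sketch below illustrates the idea; all names, fields, and the lookup key are hypothetical.

```python
# Sketch of "rich, contextual NetFlow": join a bare flow record with the
# endpoint's process table so each connection is attributed to a user,
# application, and binary. All names here are hypothetical.
def enrich(flow, process_table):
    """flow: (src_ip, src_port, dst_ip, dst_port); process_table maps
    (local_ip, local_port) -> process metadata from the endpoint agent."""
    meta = process_table.get((flow[0], flow[1]), {})
    return {**dict(zip(("src_ip", "src_port", "dst_ip", "dst_port"), flow)),
            "user": meta.get("user"), "app": meta.get("app"),
            "binary": meta.get("binary")}

ptable = {("10.0.0.7", 52110): {"user": "alice", "app": "Chrome",
                                "binary": "/usr/bin/chrome"}}
print(enrich(("10.0.0.7", 52110, "203.0.113.9", 443), ptable)["user"])
```

Because the enriched record still carries the standard flow fields, it can flow into existing NetFlow analysis tools while adding the who-and-what context.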
In fact, these two changes to NetFlow are exactly what Ziften has accomplished with ZFlow. ZFlow provides an extended version of NetFlow that can be deployed at the network edge, including as part of a container or VM image, and the resulting data can be consumed and analyzed with existing NetFlow analysis tools. Beyond standard NetFlow / Internet Protocol Flow Information eXport (IPFIX) visibility into the network, ZFlow provides greater visibility with the addition of information on the user, device, application, and binary for each network connection.
Ultimately, this allows Ziften ZFlow to deliver end-to-end visibility between any two endpoints, physical or virtual, eliminating traditional blind spots like east-west traffic in data centers and enterprise cloud deployments.