Is continuous vulnerability scanning essential?
Are there other approaches to vulnerability management that do not involve continuous scanning?
As data increasingly moves from on-prem to the public cloud, we need a complete rethink of how we view and protect our critical databases. It is common for cloud databases to be spun up, data ingested, and the database taken down again very quickly. In this situation it's clear that continuous scanning, to keep your database inventory up to date and vulnerabilities remediated, is essential. An hourly, daily, weekly or monthly scan will not keep you updated on what's happening to your most precious resource... your data.

However, this can only be achieved with a product designed specifically for securing cloud databases on a continuous cycle. Trying to re-purpose an on-prem tool to handle cloud databases won't work. Ask an auditor if it's OK to punch a hole in your VPC so your current database security tool can assess your security posture! You can imagine the answer.

It's also worth remembering that the cloud infrastructure is essentially handled by the service providers (AWS, Microsoft, Google), so that's not where the problems will come from. The old days of keeping patches updated are largely gone with the move to the public cloud. It's far more likely that issues will appear on the customer's side. So continuous scanning for inventory changes, vulnerabilities, and misconfigurations is absolutely essential in my view.

If anyone is interested in more detail, we have written a short whitepaper describing the issues and solutions. You can find it on our website at www.secureclouddb.com
Yes, essential. You can start your program, for example, with "Internet Facing" assets first, "Stringent" second, then "Baseline", and finally "Workstation".
If you have a BCP (Business Continuity Program), another approach is to start with VBF (Vital Business Function) assets, or to shape your Vuln Mgmt Program around "Quick Win" goals.
There are several approaches: based on (critical) products, on teams (agile-like), on critical & high vulns, on high business risks, and others.

Silver tip: the most important advice is "start small and grow steadily".

Golden tip: high-frequency scanning is useless if your team doesn't have matching remediation speed and a continuous flow. You will just generate useless reports (and sometimes cause friction with your operational/execution teams).
PS: printers, access points, conf-call devices, corporate cell phones, notebooks, APIs and several other types of CMDB components need to be part of the VMP too (when your maturity permits it).
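The tiered roll-out described above (internet-facing first, workstations last) can be sketched as a simple sort over an asset inventory. All tier names follow the post; the asset records themselves are invented for illustration:

```python
# Hypothetical sketch: phase a vulnerability management program by asset tier.
# Lower priority number = scanned and remediated first.
TIER_PRIORITY = {"Internet Facing": 0, "Stringent": 1, "Baseline": 2, "Workstation": 3}

assets = [
    {"name": "hr-laptop-042", "tier": "Workstation"},
    {"name": "public-web-01", "tier": "Internet Facing"},
    {"name": "erp-db-01",     "tier": "Stringent"},
    {"name": "intranet-wiki", "tier": "Baseline"},
]

# Internet-facing assets come first in the scan/remediation queue.
scan_order = sorted(assets, key=lambda a: TIER_PRIORITY[a["tier"]])

for asset in scan_order:
    print(asset["tier"], "->", asset["name"])
```

The same pattern extends naturally to the BCP/VBF approach: just swap the tier table for one keyed on business-function criticality.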
I believe vulnerability scanning is usually a scheduled activity where you vary the frequency of the scans according to your needs and the performance impact on the target resources. Regular scans ensure you discover any new vulnerabilities while measuring your progress in remediating previously highlighted weaknesses. Continuous scans may involve unauthenticated scans; the alternative is authenticated scans/probes, which produce more accurate data and fewer false positives.
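The "frequency according to need" idea above can be expressed as a small policy table. The tiers, intervals, and authentication choices here are all assumptions for illustration, not a recommended baseline:

```python
# Minimal sketch: vary scan frequency by asset criticality, and prefer
# authenticated (credentialed) scans where accuracy matters most.
SCAN_POLICY = {
    # criticality: (interval_hours, authenticated)
    "critical": (4,   True),   # near-continuous, credentialed for accuracy
    "high":     (24,  True),
    "medium":   (168, True),   # weekly
    "low":      (720, False),  # monthly, unauthenticated sweep
}

def scan_due(criticality: str, hours_since_last_scan: float) -> bool:
    """Return True if the asset is due for a scan under the policy."""
    interval_hours, _authenticated = SCAN_POLICY[criticality]
    return hours_since_last_scan >= interval_hours
```

A scheduler would simply loop over the inventory and queue any asset for which `scan_due(...)` returns True.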
Continuous analysis guarantees that, at a minimum, good practices are respected. Before taking pleasure in stopping known and unknown threats, I think we should first be able to guarantee that at least what is known is stopped...
Crossing different types of automatic scanners is interesting. Human intervention is a plus, and it should not be one or the other but both. Knowing your attack surface is a good start... I think we are all saying the same thing :)
Vulnerability management consists of multiple phases, one of which is vulnerability posture acquisition (basically, scanning for vulnerabilities). There are clear advantages to obtaining vulnerability information very frequently (i.e. almost continuously), and this is best done with an agent-based solution.
That said, there is no point in continuous scanning if the process cannot handle the data at the same cadence. For example, triage should be automated so that detected vulnerabilities are categorised immediately and matched against the VM policy to derive the action needed.
Our best practice is to process vulnerabilities in our platform, which can be configured with very granular policies. The key, however, is not to overload the IT organisation with requests to fix every vulnerability. Use your trust capital carefully and only push for emergency fixes when the risk warrants it.
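The automated-triage step described above can be sketched as a small policy function. The thresholds, action names, and vulnerability records here are illustrative assumptions, not a real platform's policy:

```python
# Hedged sketch of automated triage: categorise each finding against a
# simple, assumed VM policy so the process keeps pace with continuous
# scanning. Emergency fixes are reserved for cases where risk warrants it.

def triage(vuln: dict, internet_facing: bool) -> str:
    """Map a finding to an action under an illustrative policy."""
    cvss = vuln["cvss"]
    if cvss >= 9.0 and internet_facing:
        return "emergency-fix"   # spend trust capital sparingly
    if cvss >= 7.0:
        return "scheduled-fix"   # next regular patch window
    if cvss >= 4.0:
        return "backlog"
    return "accept-risk"

# Invented example findings (IDs are placeholders, not real CVEs).
findings = [
    {"id": "VULN-A", "cvss": 9.8},
    {"id": "VULN-B", "cvss": 7.5},
    {"id": "VULN-C", "cvss": 3.1},
]
actions = {f["id"]: triage(f, internet_facing=True) for f in findings}
```

Running this immediately after each scan keeps categorisation at the same cadence as detection, which is the point the post makes.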
Because the technology landscape is constantly changing, the threat landscape is also constantly changing, and we as humans can't be perfect, continuous vulnerability scanning is a must.
Hi security professionals,
As most of you have probably heard, GoDaddy was hacked again a few days ago.
Based on what is already known, what has been done wrong and what can be done better?
Share your thoughts!
In the past, vulnerability assessment was the primary approach used to detect cyber threats.
Risk-based vulnerability management has become increasingly popular.
How do each of these approaches work, and which do you think is more effective?