Top 5 Reasons Why a Security Vulnerability Management Program is Bound to Fail
Implementing and executing an enterprise security vulnerability management program is hard. Success is difficult to achieve, and the program can be derailed at any step of the process. The top 5 reasons why a security vulnerability management program is bound to fail are described below. Addressing these early will help ensure a more successful program.
1) Not having a complete, comprehensive enterprise inventory
If the asset inventory process used to establish and manage IT assets is weak, confidence that the vulnerability management program addresses all the IT assets in the enterprise will be low. For example, when the list of networks is incomplete, the vulnerability scanner will not have full coverage. This #FAIL leaves gaps in knowledge and potentially unaddressed security vulnerabilities with a direct network security impact. For a large global enterprise, the full list of networks is sometimes difficult to define completely. A network discovery tool like HP OpenView Network Node Manager, SolarWinds Network Engineer’s Toolkit or Lumeta IPsonar can provide a comprehensive list of networks within the enterprise. If you are good with FLOSS and have a do-it-yourself attitude, Nmap or other free network discovery tools can be used; these tools can also be found on the BackTrack Network Mapping menu.
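For the do-it-yourself route, a minimal sketch of that discovery step follows. It assumes nmap is installed, that you are authorized to scan the target ranges, and that the CIDR blocks shown are placeholders for your own candidate networks:

```python
# Minimal network discovery sketch: sweep candidate ranges with nmap's
# ping scan and collect responding hosts. Assumes nmap is installed and
# that you are authorized to scan these ranges; the CIDR blocks below
# are hypothetical placeholders.
import subprocess

CANDIDATE_RANGES = ["10.0.0.0/24", "192.168.1.0/24"]  # hypothetical ranges

def discover_hosts(cidr):
    """Run an nmap ping scan (-sn) and parse its greppable output (-oG -)."""
    result = subprocess.run(
        ["nmap", "-sn", "-oG", "-", cidr],
        capture_output=True, text=True, check=True,
    )
    hosts = []
    for line in result.stdout.splitlines():
        # Greppable output lists live hosts as: "Host: 10.0.0.1 (name)  Status: Up"
        if line.startswith("Host:") and "Status: Up" in line:
            hosts.append(line.split()[1])
    return hosts

if __name__ == "__main__":
    for cidr in CANDIDATE_RANGES:
        live = discover_hosts(cidr)
        print(f"{cidr}: {len(live)} live hosts")
        for ip in live:
            print(f"  {ip}")
```

Running this across every candidate range and comparing the results against the official network list is a quick way to spot coverage gaps before the vulnerability scanner ever runs.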
When the list of operating systems installed in the enterprise is incomplete, the depth of the vulnerability scanner checks may be limited. It is recommended to configure the vulnerability scanner to check only for vulnerabilities in the known installed operating systems, to reduce false positives and decrease scan time. However, when certain operating systems are excluded from the configuration but actually exist, full coverage of potential vulnerabilities will not occur. This situation is frequently encountered when IT management is decentralized and networks are configured to allow anything to be plugged in. It is very common to encounter non-standard “appliances” such as printers, environmental monitors, and conference room scheduling displays that actually run embedded operating systems that are frequently vulnerable and rarely patched. One potential solution for these un-owned devices is to put them into a segmented network jail with good network perimeter security to monitor those devices and protect the rest of the network.
On the application side, not documenting the comprehensive list of applications and versions will delay the patching and remediation process. Instead of having a proactive threat and vulnerability intelligence process that matches published vulnerabilities with installed applications, there will be a reliance on the vulnerability scanner to catch vulnerabilities after the fact. This can leave systems and applications vulnerable for much longer, depending on how quickly the scanner vendor incorporates the new checks and how frequently the scans are run.
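A minimal sketch of that proactive matching step is below. The inventory and advisory records are hypothetical stand-ins; in practice they would come from your CMDB and a published source such as the NVD feed:

```python
# Proactive vulnerability-to-inventory matching sketch. The inventory and
# advisory records below are hypothetical; real data would come from your
# CMDB and a published vulnerability feed. Flags installed applications
# whose version falls at or below the highest version an advisory affects.

# app name -> installed version, as (major, minor) tuples for easy comparison
inventory = {
    "exampleapp": (9, 2),
    "otherapp": (3, 1),
}

# advisories: (app name, highest affected version, advisory id) -- hypothetical IDs
advisories = [
    ("exampleapp", (9, 4), "CVE-2012-0001"),
    ("missingapp", (1, 0), "CVE-2012-0002"),
]

def affected(inventory, advisories):
    """Yield (app, installed version, advisory id) for each matching advisory."""
    for app, max_affected, advisory_id in advisories:
        installed = inventory.get(app)
        if installed is not None and installed <= max_affected:
            yield app, installed, advisory_id

for app, installed, advisory_id in affected(inventory, advisories):
    print(f"{advisory_id}: {app} {'.'.join(map(str, installed))} needs review")
```

The point is that with a trustworthy application inventory, this comparison can run the day an advisory is published, instead of waiting for the scanner vendor and the next scan window.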
2) Not having well-defined system and applications owners
Running a vulnerability scanner and identifying vulnerabilities is relatively easy. By contrast, it is hard to get things patched or remediated if you can’t find the owner. This #FAIL is really an indicator of the maturity of the IT organization. Less mature IT organizations will not have a well-defined process for full lifecycle IT asset management. A key component of IT asset management is the definition of owners. Mature IT organizations will have clear documentation of the business owner, application owner and system owner for each IT asset. Once the owner is known, it is simple to direct the vulnerability management workflow and create tickets for remediation.
In my experience, not knowing who the owner is can also happen in a frequent-merger environment where IT staff is rapidly consolidated. If you can’t find an owner for a system, one course of action is to perform a controlled disconnect of the system. Of course, prior to disconnecting the system, you will want to exhaust all methods for determining the owner, including using network sniffing tools to identify network connections that might indicate its purpose or users. If users complain about a missing system or application, then it might be important enough to assign an owner from the existing IT staff and start testing the remediation recommendations. If there are no complaints, then turning it off saves the company money and eliminates the threats posed by abandoned technology. For third party applications, the easy choice for remediation if no application owner steps forward is to either patch the application or remove it.
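As a minimal sketch of that routing step, assuming a hypothetical owner registry and scanner export (in practice these would map to your CMDB and ticketing system):

```python
# Sketch: route scanner findings to owners and flag orphans. The owner
# registry and finding records are hypothetical stand-ins for a CMDB
# lookup and a scanner export.
owners = {
    "web01": "app-team-a",
    "db01": "dba-team",
}

findings = [
    {"host": "web01", "vuln": "missing patch KB123456"},  # hypothetical findings
    {"host": "legacy07", "vuln": "outdated OpenSSL"},
]

tickets, orphans = [], []
for finding in findings:
    owner = owners.get(finding["host"])
    if owner:
        tickets.append({"assignee": owner, **finding})
    else:
        # No owner on record: candidate for the controlled-disconnect process
        orphans.append(finding["host"])

print("tickets to create:", tickets)
print("no owner found (investigate, then consider controlled disconnect):", orphans)
```

The orphan list is the useful by-product here: it turns “we can’t find the owner” from an excuse into a tracked queue of systems awaiting investigation or disconnect.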
3) Ignoring vendor vulnerability announcements
Many times the approach of “if it ain’t broke, don’t fix it” is reasonable, but that really depends on knowing whether it is broke. This #FAIL turns the enterprise vulnerability management program into a reactive finger-pointing exercise rather than a proactive vulnerability identification, threat removal and risk reduction process that benefits the enterprise. There are a few causes for this situation, and most are intentional. One cause is looking only where you want to see, perhaps because you have an automated patch management system in place for that area. I’ve had IT managers at global companies tell me the company only has Microsoft technology implemented. While OS identification by a vulnerability scanner is not a precise science, it should be able to pick up all the “hidden” IT assets that too-narrowly-focused IT managers don’t want to see. If you can’t trust your IT asset inventory, then the first few enterprise-wide vulnerability scans can set the technology baseline used to define the systems and application vendors that should be monitored for vulnerability announcements.
Another cause for ignoring vendor vulnerability announcements is the practice of bundling everything together in a quarterly set of patches for dozens of applications, in what is called the quarterly critical patch update. This presents a massive jumble of patches that overwhelms the application team, to the point that they choose to ignore the bundle rather than spend the time to parse the full list of vulnerabilities, determine what is applicable to their current installation, create their remediation actions and start the testing process.
4) Ignoring vulnerability scanner results
Even more common than ignoring vulnerability announcements is ignoring vulnerability scanner reports. This #FAIL has many potential causes, but they mostly revolve around a lack of resources and an absence of consequences. When an IT organization has an ingrained culture of “install it and don’t touch it unless something breaks,” it is hard to change that culture into a continuous management process with frequent periodic maintenance and patching. Some of the excuses heard when presenting the list of vulnerabilities and missing patches are:
- But it’s too hard to fix all those items.
- I can’t take the system down.
- I don’t have the manpower to investigate and test all those changes.
- We’ve managed so far without patching.
In the past, unless the CIO took a personal interest in the maintenance and patching of IT assets (perhaps as a result of not patching and having the entire network go down because of Code Red, Sasser or Blaster), there were usually no consequences for not remediating vulnerabilities. Today, regulatory compliance with PCI DSS or HIPAA/HITECH means having a plan to address vulnerabilities, and the consequences of ignoring vulnerabilities can be severe. This is an easy item for a PCI QSA to fail a company on.
5) Not mining the data for interesting things
“There’s gold in them thar hills.” What this translates to is that there is great information buried in the scan results. Of course, the standard reports can identify system and application vulnerabilities, but the more you dig into the data, the more gleaming nuggets you can find. This #FAIL belongs to organizations that don’t devote the effort to looking closely at the data.
What might be interesting in that mountain of data? How about the following (a short data-mining sketch follows the list):
- What systems are running old versions of operating systems or applications? Do you have Windows NT systems still running on the network? Are there systems still running Adobe Reader version 4, 5, 6, 7 or 8? Why?
- What systems are identified to be outside the norm? This will indicate devices, appliances, and non-approved systems attached to your network. Does the official IT asset inventory know about these?
- How many systems did not allow a credentialed scan? Why?
- How closely does the application inventory identified by the vulnerability scanner match the official list? Depending on your organization’s policy and controls regarding end users’ ability to install software, you may find old or non-standard versions of third party software.
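As a minimal sketch of this kind of mining, a few lines of Python can answer several of these questions at once. The file name and column layout here are hypothetical stand-ins for a flattened CSV export; adapt them to whatever your scanner actually produces:

```python
# Scan-data mining sketch: answer a few of the questions above from a
# flattened CSV export. The file name and column layout are hypothetical;
# adapt to your scanner's actual export format.
import csv
from collections import Counter

OLD_OS_MARKERS = ("Windows NT", "Windows 2000")  # tune to your environment

os_counts = Counter()
old_os_hosts, uncredentialed = [], []

with open("scan_export.csv", newline="") as f:
    for row in csv.DictReader(f):  # expected columns: ip, os, credentialed
        os_counts[row["os"]] += 1
        if any(marker in row["os"] for marker in OLD_OS_MARKERS):
            old_os_hosts.append(row["ip"])
        if row["credentialed"].lower() != "true":
            uncredentialed.append(row["ip"])

print("OS breakdown:", os_counts.most_common())
print("Hosts on old operating systems:", old_os_hosts)
print("Hosts that refused a credentialed scan:", uncredentialed)
```

Even this simple pass surfaces the ancient operating systems, the outliers the official inventory doesn’t know about, and the hosts that quietly refused a credentialed scan.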
Stay tuned for SVM Part 3 which discusses some quick wins and some potential gotchas.