How To Understand a Vulnerability Scan Report – Part 1 – The IP Address

Part 1 of a multi-part series explaining vulnerability scan data and the nuances of how that data should be used.


  • IP Address

    • This is (of course) the network address that the vulnerability was found on.
    • The IP address is the one piece of data you can count on to always be in your vulnerability scan data. A vulnerability scanner must always have an IP address to probe for vulnerabilities, so this is the key starting point for any vulnerability scan data.
    • Some of your customers or application/infrastructure developers may not understand networking very well, so it is a good idea to supply the DNS name and/or host name to them as well. I will cover those in a later post.
    • One host (server, machine, appliance, whatever you want to call it) may have multiple IP addresses, and correcting a vulnerability may resolve the finding on multiple IP addresses. Some common uses of multiple network adapters are listed below:
      • Main Adapter
      • Backup Network Adapter
      • HeartBeat/Cluster Network Adapter
      • Management Card (This is often an embedded device on its own and not managed by the host OS)
      • Other (Redundant adapter, Crossover cable adapter, Some servers may have adapters on multiple networks for various reasons)
    • One good approach for vulnerability footprint reduction is to ask the server and application owners if their services and/or apps need to be available on all the IP addresses on the system where the service is found running.
      • For example, Apache may be found running on all the IP addresses on the server. It usually does not need to be on all of them. (See the sketch after this list.)
    • The IP address listed may actually be the Virtual IP (VIP) that points to a particular port on a webserver. (ports will be covered later)
      • One host/webserver may have multiple websites running on it. The VIP that you see in the vulnerability scan may be redirected by a network load balancer to a particular listening port on one of the webserver IP addresses. This means there can easily be a many-to-one relationship of IP addresses to one server or group of servers.
      • In this case you will need information about the load balancer configurations in your environment to determine which webserver port/instance and/or server may have the vulnerability in question. This information should show the VIP and which port on another IP address gets the traffic from that VIP. The VIP is often facing a non-trusted network like the Internet, or is simply used for load balancing and/or to allow a webserver to be used more efficiently.
    • Other – The IP address can often tell you other information. Based on your network design, it could tell you physical location, system type, network zone (like a DMZ), etc. It is a good idea to understand how your company provisions and allocates IP addresses and networks. This information can often allow you to understand more about an IP address than what the vulnerability scanner tells you.
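If you want to see for yourself where a service answers, a minimal sketch like the one below can probe each of a host's addresses on a given port. The address list and port are placeholders you would substitute with your own; for the Apache example above, the usual fix is narrowing the Listen directive to a specific IP rather than all of them.

```python
import socket

# Placeholder values: substitute the host's actual IP addresses and service port.
addresses = ["192.0.2.10", "192.0.2.11", "10.0.0.5"]
port = 80

for ip in addresses:
    try:
        # A successful TCP connect means something is listening on that address.
        with socket.create_connection((ip, port), timeout=2):
            print(f"{ip}:{port} answers - does the service need to be here?")
    except OSError:
        print(f"{ip}:{port} does not answer")
```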

Data Based Security Site

I recently made a small contribution to this site.

If you are in the IT Security field and ever need to analyze CVE data or search for security issues on a certain product or vendor, this is a great site to use. It is one of the few sites purely focused on security data and details that can be used to do objective security research.

I would encourage anyone else out there who finds this site useful to make a contribution. If your company could use their data or services, consider becoming a corporate sponsor.

We need more of these types of sites that are working to improve transparency and availability of IT Security related data.

Bird Flu and Security Bugs – Research Gone Awry?

Several times in the news lately I’ve heard about the bird flu research controversy. Each time I hear about this controversy, I want to compare it to the recent controversy around SCADA IT security research being published directly to security testing tool companies. I don’t think it is a stretch to compare these two topics. While there are some obvious differences, many of the arguments are similar.


To Publish or Not to Publish?

One of the main concerns around the bird flu research is whether the results and methodology of the research should be fully published.

The premise used to justify publishing vulnerabilities in the IT security industry is that exposing IT security vulnerabilities, and making them easier to exploit, forces companies to patch those vulnerabilities and create more secure software and systems over the long run. I believe this premise is true. Most companies would not enhance the security measures in their code or systems unless necessary. The cost of developing software is mainly in the initial development; anything afterwards is maintenance cost, which is typically a cost center and not a revenue generator. So any company will attempt to keep their maintenance (code patching) costs as low as possible.

This same premise has been used to defend publishing the bird flu research. A nation that wanted to do its own malicious bird flu research could do so regardless, so we should understand and be prepared for that scenario. Just as we should improve software and systems, we should improve our ability to identify and respond to a malicious strain of bird flu.


Does the research help or hurt?

The concern around the bird flu research is that malicious actors could use the information to create some kind of biological weapon. The very same type of concern exists around IT security research. If you show a bad guy how to exploit a vulnerability on a system, they are more likely to use it, or it greatly reduces the time and effort needed to create their own exploit.

Whenever you reduce the effort (time, cost, risk) of exploiting a vulnerability or of developing a stronger virus, you initially increase the risk of people using that information for their own purposes.

So does it help or hurt? Initially it hurts. Publishing the vulnerability forces companies to dedicate resources to the analysis, development, and deployment of patches. After the initial pain, it helps the companies by ensuring their code and systems are more secure. The long-term effect that *should* happen is that development companies change their processes to ensure they develop code securely, consumer companies ensure they have enough resources to keep systems patched, and the whole cycle gradually becomes a less hectic, normal maintenance routine.

For bird flu research, it can help ensure that public administrations prepare and have plans in place for dangerous virus outbreaks.


What is the real question that needs to be answered?

Is the initial increase in risk caused by releasing research information worth the mid-term and long-term reward of improving the products or being more prepared for a lethal virus outbreak?

Unfortunately, I wasn’t able to find any real data to support whether security research disclosure truly helps improve security over the long run. I think that it does, and I think the IT Security industry believes it does; however, this seems largely to be “common wisdom” and not based on any hard facts. (Please correct me if I am wrong.)


Conflict of Interest?

There are some obvious conflicts of interest that create “grey” areas in IT security research. When a security researcher works closely with vulnerability testing companies to incorporate working exploits for the vulnerabilities they have found, instead of working with the companies that publish the affected software, it makes me question their motives. Also, if my company sells security testing software, then having a check for a vulnerability that no other company has, and/or that has no patch, is a competitive advantage. Is the security company truly concerned first with the security of their customers, or with their sales?

For virus or disease research I don’t see anything like this happening that I know of. I suppose a researcher could work directly with a pharmaceutical company, but the whole concept doesn’t apply very well to disease research and pharma development.

How to Resolve *Some* of these Questions

Security testers and security companies that deal with exploits and vulnerabilities should be very clear about what responsible disclosure guidelines, code of ethics, or methodology they follow when disclosing vulnerabilities (if any). Customers of security services or security testing software should ensure that they purchase from companies or researchers that align with their own code of ethics.

So what about bird flu research? Why can’t the National Academy of Sciences or the World Health Organization provide guidance around research that increases the virulence of a virus or disease? Research labs and universities should clearly define what guidance or methodology they follow for this type of research, and it should be a condition of disclosure when applying for funding and grants.


Why You Must Prioritize IT Vulnerability Risks

Why You Must Prioritize IT Vulnerability Risks – A common sense explanation.

  • Why should you prioritize the risks in your IT network?

  • Why can’t you just fix ALL the problems?

Unless you work in a company that has unlimited resources and you have absolute support at all levels for remediating the vulnerabilities in your environment, you MUST prioritize the issues that cause the most risk to your IT environment.


Analogy.. “The To-Do List”

Say your wife gives you a list of 150 things to get done on a Saturday afternoon. How many can you realistically get done? Maybe 5? Maybe 10 if the tasks are small.

If you have a large network, you likely have many possible vulnerabilities. Say you have a relatively small list of 300 security issues found from vulnerability scans and other security assessments and tests. Can you realistically expect all the teams that would own fixing those issues to drop everything they are doing and fix the “list” of issues you give them?

How much security remediation work can you really expect to accomplish? The answer to these types of questions depends more on how your organization functions than on any calculations or math. Every IT shop is fighting for resources to:

1) Implement customer projects.

2) Upgrade and/or modernize their own infrastructure.

3) Implement their own strategic initiatives.

4) Have a work/life balance.


Where does that leave working on tasks to fix issues that have been found through security testing?

The naive answer is to say that security should always be a top priority and the teams should figure out a way to get the work done. For those who work in the real world, it simply is not that easy.

Resources such as budget, hardware, and time are limited. Some IT shops are fighting to survive. If they have to stop business-driven projects for 3 months to fix security issues, their business customers may choose other options.

What is the answer?

The answer is to use risk analysis and risk management techniques to determine which vulnerabilities pose the highest risk to your IT environment. This is called using a “Risk Based Approach.” Simply put, it means fixing the riskiest things first. You would think this is common sense, but you would be wrong. There is often a reflexive response to any type of possible security issue: “just fix it.” If there are 5 issues, then just fix them. If there are 200 issues, then just fix them.
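As a toy illustration, a risk-based approach boils down to ranking findings by some agreed risk score and working from the top. The findings and the simple severity-times-criticality score below are made-up assumptions, not a standard model:

```python
# Made-up findings with a simple score (severity x asset criticality).
# Real risk models also weigh exposure, exploitability, data value, etc.
findings = [
    {"host": "web01", "issue": "Outdated OpenSSL", "severity": 9, "criticality": 3},
    {"host": "db02", "issue": "Weak database password", "severity": 7, "criticality": 3},
    {"host": "test05", "issue": "Directory listing enabled", "severity": 4, "criticality": 1},
]

def risk_score(finding):
    return finding["severity"] * finding["criticality"]

# Fix the riskiest things first.
for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):3d}  {f['host']:8s}{f['issue']}")
```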

The problem is that most decent-sized companies will have many possible issues. You simply cannot have a completely secure environment without making the environment unusable. I go back to the example of having a list of 150 tasks to complete in one day. It simply isn’t possible. However, could you get 5 done? Probably so. Could you get a small amount done on 20 tasks? Probably so.

So which one is better: getting 5 security issues completely resolved, or 20 issues partially completed, in a year? That needs to be a management decision based on good risk analysis of the issues.

Fixing security issues is an effort like any other.

The whole point of this post is to get you to understand that resolving security issues is no different from any other project or effort. No company or organization can implement every good idea. They must prioritize in order to get the best results from their efforts.

Resolving security issues is a work effort just like any other in an IT organization. The effort must be prioritized against all other efforts so that it can get the proper focus and funding. If you don’t focus on a few things, you get very little accomplished, and your efforts are spread thin.

Final Analogy… Pruning…

Every organization is like a rose bush or a grape vine. In order for nutrients to allow the main stems and fruit to truly mature and reach their full potential, you must prune the small branches and vines that use up the plant’s resources without adding any fruit or flowers. The small branches use energy and resources, and eventually will cause the plant to be a poor producer of fruit or flowers. Why? Because no focus was devoted to the things that mattered.

Final Point : To get things done, you must prioritize and be able to focus your energy and effort on what matters most.

NorthWest Arkansas ISSA Presentation

I’m giving a high-level presentation on the PCI-DSS requirements around vulnerability management and penetration testing at our April 5 ISSA meeting.

Most of the details will be in Q&A and discussion, so don’t expect a lot of deep content in the PowerPoint slides linked below.


The meetings are typically held on the first Tuesday of each month at Whole Hog Cafe, a great Memphis-style barbecue restaurant in Bentonville, Arkansas.


What is OpenDNS and What can it do for me?


OpenDNS is a Domain Name System (DNS) service that you can use as an alternative to the DNS system that your internet service provider offers.

For those not familiar with DNS, it can be summarized as the service on the Internet that takes the website address or server name you type in and translates it into something your computer and systems on the Internet can use to find your website or server.
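In code terms, that translation is the single step sketched below, using Python’s standard library (www.example.com is just a placeholder name):

```python
import socket

# DNS in a nutshell: translate a human-friendly name into an IP address.
print(socket.gethostbyname("www.example.com"))
```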

So why use OpenDNS?

Whether you know it or not, when you are hooked up to your cable modem or DSL line, your internet service provider (ISP) automatically tells your systems which DNS servers they should use. Is this a bad thing? No, but using OpenDNS can give you much more functionality than the DNS servers your ISP gives you.


What does OpenDNS do that my ISP’s DNS servers don’t do?

The OpenDNS servers offer many services that regular DNS servers do not.  Below is a list of the services that OpenDNS can provide.

  • Phishing & Botnet Protection
  • SmartCache
  • Web Content Filtering
  • Constant Updates
  • Whitelist/Blacklist Mode
  • Detailed Statistics
  • Typo Correction
  • Shortcuts

Isn’t there software I could install that does this?

Yes, but the problem with software is that it only works on each machine where you install it. The software must also be updated from time to time, and web filtering software installed on a computer can be bypassed if you really want to. By using DNS servers to provide this function, you don’t have to install or maintain any software on your computers, it doesn’t slow anything down, and it is much easier to maintain. Once you are using OpenDNS, it is maintenance-free.

Also, does your website filtering software run on your iPhone, Samsung tablet, Mac, or Linux machine? Probably not. But OpenDNS can provide that functionality at your home without having to install anything.


So how do I use OpenDNS?

Go to opendns.com and sign up for an account. Once you do, you can find information on how to configure your computers to start using OpenDNS. OpenDNS is an easy way to help restrict access to websites that are inappropriate for children and to protect your computers from bad websites overall. The alternatives require more work or more cost, and don’t typically provide any more features.
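If you are curious, you can also point a script directly at the OpenDNS resolvers to see them answer queries. This minimal sketch assumes a recent version of the third-party dnspython package (pip install dnspython):

```python
import dns.resolver

resolver = dns.resolver.Resolver()
# OpenDNS public resolver addresses.
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]

# Resolve a name through OpenDNS instead of your ISP's DNS servers.
for answer in resolver.resolve("www.example.com", "A"):
    print(answer)
```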


Unauthenticated vs Authenticated Vulnerability Scans and Testing

What is the difference between “Authenticated” and  “Unauthenticated” Scanning or Testing?

Unauthenticated =  No usernames and passwords are used in the scanning or testing.

  • This means if your website allows users to create a shopping cart tied to a user, the testing will not attempt to use a username and password to replicate a user’s usage of that shopping cart.
  • This type of testing is typically less intense because it will only find basic configuration issues or input and output validation errors that don’t involve the code paths handling user transactions like shopping carts.
  • Unauthenticated scanning and testing do not try username and password combinations to log on to your system.


Authenticated = The scanning or testing is able to use usernames and passwords to simulate a user being on that system or website. (A small code sketch of the difference follows the list below.)

  • Authenticated testing can be much more intense and has the possibility of causing impact to your website or system.
  • Authenticated testing will usually find more vulnerabilities than unauthenticated testing if the vulnerability scanner is given credentials to a system. This is simply because the scanner can get “inside” the system to see more of it and validate issues, instead of relying on the guesses that a scanner or tester must make without authentication.
  • Authenticated testing has much better code coverage on applications since it can simulate much more of the user based functionality like transactions.
  • Some authenticated scans can simulate “brute-force” style attacks, which could cause account lockouts depending on your system configurations.
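Here is the sketch mentioned above. It is only a toy illustration of the difference, using the third-party requests library; the URLs, form fields, and credentials are all placeholder assumptions, not any particular scanner’s behavior:

```python
import requests

base = "https://app.example.com"  # placeholder application

# Unauthenticated: the tester sees only what an anonymous visitor sees.
anon = requests.get(f"{base}/account/orders")
print("Unauthenticated:", anon.status_code)  # typically 401/403 or a login redirect

# Authenticated: log in first, then browse as a real user would.
session = requests.Session()
session.post(f"{base}/login", data={"user": "scanner", "password": "secret"})
authed = session.get(f"{base}/account/orders")
print("Authenticated:", authed.status_code)  # 200, reaching the transaction code paths
```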


Why should I care?

  • Authenticated testing is much more thorough and is often able to find more issues than unauthenticated testing. However, it is also more likely to cause issues on a system or application.
  • Since authenticated testing will often find more, you will spend more time parsing through data and trying to determine which findings are higher risk.
  • Finally, unauthenticated testing alone will not simulate targeted attacks on your application or system, and is therefore unable to find a wide range of possible issues.


Ask yourself these questions to decide what kind of testing or scanning you need.

  • What is the purpose of the scan or test? (A specific compliance requirement?)
  • Do my scanning or testing requirements give preference to authenticated or unauthenticated testing?
  • Do I want to simulate what a user on the system could do? (Go with Authenticated)
  • Do I want to start at the highest risk findings that any scanner or user on my network could find? (Go with unauthenticated)
  • Is this the first time the system or network has ever been scanned or tested? (Go with unauthenticated unless you have other requirements.)


So what should my approach be?

Using a risk-based approach, you could start with unauthenticated scanning and testing because it will typically find the highest-risk and most significant issues. Once you have the unauthenticated findings, you can gradually begin authenticated testing as you gain a good comfort level that it will not impact systems.

*Note* In large environments you may need to be wary of old printers and devices that may have old network stacks. You will typically only see scan-induced issues on legacy network appliances or devices like old network printers.

IronBee – Open Source Web Application Firewall


Qualys, Inc. recently announced IronBee, a new open source web application firewall project.

The project appears to be funded mainly by Qualys, but Akamai also appears to have some influence, based on the press release published on Feb 14, 2011.

This new project is led by some of the same folks who originally developed ModSecurity, but it appears to be more focused on widespread usability and a “cloud” or Software-as-a-Service design.

Why WAF?

Web Application Firewalls (WAFs) are not used nearly enough in the places where they could help block attacks against web application vulnerabilities.

When I have discussed the non-usage of WAFs with various folks who manage webservers, their answer was that WAFs added another layer of complexity they did not want to manage.

IronBee seems to be answering many of the issues folks have had with WAFs by offering:

  • Ease of implementation
  • Portability of rules
  • Flexibility of implementation

There are many reasons to use a WAF, and projects such as IronBee are reducing the reasons not to use one.
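To make the concept concrete, here is a toy sketch of what a WAF does at its core: inspect each request and block obviously malicious patterns before they reach the application. This is just an illustrative Python WSGI middleware with a made-up rule, not IronBee’s actual rule language:

```python
import re

# Toy rule: block query strings containing an obvious SQL injection or XSS probe.
BAD_PATTERNS = re.compile(r"union\s+select|<script", re.IGNORECASE)

def waf_middleware(app):
    """Wrap any WSGI application with a crude request filter."""
    def wrapped(environ, start_response):
        if BAD_PATTERNS.search(environ.get("QUERY_STRING", "")):
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Request blocked"]
        return app(environ, start_response)
    return wrapped
```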

The Business of Web Application Security

  • I can see Akamai using IronBee as part of the WAF solution offered to their customers. The flexibility of implementation may save them costs over their current WAF solutions.
  • Companies like Qualys could offer a cloud-based WAF like IronBee to help protect the customers that are already using their vulnerability scanning services.
  • Web Hosting providers like RackSpace or GoDaddy could more easily offer a WAF like IronBee as a default part of their service, or charge a slightly higher fee to protect your website with a WAF. This concept is already being used with HyperGuard on Amazon Web Services.

I’ll be keeping track of the IronBee project, and possibly offering help where I can.

PCI Penetration Testing

The Payment Card Industry Data Security Standard (PCI-DSS) requirement 11.3 requires that you perform annual external and internal penetration testing that also includes application testing. Below is a guide on how to handle your testing requirements.

*Disclaimer* Every company and situation is different when it comes to PCI. Always communicate with your QSA(s) when planning your testing to ensure that you are meeting all the needed requirements.

Scope –

First you have to decide how you are going to define the scope of what is going to be tested, including which networks and applications. This scoping process may also include deciding what type of testing will be done, such as black-box, white-box, goal-based, etc. The PCI-DSS leaves this scope determination up to your company.

  • Be completely transparent on the possible ingress and egress points to your PCI data.
  • The goal is to find out your weak points, not to stump the penetration testers.
  • If you are limited by cost on the number of IPs that can be penetration tested, ask whether the testers can start with the 3 sets of information below and come up with their own limited list of IPs to test, or do this yourself.

I’ll refer to these 3 sets of information below as the 3 core vulnerability information sources.

  1. Full vulnerability scan (unauthenticated or authenticated)
  2. Firewall rule exports
  3. Network diagrams

External Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Perform specific application testing on your sites that handle any type of PCI data.

Once your sites start coming up clean regularly, you should consider moving to authenticated testing if you aren’t already doing that type of testing.

Internal Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Hand over information on any specific solutions and dataflows that handle PCI data.
  • Use those firewall rule exports so you don’t forget to test the servers that have connectivity into your PCI data network if it is separated from your other networks. (A small sketch of this filtering follows below.)
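Here is the filtering sketch mentioned above. It assumes a firewall rule export already reduced to (source, destination) pairs and a known PCI subnet; both the rule format and the addresses are placeholders:

```python
import ipaddress

pci_net = ipaddress.ip_network("10.20.0.0/24")  # assumed PCI data network

# Firewall rule export reduced to (source, destination) pairs.
rules = [
    ("10.1.1.5", "10.20.0.12"),
    ("10.1.2.7", "10.30.0.9"),
    ("192.168.3.4", "10.20.0.40"),
]

# Any source allowed to reach the PCI network belongs on the test target list.
in_scope = sorted({src for src, dst in rules if ipaddress.ip_address(dst) in pci_net})
print(in_scope)
```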

Application Testing –

  • Focus on any applications that handle PCI data.
  • Do not limit testing to only PCI data handling applications. You should also do penetration testing on system management applications, software distribution applications, etc.
  • The best application penetration test is to do a source code review in coordination with functional testing.


Once you have all the results back from the testing, your real work begins.

  • Take a risk based approach to determine what needs remediation.
  • There is flexibility in your remediation under the PCI-DSS, so choose a logical and consistent manner of remediating issues.

As always, it would be good to get your approach approved by a QSA if possible.

Final Thoughts…

The goal of penetration testing is to simulate a malicious (or curious) hacker’s attempt to access sensitive data or systems. You should view this as an opportunity to improve security and not a threat.

Be transparent, use good testers, and try to change your testers every couple of years.  Penetration testing is often more of an art than a science, so different people will often find different holes.