Vulnerability Scanning in the Cloud – Part 1

This is the first in what may become a series of posts on vulnerability scanning in the cloud, along with some related challenges and helpful tips.

I started looking around for good sources or posts on the topic of vulnerability scanning in the cloud, in this case an Infrastructure as a Service (IaaS) scenario for private or public cloud. I didn’t find anything.

How it starts..

When you get the email or call that goes something like, “Hey, what are we scanning in our <vendor name here> cloud?” In a perfect world you just say, “When we worked with everybody to set up and plan our cloud usage, we had vulnerability scanning designs built in from the beginning. We are good.”

Or, you start sweating and realize nobody ever brought you into any planning or discussions, and there are already large cloud deployments that you aren’t scanning.

Or maybe you are a consultant or service provider going into an environment and setting up vulnerability scanning in a customer cloud. These posts should be helpful for people who are in the planning stages or trying to “catch up” with their cloud groups or customers.

Dynamic nature of the cloud

Most clouds give you the ability to dynamically provision systems and services, and as a result, you dynamically provision IP addresses. Sometimes these IP addresses come from a fixed range, and often, especially for Internet-facing systems, they come from a large pool of addresses shared with other customers.

In these large dynamic ranges, it is common for the IP address you used today to be used by another customer tomorrow.

This dynamic nature is great for operations, but it can make asset tracking a challenge.

Asset management is different

Traditional vulnerability management has been tightly tied to IP addresses and/or DNS names. In cloud scenarios, assets are often temporary, or may not have DNS names at all. Sometimes the DNS names for PaaS-type services are provisioned by the cloud provider, with little or no control from your IT group.

Most cloud providers have their own unique identifiers for assets, and these unique identifiers are what need to be used for asset tracking. IP addresses, and sometimes DNS names, are just transient metadata for your asset.

Also, cloud has different types of “objects” that can be given IP addresses beyond traditional compute system interfaces. Certain PaaS services can be provisioned that are dedicated to your tenancy/account and get their own IP addresses. Are these your assets? Many times you have some control over the content and data on these services even though you don’t manage most of the underlying solution.

In general, the whole approach to asset management in cloud is that your assets are tracked by the cloud provider, and you use their APIs to query and gather information on your assets.

Your vulnerability analysis and asset analysis need to become dynamic, based on the data returned from your asset queries. This is definitely not a bad thing. Most big companies struggle with solid asset management because there are always ways to circumvent traditional asset management. (This is why network-traffic-based asset management is becoming so popular.)

Now, with cloud, as long as you are using the API and know what tenancies you have, you can get a good list of assets. However, this list is short-lived, so you need to consistently query the APIs to keep it current. Some cloud providers can provide a “push” notification or “diffs” of what has come online or gone away in a given window of time. I think that is the future best practice of cloud asset management: real-time visibility into what is coming and going.
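As a concrete illustration, here is a minimal poll-and-diff sketch against the AWS EC2 inventory API (an assumption for illustration; it uses boto3 with credentials already configured, and other providers expose equivalent APIs). A push or event-driven feed from the provider would replace the polling loop:

```python
# Minimal poll-and-diff asset discovery sketch against AWS EC2 (boto3).
# Assumes credentials/region are already configured for the account.
import time
import boto3

def current_instance_ids(ec2):
    """Return the set of EC2 instance IDs visible in this account/region."""
    ids = set()
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                ids.add(instance["InstanceId"])
    return ids

ec2 = boto3.client("ec2")
known = current_instance_ids(ec2)

while True:
    time.sleep(300)  # poll every 5 minutes; tune to your environment
    latest = current_instance_ids(ec2)
    for new_id in latest - known:
        print(f"asset appeared: {new_id}")      # queue it for scanning
    for gone_id in known - latest:
        print(f"asset disappeared: {gone_id}")  # close out its findings
    known = latest
```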

 

Capacity is costly..

One major concept and value of cloud is only using and paying for capacity you need.

When it comes to information technology, this “costly capacity” in IaaS essentially comes down to:

  1. Network usage (sending data over the network)
  2. Storage usage (disk space, object space, etc.)
  3. Compute usage (CPU)

Classic vulnerability scanning can typically be performed in two different ways:

  1. Either scanning over the network from a scanning system, or
  2. By installing a local agent/daemon/service on the host that reports up the vulnerability data.

Both of these approaches use all three types of capacity mentioned above in your cloud, but mostly network and CPU.

Scanning over the network — Network Usage

Your cloud vendor’s software-defined networking can have huge capacity, or it can remind you of early-’90s home networking.

One of the major considerations for network-based scanning is determining where your bottlenecks will be.

  • Do you have virtual gateways or bandwidth caps?
  • Do you have packet rate caps?
  • Are you trying to scan across regions or networks that may be geographically dispersed, with high latency and/or low bandwidth?

Cloud networking doesn’t just “work”; in many cases it is far more sensitive than physical networking. You need to look carefully at the network topology of your cloud implementations and choose scanner placement based on your topology and bottleneck locations. Depending on your network security stack, you may even need or want to avoid scanning across those stacks.

Agents

Agent-based scanning is starting to be one of the preferred options in some cloud IaaS implementations, because you can simply have every host report up its vulnerability data when it comes online. This is a nice approach if you have good cooperation from your infrastructure groups to allow your agent to be deployed to all systems.

However, agents likely will not be able to go on every type of resource or service with an IP, such as third-party virtual appliances. You will still need network scanning to inspect some virtual systems or resource types, such as PaaS-deployed services.

Most agents also lack the ability to see services from the perspective of the “network,” which is often where the most risk resides. For example, they can’t talk to all the services and see the ciphers or configurations being exposed to network clients.
 
So, regardless of what you may have been told, there is no cloud or vendor-provided vulnerability scan agent that will give you full visibility into your cloud resources. You still need network scans.
 

Even though agents won’t solve all your problems, you probably won’t hit packet rate caps or throughput issues with them, since they mostly just push their data up in one stream on a regular schedule. So agents can help you avoid some of the network issues you might otherwise hit.

 
Here are some questions you need to consider for vulnerability scanning in the cloud…
 
  • How much CPU impact will there be from network scanning or agent scanning? The act of scanning will use some capacity.
 
  • Should you size your cloud capacity to allow for vulnerability management? (yes)
 
In summary, vulnerability management in the cloud is different.
 
Why?
 
  • Dynamic assets
  • API-driven asset management
  • Cloud has more “things” as a service than one solution can handle:
    • Container Services
    • PaaS
    • Functions/Serverless
    • SaaS/Services

How to handle vulnerability management in the cloud?

  • Take a look at all the services your cloud provider offers that you are planning to use.
  • Create an approach for each type of service and scenario that will be used.
  • Some cloud providers are starting to build in some amount of vulnerability management natively into their platforms. Leverage these native integrations as much as possible.

How To Understand a Vulnerability Scan Report – Part 2 – The Network Port


Part 2 of a multi-part series explaining vulnerability scan data and nuances of how that data should be used. Part 1 was about IP addresses.

 

  • Network Port
    • This is the network port number (1 through 65535) where the vulnerability scanner found a vulnerability.
    • Some vulnerability scan reports do not include the port number, even though it is a critical piece of information, as discussed below.
    • The teams that own the systems or applications with vulnerabilities will often be unfamiliar with network ports until they do some further research on their application or system.
    • In part 1 of this series it was discussed that a system can have more than one IP address. The level of complexity increases with ports because each IP address can have up to 65,535 TCP ports and 65,535 UDP ports.
    • It is very unusual for an IP address to have more than 100 or so ports open, so many vulnerability scanners will treat a system with a very large number of open ports as a firewall configured to respond on all ports as a defensive measure.

     

    What does a port number tell me?

  • A listening port tells you that some piece of software is accepting network socket connections on that specific port. The vulnerability is in the software using that port.
  • The port number should be your starting point for determining which service or application is listening for incoming socket connections; that service or application is typically what has the vulnerability.
  • There are many common ports used that are easy to identify.
  • Once you know what the program or service is, your next step is often to contact the person or team responsible for managing that service or application.
  • One nice thing most vulnerability scanners do is give you the text response the scanner received from the port when it initially fingerprinted it.
    • This text is valuable because it contains the header/banner the service returned, and it often has enough information for you to identify the service even if you had no previous information about that port (see the sketch just below).
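Here is a minimal banner-grab sketch using only the Python standard library; it is a rough illustration, not what a real scanner does (scanners fingerprint far more thoroughly):

```python
# Minimal banner grab: connect, optionally send a probe, read the reply.
import socket

def grab_banner(host, port, probe=b"", timeout=3.0):
    """Return the first text a service sends back on host:port."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        if probe:
            s.sendall(probe)
        try:
            return s.recv(1024).decode(errors="replace")
        except socket.timeout:
            return ""  # the port is open but the service stayed silent

# Services like SSH and SMTP announce themselves on connect;
# HTTP identifies itself once you send a request.
print(grab_banner("scanme.nmap.org", 22))                              # e.g. "SSH-2.0-..."
print(grab_banner("scanme.nmap.org", 80, b"HEAD / HTTP/1.0\r\n\r\n"))  # response headers
```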

     

    Okay, that’s nice, but if I see a webserver vulnerability, I already know to call the webserver folks.

  • It’s not quite that easy. Run a port scan (all ports) on a heavily used server and you might be surprised how many HTTP/HTTPS servers are running.
    • Even dedicated webservers will often have many different webserver instances running, each on a different port. Being able to tell the owning team the specific port that had the vulnerability finding is critical to determining the source of the problem.
    • If the vulnerability is application-related, knowing the port is likely how you will determine which application team needs to remediate the finding. The team that manages the webserver may know which application instance runs on which port and can direct you to the proper application team.

    Load Balancing can throw you off.

  • Network Load Balancers can take traffic directed at one port on an IP address, and redirect that traffic to different ports on different IP addresses.
  • This can obviously cause some confusion, since the vulnerability will appear to be on the port of the load balancer’s virtual IP address.
  • This is a common scenario when scanning servers from outside a DMZ, from the Internet, or in a hosting or cloud environment.
  • It is critical to have the network load balancer configuration so you can trace which IP addresses and ports are actually responding with vulnerabilities. Without this information you are stuck at the virtual IP address, unable to go any further to find the true IP and port that has the vulnerability.

 

Why You Must Prioritize IT Vulnerability Risks

A common sense explanation.

  • Why should you prioritize the risks in your IT network?

  • Why can’t you just fix ALL the problems?

Unless you work in a company that has unlimited resources and absolute support at all levels for remediating vulnerabilities, you MUST prioritize the issues that pose the most risk to your IT environment.

 

Analogy.. “The To-Do List”

Say your wife gives you a list of 150 things to get done on a Saturday afternoon.. How many can you realistically get done? Maybe 5? Maybe 10 if the tasks are small.

If you have a large network, you likely have many possible vulnerabilities. Say you have a relatively small list of 300 security issues found from vulnerability scans and other security assessments and tests.. Can you realistically expect all the teams that would own fixing those issues to drop everything they are doing and fix the “list” of issues you give them?

How much security remediation work can you really expect to accomplish? The answer to these types of questions depends more on how your organization functions than on any calculations or math. Every IT shop is fighting for resources to:

1) Implement customer projects.

2) Upgrade and/or modernize their own infrastructure.

3) Implement their own strategic initiatives.

4) Have a work/life balance.

 

Where does that leave working on tasks to fix issues that have been found through security testing?

The naive answer is to say that security should always be a top priority and the teams should figure out a way to get the work done. For those that work in the real world it simply is not that easy.

Resources such as budget, hardware, and time are limited. Some IT shops are fighting to survive. If they have to stop business-driven projects for 3 months to fix security issues, their business customers may choose other options.

What is the answer?

The answer is to use risk analysis and risk management techniques to determine which vulnerabilities pose the highest risk to your IT environment. This is called using a “Risk Based Approach.” Simply put, it means fixing the riskiest things first. You would think this is common sense, but you would be wrong. There is often a reflexive response to any type of possible security issue: “just fix it.” If there are 5 issues, just fix them. If there are 200 issues, just fix them.

The problem is that most decent-sized companies will have many possible issues. You simply cannot have a completely secure environment without making the environment unusable. I go back to the example of having a list of 150 tasks to complete in one day. It simply isn’t possible. However, could you get 5 done? Probably so. Could you make a small amount of progress on 20 tasks? Probably so.

So which one is better? Getting 5 security issues completely resolved or 20 issues partially completed in a year? That needs to be a management decision based on good risk analysis of the issues.

Fixing security issues is an effort like any other.

The whole point of this post is to get you to understand that resolving security issues is no different from any other project or effort. No company or organization can implement every good idea. They must prioritize in order to get the best results from their efforts.

Resolving security issues is a work effort just like any other in an IT organization. The effort must be prioritized against all other efforts so that they can get the proper focus and funding. If you don’t have focus on a few things, then you get very little accomplished, and your efforts are spread thin.

Final Analogy… Pruning…

Every organization is like a rose bush or a grape vine. For the main stems and fruit to truly mature and reach their full potential, you must prune the small branches and vines that use up the plant’s resources without adding any fruit or flowers. Left alone, those small branches consume energy and resources, and eventually the plant becomes a poor producer of fruit or flowers. Why? Because no focus was devoted to the things that mattered.

Final Point : To get things done, you must prioritize and be able to focus your energy and effort on what matters most.

NorthWest Arkansas ISSA Presentation

I’m giving a high-level presentation on the PCI-DSS requirements around vulnerability management and penetration testing at our April 5 ISSA meeting.

Most of the details will be in Q&A and discussion, so don’t expect a lot of deep content in the PowerPoint slides linked below.

PCI_Vuln_Pen_ISSA_March_2011_ppt

The meetings are typically held on the first Tuesday of each month at Whole Hog Cafe, a great Memphis-style barbecue restaurant in Bentonville, Arkansas.

Unauthenticated vs Authenticated Vulnerability Scans and Testing

What is the difference between “Authenticated” and  “Unauthenticated” Scanning or Testing?

 

Unauthenticated =  No usernames and passwords are used in the scanning or testing.

  • This means if your website allows users to create a shopping cart tied to a user, the testing will not attempt to use a username and password to replicate a user’s usage of that shopping cart.
  • This type of testing is typically less intense because it can only find basic configuration issues or input/output validation errors that don’t involve the code paths handling user transactions, like shopping carts.
  • Unauthenticated scanning and testing may try username and password combinations to log on to your system, but it typically only checks whether a credential is valid and will not use it to log in and perform further testing.

Authenticated = The scanning or testing is able to use usernames and passwords to simulate a user being on that system or website.

  • Authenticated testing can be much more intense and can potentially impact your website or system.
  • Authenticated testing will usually find more vulnerabilities than unauthenticated testing when the scanner is given credentials to a system. This is simply because the scanner can get “inside” the system and validate issues, instead of relying on the guesses a scanner or tester must make without authentication.
  • Authenticated testing has much better code coverage on applications since it can simulate much more of the user based functionality like transactions.
  • Some authenticated and unauthenticated scans can simulate “brute-force” style attacks, which could cause account lockouts depending on your system configurations.

Why should I care?

  • Authenticated testing is much more thorough and is often able to find more issues than unauthenticated. However, it is also more likely to cause issues on a system or application.
  • Since authenticated testing will often find more, you will spend more time parsing through data and trying to determine which findings are higher risk.
  • Finally, unauthenticated testing alone will not simulate targeted or successful attacks on your application or system, and is therefore unable to find a wide range of possible issues.

Ask yourself these questions to decide what kind of testing or scanning you need.

  • What is the purpose of the scan or test? (A specific compliance requirement?)
  • Do my scanning or testing requirements give preference to authenticated or unauthenticated testing?
  • Do I want to simulate what a user on the system could do? (Go with Authenticated)
  • Do I want to start at the highest risk findings that any scanner or user on my network could find? (Go with unauthenticated)
  • Is this the first time the system or network has ever been scanned or tested? (Go with unauthenticated unless you have other requirements.)

So what should my approach be?

Using a risk-based approach, you could start with unauthenticated scanning and testing, because it will typically find the highest-risk and most significant issues. Once you have the unauthenticated findings, you can gradually move into authenticated testing as you gain confidence that it will not impact systems.

Note: In large environments, be wary of old printers and devices that may have old network stacks. Legacy network appliances and devices like old network printers are typically the only places where scanning itself causes issues.

PCI Penetration Testing

The Payment Card Industry Data Security Standard requirement 11.3 requires that you perform annual external and internal penetration testing, including application testing. Below is a guide on how to handle your testing requirements.

*Disclaimer* Every company and situation is different when it comes to PCI. Always communicate with QSA(s) when planning your testing to ensure that you are meeting all the needed requirements.

Scope –

First you have to decide how to define the scope of what is going to be tested. This includes which networks and applications are to be tested. The scoping process may also include deciding what type of testing will be done, such as black-box, white-box, goal-based, etc. The PCI-DSS leaves this scope determination up to your company.

  • Be completely transparent on the possible ingress and egress points to your PCI data.
  • The goal is to find out your weak points, not to stump the penetration testers.
  • If cost limits the number of IPs that can be penetration tested, ask the testers to start with the three sets of information below and come up with their own limited list of IPs to test, or do this yourself.

I’ll refer to these 3 sets of information below as the 3 core vulnerability information sources.

  1. Full vulnerability scan (unauthenticated or authenticated)
  2. Firewall rule exports
  3. Network diagrams

External Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Perform specific application testing on your sites that handle any type of PCI data.

Once your sites start coming up clean regularly, you should consider moving to authenticated testing if you aren’t already doing that type of testing.

Internal Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Hand over information on any specific solutions and dataflows that handle PCI data.
  • Use those firewall rule exports so you don’t forget to test the servers that have connectivity into your PCI data network if it is separated from your other networks.

Applications

  • Focus on any applications that handle PCI data.
  • Do not limit testing to only PCI data handling applications. You should also penetration test system management applications, software distribution applications, etc.
  • The best application penetration test is to do a source code review in coordination with functional testing.

Remediation

Once you have all the results back from the testing, your real work begins.

  • Take a risk based approach to determine what needs remediation.
  • There is flexibility in your remediation under the PCI-DSS, so choose a logical and consistent manner of remediating issues.

As always, it would be good to get your approach approved by a QSA if possible.

Final Thoughts…

The goal of penetration testing is to simulate a malicious (or curious) hacker’s attempt to access sensitive data or systems. You should view this as an opportunity to improve security and not a threat.

Be transparent, use good testers, and try to change your testers every couple of years.  Penetration testing is often more of an art than a science, so different people will often find different holes.

Vulnerability Management – Continuous vs Batch

Reasons why a “Continuous Vulnerability Assessment and Remediation” process is better than a quarterly scan or “batch” process.

Continuous Security

“Continuous Compliance” is a fairly hot security topic. The word “continuous” has almost reached buzzword status in information security topics like vulnerability scanning and application security. So I decided to look around at some of the products and pages on the Internet that cover this topic. I found some explanations of what continuous compliance is, and products that supposedly do it. However, I didn’t find anything that explains in depth why continuous compliance is better for security vulnerability discovery and remediation.

I felt that I knew why it was better, but I always like to do some research to validate my assumptions. I hope you do also. Below I use “Continuous Vulnerability Management” as a synonym for “Continuous Compliance” and “Continuous Configuration Management.”

Where Is The Answer?

I didn’t find the explanations and answers I was looking for in vulnerability management topic pages or security focused blogs. I remembered from my undergrad and MBA operations management classes that continuous is typically better than batch, so I figured business and operations management topics would hold the hard answers I wanted to find.

BINGO!  I found the answers on Operations Management and Real-Time Business Intelligence type forums and websites.

I’ll summarize my findings below.

Benefits of Continuous vs Batch

  • Once you start a continuous process, you can run it constantly. There is no need for setup and tear-down times. This removes the wasted effort of regularly ramping discovery and remediation efforts up and spinning them down, just to start them over again.
  • Continuous allows you to use fewer resources to complete work, because the work is spread out as needed into smaller, more manageable chunks. Instead of a batch of findings being delivered every few months, findings are delivered or “pushed” immediately as they are found. Theoretically, this may allow vulnerability owners to simply “work in” remediation alongside other support tasks instead of having to tackle large lists of findings.
  • Continuous vulnerability management should be more scalable. Because it requires fewer resources at any one point, it is easier to get more overall work done: a large amount of work can be broken into manageable chunks, and resources can be added to remediation efforts more easily.
  • Continuous should cause less disruption to regular business operations and support. To support continuous vulnerability management, it must become part of your normal operations and support. I can tell you from experience that when you deliver a large list of findings once every few months, the vulnerability owners act surprised every time for the first few years, or at least hope you will forget about them.
  • Continuous vulnerability management should create less risk for business impacting issues due to configuration changes. This lowering of risk is due to vulnerability remediation work being completed in smaller chunks of work instead of trying to cram tens or hundreds of configuration changes or patches through in a short amount of time.  Experience shows this “cramming”  usually results from a large “batch” style list of vulnerabilities.
  • Similar to the reasoning above, continuous vulnerability management should have fewer errors than batch processing because remediation work is done in smaller chunks. The smaller work units should allow for more focus on each individual work unit, instead of the feeling of being rushed to get a large batch of work completed in a short amount of time.
  • Root cause determination should improve. In continuous vulnerability management, there is a much shorter amount of time between when a vulnerability is created and when it is found. It is simply easier to remember (or track down) what someone did yesterday than what was done weeks or months ago.

Bottom Line? Compliance & Security Improves

  • All of the above reasons contribute to a cumulative improvement in the efficiency of a continuous process over a batch process for vulnerability scanning and remediation. The cumulative negative effect of the individual latencies in a batch-style or “90-day cycle” method makes that method less efficient, less secure, and more disruptive to an organization than continuous vulnerability management.

So why doesn’t everyone do Continuous Vulnerability Management instead of a batch or cycle-style method?

There are some requirements you must meet in order to successfully implement a continuous vulnerability management program. These requirements are not usually cheap or easy.

  • Requires Constant or “Continuous” monitoring

Sound easy? It isn’t for large organizations that may have millions of possible IP addresses. You must spend a lot of time and effort getting systems set up that can continuously monitor your entire vulnerability footprint. The larger your organization, the harder this requirement is to implement.

  • Requires near real-time status updates

The ability to scan for vulnerabilities continuously, and to report on what has changed from one day to the next, is becoming much more feasible. Some modern vulnerability management systems will keep track of what has changed for you. This “continuous” updating is needed to truly understand when vulnerabilities appear and when they go away. Without this seemingly simple capability, you must stay in batch mode.
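To make the idea concrete, here is a minimal sketch of deriving “new” and “resolved” findings by diffing two scan result sets. The (host, port, check ID) key is an assumption; use whatever uniquely identifies a finding in your scanner:

```python
# Diff two scan snapshots into new / resolved / still-open findings.
def diff_findings(previous, current):
    """Each argument is a set of (host, port, check_id) tuples."""
    return {
        "new": current - previous,
        "resolved": previous - current,
        "still_open": current & previous,
    }

yesterday = {("10.0.0.5", 443, "weak-tls"), ("10.0.0.9", 22, "old-openssh")}
today = {("10.0.0.5", 443, "weak-tls"), ("10.0.0.12", 80, "dir-listing")}

for status, findings in diff_findings(yesterday, today).items():
    print(status, sorted(findings))
```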

You also need the ability to throw out the vulnerability findings you don’t care about, and figure out which ones you do in a very streamlined or automated fashion. Otherwise you get overloaded with useless data, and the important findings get less focus.

A side-bar to this requirement is that you need reporting and analysis on your data.

Constant remediation can hide pervasive issues unless adequate analysis and trending is performed on discovered vulnerabilities.

Continuous is also all about continuous improvement. Without the ability to analyze and trend data, you are not leveraging continuous compliance/vulnerability management methods, and you probably are not getting the visibility or management support you need.

  • Requires self-service and automation for mid-size or large organizations

This gets even harder. You must automate. Taking raw vulnerability findings from scanning systems and software and turning those into actionable items is very difficult.

If you have hundreds of thousands of systems and hundreds of possible owner teams, how do you easily or quickly know who owns each vulnerability? If you have a CMDB with all of this mapped out, or only have a few thousand systems, then great; that is a good start. Manually analyzing findings in spreadsheets and then sending those spreadsheets out to owners is not continuous; it is a batch-style scenario. The only way to move past that batch-style scenario is to apply logic and rules to your vulnerability data so that identifying the owner of an item, and deciding whether action is needed, is automated. Some type of ticketing system that flows the work down to the owners is typically needed.
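As a hedged sketch of what that logic-and-rules layer can look like (the rule format, field names, and team names here are hypothetical placeholders, and a real system would feed the result into your ticketing tool):

```python
# Rule-driven ownership routing: first matching rule wins.
OWNER_RULES = [
    # (predicate over a finding, owning team)
    (lambda f: f["port"] in (80, 443, 8080), "web-platform"),
    (lambda f: f["check_id"].startswith("mssql"), "database-team"),
    (lambda f: f["host"].startswith("10.20."), "store-infrastructure"),
]

def route_finding(finding, default_owner="security-triage"):
    """Return the first matching owner, else a default triage queue."""
    for predicate, owner in OWNER_RULES:
        if predicate(finding):
            return owner
    return default_owner

finding = {"host": "10.20.4.7", "port": 443, "check_id": "weak-tls"}
print(f"open ticket for {route_finding(finding)}: "
      f"{finding['check_id']} on {finding['host']}:{finding['port']}")
```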

Organizations with immature vulnerability management and security practices will probably need to remain in a batch style method of vulnerability management until they can meet the above requirements.

Downsides to Continuous vs Batch?

  • Some types of changes or implementations can be done more efficiently in batches.
  • Some organizations may perform better using a batch style method of vulnerability management.

Some of the other topics that seemed to back up the continuous approach..

PCI-DSS: Vulnerability Duration & Scan Frequency – Not Quarterly Scans.

Why vulnerability duration is the key metric in vulnerability management for high-risk findings.

Some compliance standards (PCI-DSS) require quarterly scans of your external and internal networks that are “in scope” for your specific compliance related systems or networks.

Why Quarterly Scans?

Requiring quarterly vulnerability scans and remediation is an easy way to set a minimum standard for scanning and remediation of vulnerabilities. Quarterly scans should be considered a “low standard”, where continuous compliance and continuous vulnerability scanning are the widely accepted goal that companies should be working toward.

Why Not Quarterly Scans?

There are some fairly basic timing issues with requiring quarterly vulnerability scans and remediation that put an unreasonable burden on large companies while not truly improving security over other methods.

Scenario:

So the requirement is that you scan AND remediate vulnerabilities within a 90-day window. What if it takes two weeks to run a scan on your several million IP addresses, and maybe another week or so to run application scanning on 30 or 40 web applications and put together your list of findings? You are now easily 30 days into your 90-day window.

Now you get the scan results, process the thousands of findings (because vulnerability scanners tell you everything from which ports are open to which patches are missing) to determine what truly needs to be worked on, and determine who needs to fix which findings. You are now easily 40 days into your 90-day window.

Next you communicate your findings to the folks who can actually resolve them and try to determine who is going to work with you and what they can plan to do (because they always act like the findings are a surprise, even though they have been getting them every 90 days for the past 3 years). This easily puts you 50 days into your 90-day window.

So no problem. We now have 40 days left out of 90 to:

1) change requirements for release schedules,

2) make code changes and go through Q/A processes,

3) change priorities for entire infrastructure teams,

4) ask various application teams to go through testing and Q/A for webserver configuration changes,

5) go through all the normal change control documentation and bureaucracy associated with any decent-sized IT shop’s change control process, and

6) validate that all of the vulnerabilities have been resolved.

Wait! That 40 days (not business days; only about 28 business days, or around 5-6 weeks at best) doesn’t seem very long anymore. The more mature an IT company you are, with more rigorous testing, Q/A, and prioritization requirements around your business, the harder it is to stop on a dime and change priorities.

What is the alternative???

I feel the true intent of requiring a 90-day window for vulnerability scans and remediation is to ensure that you are regularly looking for and resolving security vulnerabilities. I propose that measuring “Vulnerability Duration” is more important than requiring quarterly scans. I explain why below.

Vulnerability Duration?

Vulnerability Duration can be defined as the amount of time between when a vulnerability is found and when it is resolved.
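The metric itself is trivial to compute. A minimal sketch (the field names and day granularity are assumptions), where an open finding simply counts up to “now”:

```python
# Vulnerability duration: time resolved minus time found.
from datetime import datetime, timezone

def vulnerability_duration_days(found_at, resolved_at=None):
    """Days a finding has existed; open findings count up to now."""
    end = resolved_at or datetime.now(timezone.utc)
    return (end - found_at).days

found = datetime(2011, 1, 3, tzinfo=timezone.utc)
fixed = datetime(2011, 2, 14, tzinfo=timezone.utc)
print(vulnerability_duration_days(found, fixed))  # 42
```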

What is the difference between Vulnerability Duration and Quarterly scans?

The important difference is that requiring quarterly scans assumes you only need a short window of time to scan and report vulnerabilities. For large companies this simply isn’t possible yet. Requiring quarterly scans and remediation typically leaves a large company a month or less to actually remediate, because so much of the 90-day window is spent getting “workable” findings and running validation scans on the vulnerabilities.

This means that requiring quarterly scans only gives large companies about 30 days or so to react to vulnerabilities. Is this the true intent of the quarterly scan requirement???

The quarterly scan requirement also allows vulnerabilities to exist in environments for over 90 days. If a vulnerability is created shortly after a scan, it can exist undetected for a full 90 days (or more) until the next quarter’s scan is run.

What is the Point I really want to make??

Vulnerability Duration focuses on the true amount of time that a vulnerability exists in an environment, rather than on an arbitrary 90-day window for performing specific vulnerability management actions. It focuses on the important stuff: how long a vulnerability exists, and keeping that duration small.

Measuring “Vulnerability Duration” requires the “Continuous Compliance” mindset. If I can scan for vulnerabilities every two weeks, or as often as possible, I can distribute scan findings much more often than every 90 days. This should create a much smaller window of time in which vulnerabilities can exist in my environment, because vulnerabilities are…

1) being found much more quickly, and

2) being resolved much closer to the time they are found.

What does it take to pull this off?

For a focus on Vulnerability Duration to be effective, you must be scanning for vulnerabilities as often as possible. This means that as soon as one scan ends, the next begins. This constant scanning and reporting creates the loop that keeps the window of time in which vulnerabilities exist much shorter.
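Structurally, this is nothing more than a loop with no quarter-long idle gap in it. A minimal sketch, where run_scan() and publish_findings() are hypothetical stand-ins for your actual scanner integration:

```python
import time

def run_scan(targets):
    """Hypothetical stand-in: start a scan and block until results return."""
    return []  # replace with a real scanner integration

def publish_findings(findings):
    """Hypothetical stand-in: push findings straight to owner queues/tickets."""

def continuous_scan(targets, pause_seconds=60):
    """As soon as one scan ends, publish its findings and start the next."""
    while True:
        findings = run_scan(targets)  # one scan ends...
        publish_findings(findings)    # ...findings go out immediately...
        time.sleep(pause_seconds)     # ...and the next scan begins
```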

Analogy – Leaky Boat

I have a leaky boat. The general estimate is that the boat can stay afloat for a week with a leak, so I only check for leaks once a week. Some weeks, no leaks at all. Some weeks, there is a lot of water in the engine room. On a really bad week, there are several leaks and the boat sinks. If I check for leaks and fix them every day, there is a much shorter amount of time for any leak to flood the boat and cause damage. Although not perfect, I think this analogy makes sense.

Why post this at all?

My whole point is that PCI assessors, PCI QSAs, acquiring banks, and the PCI Council should consider that quarterly scans are a bare minimum. Requiring the outdated mindset of creating quarterly reports can hamper companies’ ability to move forward with a “Continuous Compliance” mindset, where they are scanning and remediating all the time and measuring vulnerability duration instead of focusing on the quarterly scan-and-report requirement.

Alternative to the PCI Quarterly Scan requirement?

Allow an alternative reporting method instead of quarterly scans only: something like having a PCI ASV attest that scans are taking place more often than every 90 days and that the vulnerability duration of any PCI-impacting findings does not exceed 90 days. Anything open more than 90 days would require some type of documentation.

Vulnerability Scanning For Network Appliances

Are you shipping network appliances that haven’t been scanned for vulnerabilities?

I’m responsible for getting security vulnerabilities corrected or “remediated” at work. Keep in mind this is no small job since our network is probably one of the largest in the world.

I continue to be surprised by network equipment manufacturers that are completely clueless about vulnerability management and the vulnerability footprint of their devices. These devices are often shipped full of security holes from the factory.

Below I will list some very simple steps that every network appliance manufacturer can do to reduce their customer’s security headaches.

  1. Always run a vulnerability scanner against your device or appliance before you “finalize” the revision for testing. Fix the security holes, then start testing.
  2. Ship your “default config” without unneeded services that expose or open up security holes. This is also known as “secure by default.” Instead of having everything the customer could possibly need already up and running, give them an easy way to turn on what they need.
  3. If your default shipping config exposes something that vulnerability scanners flag as a vulnerability, or even an informational exposure, document it. This will save your customers’ security folks work and make your company actually seem professional.
  4. Realize that the security of your appliance is your responsibility as the appliance manufacturer. Be proactive.

It is only a matter of time before some major breach occurs via some “appliance” that was shipped full of security holes from the manufacturer. How will your company reputation be damaged from the fallout?

Scans Versus Penetration Tests

What is the difference between scanning and penetration testing?

Those of us responsible for managing vulnerability scanning and penetration testing often seem to get the same question over and over: what is the difference between a vulnerability scan and a penetration test?

You would think this is not a difficult topic to grasp, but some folks really do struggle to remember the difference. I’ll lay it out here in the simplest way I know how:

  • Scan = Look for holes and issues on a network or website, usually with some type of scanning tool.
  • Penetration Test = Exploit the holes you have found on a network and see how far you can get. A penetration test often starts with a scan, but is not limited to just the scanning.

Some good scanning tools are:

  • McAfee Vulnerability Manager (used to be called Foundstone)
  • QualysGuard
  • Nessus

Many companies offer penetration testing services.  I’ve only had experience with a few, so my only advice is to make sure your contracts are well written and that you are careful when working with a small company.