PCI Penetration Testing

Requirement 11.3 of the Payment Card Industry Data Security Standard (PCI-DSS) requires that you perform annual external and internal penetration testing that also includes application testing. Below is a guide on how to handle your testing requirements.

*Disclaimer* Every company and situation is different when it comes to PCI. Always communicate with QSA(s) when planning your testing to ensure that you are meeting all the needed requirements.

Scope –

First you have to decide how you are going to define the scope of what is going to be tested. This includes which networks and applications are to be tested. The scoping process may also include deciding what type of testing will be done, such as black-box, white-box, goal-based, etc. The PCI-DSS leaves this scope determination up to your company.

  • Be completely transparent on the possible ingress and egress points to your PCI data.
  • The goal is to find out your weak points, not to stump the penetration testers.
  • If you are limited by cost on the number of IPs that can be penetration tested, ask the testers to start with the 3 sets of information below and come up with their own limited list of IPs to test, or do this yourself (a rough sketch of that approach follows the list).

I’ll refer to these 3 sets of information below as the 3 core vulnerability information sources.

  1. Full vulnerability scan (unauthenticated or authenticated)
  2. Firewall rule exports
  3. Network diagrams
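As one illustration of turning those 3 sources into a short target list, here is a minimal sketch. The file names, column names (scan_export.csv, ip, severity, cde_permitted_ips.txt), and the weighting are assumptions for the example, not anything mandated by the PCI-DSS; adjust them to whatever your scanner and firewall tooling actually export.

```python
# Minimal sketch: build a short, prioritized list of IPs to hand to the
# penetration testers from the 3 core sources. File and column names are
# placeholders -- adjust to your own scanner and firewall exports.
import csv
from collections import Counter

severity_weight = {"critical": 5, "high": 3, "medium": 1}

# 1. Full vulnerability scan export (one row per finding)
score = Counter()
with open("scan_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        score[row["ip"]] += severity_weight.get(row["severity"].lower(), 0)

# 2. IPs that firewall rules allow to reach the cardholder data environment
with open("cde_permitted_ips.txt") as f:
    cde_reachable = {line.strip() for line in f if line.strip()}

# Boost anything with a path to PCI data; the weighting here is arbitrary
for ip in cde_reachable:
    score[ip] += 10

# 3. Network diagrams are reviewed by hand -- print the top candidates
for ip, s in score.most_common(25):
    print(f"{ip}\t{s}")
```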

External Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Perform specific application testing on your sites that handle any type of PCI data.

Once your sites start coming up clean regularly, you should consider moving to authenticated testing if you aren’t already doing that type of testing.

Internal Network –

  • Start with the 3 core vulnerability information sources to determine your testing targets.
  • Hand over information on any specific solutions and dataflows that handle PCI data.
  • Use those firewall rule exports so you don’t forget to test the servers that have connectivity into your PCI data network, if it is separated from your other networks (a parsing sketch follows below).
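A minimal sketch of that firewall-export review, assuming the rules can be exported to CSV with source, destination, and action columns and that the PCI network lives in 10.10.0.0/16; both are assumptions, so substitute your own export format and CIDR.

```python
# Minimal sketch: list the source hosts your firewall permits into the PCI
# network so they make it onto the internal testing target list.
import csv
import ipaddress

PCI_NET = ipaddress.ip_network("10.10.0.0/16")  # placeholder CIDR

def targets_pci(dest: str) -> bool:
    try:
        # strict=False lets a single IP parse as a /32 network
        return ipaddress.ip_network(dest, strict=False).overlaps(PCI_NET)
    except ValueError:
        return False  # named objects/"any" need manual review

sources_into_pci = set()
with open("firewall_rules.csv", newline="") as f:
    for rule in csv.DictReader(f):
        if rule["action"].lower() == "permit" and targets_pci(rule["destination"]):
            sources_into_pci.add(rule["source"])

print("\n".join(sorted(sources_into_pci)))
```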

Applications

  • Focus on any applications that handle PCI data.
  • Do not limit testing to only PCI data handling applications. You should also do penetration testing on system management applications, software distribution applications, etc.
  • The best application penetration test is to do a source code review in coordination with functional testing.

Remediation

Once you have all the results back from the testing, your real work begins.

  • Take a risk-based approach to determine what needs remediation (a minimal prioritization sketch follows this list).
  • There is flexibility in your remediation under the PCI-DSS, so choose a logical and consistent manner of remediating issues.
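As an illustration only, here is a minimal sketch of one consistent way to order remediation by risk. The weighting (CVSS multiplied by asset criticality, with a bump for internet exposure and CDE connectivity) is an assumption for the example, not a PCI-DSS formula.

```python
# Minimal sketch of a consistent, risk-based ordering for remediation work.
# The weighting is illustrative, not mandated by any standard.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    title: str
    cvss: float             # 0.0 - 10.0
    asset_criticality: int  # 1 (lab box) - 5 (handles PCI data)
    internet_facing: bool
    reaches_cde: bool

def risk_score(f: Finding) -> float:
    score = f.cvss * f.asset_criticality
    if f.internet_facing:
        score *= 1.5
    if f.reaches_cde:
        score *= 1.5
    return score

findings = [
    Finding("web01", "Outdated TLS configuration", 7.4, 5, True, True),
    Finding("build02", "Default SNMP community", 5.0, 2, False, False),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{risk_score(f):6.1f}  {f.host}  {f.title}")
```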

As always, it would be good to get your approach approved by a QSA if possible.

Final Thoughts…

The goal of penetration testing is to simulate a malicious (or curious) hacker’s attempt to access sensitive data or systems. You should view this as an opportunity to improve security and not a threat.

Be transparent, use good testers, and try to change your testers every couple of years.  Penetration testing is often more of an art than a science, so different people will often find different holes.

Vulnerability Management – Continuous vs Batch

Reasons why a “Continuous Vulnerability Assessment and Remediation” process is better than a quarterly scan or “batch” process.

Continuous Security

“Continuous Compliance” is a fairly hot security topic. The word “continuous” has almost reached buzzword status when it comes to information security topics like vulnerability scanning and application security. So I decided to look around at some of the products and pages on the Internet that cover this topic. I found some explanations of what continuous compliance is and products that supposedly do it. However, I didn’t find anything that went in depth to explain why continuous compliance is better when it comes to security vulnerability discovery and remediation.

I felt that I knew why it was better, but I always like to do some research to validate my assumptions. I hope you do also. Below I use “Continuous Vulnerability Management” as a synonym for “Continuous Compliance” and “Continuous Configuration Management.”

Where Is The Answer?

I didn’t find the explanations and answers I was looking for in vulnerability management topic pages or security focused blogs. I remembered from my undergrad and MBA operations management classes that continuous is typically better than batch, so I figured business and operations management topics would hold the hard answers I wanted to find.

BINGO!  I found the answers on Operations Management and Real-Time Business Intelligence type forums and websites.

I’ll summarize my findings below.

Benefits of Continuous vs Batch

  • Once you start a continuous process, you can run it constantly. There is no need for setup and tear-down times. This removes the wasted effort of regularly ramping up discovery and remediation efforts and spinning those efforts down, only to start them over again.
  • Continuous allows you to use fewer resources to complete work because the work is spread out as needed into smaller, more manageable chunks. Instead of delivering a bunch of findings every few months, the findings are delivered or “pushed” immediately as they are found. Theoretically, this may allow vulnerability owners to simply work remediation in alongside other support tasks instead of having to focus on large lists of findings.
  • Continuous vulnerability management should be more scalable. Because continuous vulnerability management requires fewer overall resources at any one point, it should be easier to get more overall work done by breaking a large amount of work into manageable chunks or by more easily adding resources to remediation efforts.
  • Continuous should cause less disruption to regular business operations and support. In order to support continuous vulnerability management, it must become a part of your normal operations and support. I can tell you from experience that when you deliver a large list of findings once every few months, the vulnerability owners act surprised each time for the first few years, or at least hope you will forget about them.
  • Continuous vulnerability management should create less risk of business-impacting issues due to configuration changes. This lowering of risk is due to vulnerability remediation work being completed in smaller chunks instead of trying to cram tens or hundreds of configuration changes or patches through in a short amount of time. Experience shows this “cramming” usually results from a large “batch” style list of vulnerabilities.
  • Similar to the reasoning above, continuous vulnerability management should have fewer errors than batch processing because remediation work is done in smaller chunks. The smaller work units should allow for more focus on each individual work unit, instead of the feeling of being rushed to get a large batch of work completed in a short amount of time.
  • Root cause determination should improve. In continuous vulnerability management there is a much shorter amount of time between when a vulnerability is created and when it is found. It is simply easier to remember (or track down) what someone did yesterday than what was done weeks or months ago.

Bottom Line? Compliance & Security Improves

  • All of the above reasons contribute to a cumulative improvement in the efficiency of a continuous process over a batch process when it comes to vulnerability scanning and remediation. The cumulative negative effect of the individual latencies in a batch style or “90 day cycle” method of vulnerability management causes that method to be less efficient, less secure, and more disruptive to an organization than continuous vulnerability management.
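A back-of-envelope illustration of that latency argument, with made-up numbers: a vulnerability introduced at a random point in the scan cycle waits, on average, half the cycle length just to be discovered, and batch remediation adds its own delay on top.

```python
# Illustrative only: average exposure window under two scan/remediation cadences.
def avg_exposure_days(scan_cycle_days: float, remediation_days: float) -> float:
    # half the scan cycle to be discovered, plus the remediation delay
    return scan_cycle_days / 2 + remediation_days

print("Quarterly scan, 30-day batch remediation:",
      avg_exposure_days(90, 30), "days")   # 75.0
print("Daily scan, fix worked into normal ops:",
      avg_exposure_days(1, 7), "days")     # 7.5
```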

So why doesn’t everyone do Continuous Vulnerability Management instead of a Batch or Cycle Style Method?

There are some requirements you must meet in order to successfully implement a continuous vulnerability management program. These requirements are not usually cheap or easy.

  • Requires Constant or “Continuous” monitoring

Sound easy? It isn’t for large organizations that may have millions of possible IP addresses. You must spend a lot of time and effort to get systems set up that can continuously monitor your entire vulnerability footprint. The larger your organization, the harder this requirement will be to implement.
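One way to approach this, sketched below with placeholder values, is to carve the address space into rolling daily slices so every subnet is revisited on a fixed cadence instead of in one giant quarterly sweep. The /16 and the 30-day cadence are assumptions for the example.

```python
# Minimal sketch: rotate through the address space so each /24 gets queued for
# scanning once per cadence, spreading the load out across every day.
import ipaddress
from datetime import date

CADENCE_DAYS = 30  # placeholder cadence
subnets = list(ipaddress.ip_network("10.0.0.0/16").subnets(new_prefix=24))  # 256 x /24

def todays_slice(today: date = date.today()):
    bucket = today.toordinal() % CADENCE_DAYS
    return [net for i, net in enumerate(subnets) if i % CADENCE_DAYS == bucket]

for net in todays_slice():
    print("queue scan for", net)
```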

  • Requires near real-time status updates

The ability to scan for vulnerabilities continuously and the ability to report on what has changed from one day to the next is getting much more feasible to accomplish. Some modern vulnerability management systems will keep track of what has changed for you. This “continuous” updating is needed to truly understand when vulnerabilities are found, and when they go away. Without this seemingly simple capability, you must stay in batch mode.

You also need the ability to throw out the vulnerability findings you don’t care about, and figure out which ones you do care about, in a very streamlined or automated fashion. Otherwise you get overloaded with useless data, and the important findings get less focus.
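A minimal sketch of that day-over-day diff and filtering, assuming findings are keyed by (ip, check id) and that informational and low findings are the ones you throw out; use whatever uniquely identifies a finding in your own scanner's export.

```python
# Minimal sketch: compare yesterday's findings to today's, drop the noise,
# and report what is new and what went away. File and column names are
# placeholders for your scanner's actual export.
import csv

def load(path):
    with open(path, newline="") as f:
        return {(r["ip"], r["check_id"]): r for r in csv.DictReader(f)
                if r["severity"].lower() not in ("info", "low")}

yesterday = load("scan_2024-06-01.csv")
today = load("scan_2024-06-02.csv")

new_findings = today.keys() - yesterday.keys()
resolved = yesterday.keys() - today.keys()

print(f"{len(new_findings)} new, {len(resolved)} resolved, "
      f"{len(today.keys() & yesterday.keys())} still open")
```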

A side-bar to this requirement is that you need reporting and analysis on your data.

Constant remediation can hide pervasive issues unless adequate analysis and trending is performed on discovered vulnerabilities.

Continuous is also all about continuous improvement. Without the ability to analyze and trend data, you are not leveraging continuous compliance/vulnerability management methods, and you probably are not getting the visibility or management support you need.
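A minimal trending sketch along those lines, assuming a running log of new findings with found_date and category columns (both are placeholders for whatever your own data carries), so a pervasive root cause stands out instead of being quietly fixed one host at a time.

```python
# Minimal sketch: bucket new findings by month and category to spot trends.
import csv
from collections import Counter

trend = Counter()
with open("new_findings_log.csv", newline="") as f:
    for row in csv.DictReader(f):
        month = row["found_date"][:7]          # "YYYY-MM"
        trend[(month, row["category"])] += 1

for (month, category), count in sorted(trend.items()):
    print(f"{month}  {category:<30} {count}")
```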

  • Self-Service and Automation Required for Mid-size or Large Organizations

This gets even harder. You must automate. Taking raw vulnerability findings from scanning systems and software and turning those into actionable items is very difficult.

If you have hundreds of thousands of systems and hundreds of possible owner teams, how do you easily or quickly know who owns each vulnerability? If you have a CMDB with all of this mapped out, or you only have a few thousand systems, then great, that is a good start. Manually analyzing findings in spreadsheets and then sending those spreadsheets out to the owners is not continuous; it is a batch style scenario. The only way to move away from that batch style scenario is to apply logic and rules to your vulnerability data so that identifying the owner for an item and deciding whether action is needed are automated. Some type of ticketing system that flows the work down to the owners is typically needed (a rough sketch follows).
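A minimal sketch of that "logic and rules" step, with a placeholder subnet-to-team table standing in for a CMDB lookup and a stubbed create_ticket() standing in for your ticketing system's API.

```python
# Minimal sketch: map each finding to an owning team and hand it to a
# ticketing system. The subnet-to-team table and create_ticket() are stubs.
import ipaddress

SUBNET_OWNERS = {
    ipaddress.ip_network("10.10.0.0/16"): "payments-ops",
    ipaddress.ip_network("10.20.0.0/16"): "web-platform",
}

def owner_for(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for net, team in SUBNET_OWNERS.items():
        if addr in net:
            return team
    return "unassigned-triage"   # route to a human when no rule matches

def create_ticket(team: str, summary: str) -> None:
    # Stub: replace with your ticketing system's API call.
    print(f"[{team}] {summary}")

finding = {"ip": "10.10.4.21", "title": "OpenSSH update available"}
create_ticket(owner_for(finding["ip"]), f"{finding['ip']}: {finding['title']}")
```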

Organizations with immature vulnerability management and security practices will probably need to remain in a batch style method of vulnerability management until they can meet the above requirements.

Downsides to Continuous vs Batch?

  • Some types of changes or implementations can be done more efficiently in batches.
  • Some organizations may perform better using a batch style method of vulnerability management.

Some of the other topics that seemed to back up the continuous approach..

Your Internet Presence and Vulnerability Mgmt

If you get put in charge of vulnerability management for a large organization with many internet facing websites, you may run into some roadblocks around:

1) Determining who owns which websites.

2) Determining which servers host which websites.

3) Determining which virtual IPs load balance to which internal webserver hosts.

4) Determining which outsourced entities have ownership over different websites and IP ranges.

5) Getting a listing of your total internet facing IP ranges.

6) Determining which websites and IP ranges are hosted by your company, and which are 3rd party.

7) Determining which websites process any PCI or PII data.
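One small place to start on items 5 and 6 is sketched below: resolve each known website name and check whether it lands inside your own IP ranges or at a third party. The hostnames and the 203.0.113.0/24 range are placeholders; load-balanced VIPs from item 3 still have to be mapped to internal hosts via your load balancer configuration.

```python
# Minimal sketch: resolve known site names and flag which land in your own
# ranges versus somewhere that needs third-party review.
import socket
import ipaddress

OUR_RANGES = [ipaddress.ip_network("203.0.113.0/24")]   # placeholder range
SITES = ["www.example.com", "shop.example.com"]          # placeholder hostnames

for host in SITES:
    try:
        ip = ipaddress.ip_address(socket.gethostbyname(host))
    except socket.gaierror:
        print(f"{host}: does not resolve")
        continue
    where = "ours" if any(ip in net for net in OUR_RANGES) else "third party / review"
    print(f"{host} -> {ip} ({where})")
```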