I did a short presentation for a group at the Northwest Arkansas Community College on “Recon” as it relates to penetration testing. I’ve attached the LibreOffice presentation here: Recon_Presentation_NWACC
A short presentation on Transport Layer Security (SSL/TLS) usage and how enterprises must adapt as the protocol changes.
On Feb 4th, 2014, I gave a high level presentation to our Northwest Arkansas ISSA chapter regarding Payment Card Security. Unfortunately, the roads were icy that day, so there were only a few of us in attendance.
I felt like this was a presentation that both technical and non-technical attendees would find interesting due to all of the credit card security topics that had been in the news over the holidays.
Below is a LibreOffice Impress document with the contents of the presentation.
There are several factors to consider when determining the times to run vulnerability scans.
Is this the first time you have run this scan?
Is the scan going to run against an ecommerce site?
Do you have standing approval from your operational areas to run a scan?
Do you have security monitoring and logging systems that will alert on the scanning?
Contact the administrators of your websites to determine the best times to run a vulnerability scan.
Most site admins will know their peak periods of website activity; it is best to avoid those periods for routine scanning simply because scans increase the load on the site.
Scans can often cause increased error logging and alerting, so you need to be extra diligent and careful the first time you run them. Assume that you may break things the first time.
- Talk to the stakeholders for the systems you are scanning to determine the best time to scan.
- Notify the stakeholders and any support areas that may be involved if there are issues or alerts generated by the scan.
- Follow your normal change control management procedures and treat initial scans like a system change.
One piece of information your stakeholders will need is the source IP addresses your scans will originate from. They may want to whitelist or ignore those IP addresses in their monitoring.
If you are able to perform vulnerability scanning on your network and e-commerce sites without anybody noticing, then you likely have a gap in your ability to detect malicious scanning also. 🙂
The Northwest Arkansas chapter of the Information Systems Security Association (ISSA) had a meeting on April 2, 2013.
I gave a short presentation on vulnerability scanning, what to do with vulnerability scanning results, and some tips on implementing a vulnerability scanning program.
The slides are linked below.
How To Understand a Vulnerability Scan Report – Part 2 – The Network Port
Part 2 of a multiple part series explaining vulnerability scan data and nuances of how that data should be used. Part 1 was about IP addresses.
- Network Port
- This is the network port number (1 through 65535) where the vulnerability scanner found a vulnerability.
- Some vulnerability scan reports do not include the port number, even though it is a critical piece of information, as will be discussed below.
- The teams that own the systems or applications with vulnerabilities will often be unfamiliar with network ports until they do some further research on their application or system.
- In part 1 of this series it was discussed that a system can have more than one IP address. The level of complexity increases with ports, because each IP address can have up to 65,535 TCP ports and up to 65,535 UDP ports.
- It is unusual for a single IP address to have more than 100 or so ports open, so many vulnerability scanners will treat a system that appears to respond on a very large number of ports as a firewall configured to answer on all ports as a defensive measure.
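To make the idea of "open ports" concrete, here is a minimal sketch of a TCP connect check using only Python's standard `socket` module. The function name and target ports are my own illustration, not from any particular scanner, and you should only run checks like this against hosts you are authorized to scan.

```python
import socket

def is_tcp_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # Placeholder port list; a real scanner walks all 65,535 TCP ports.
    for port in (22, 80, 443):
        state = "open" if is_tcp_port_open("127.0.0.1", port) else "closed"
        print(port, state)
```

Real vulnerability scanners add service fingerprinting, UDP probing, and rate limiting on top of this basic connect test.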
What does a port number tell me?
- A listening port tells you that some piece of software is accepting network socket connections on that specific port. The vulnerability is in the software using that port.
- The port number should be your starting point for determining which service or application is listening for incoming socket connections; that service or application is typically what has the vulnerability.
- Many services run on common, well-known ports that are easy to identify.
- Once you know what the program or service is, your next step is often to contact the person or team responsible for managing that service or application.
- One nice thing most vulnerability scanners do is include the text response they received from the port when they initially fingerprinted it.
- This text is valuable because it often contains the service’s response header or banner, which usually has enough information for you to identify the service even if you had no previous information about that port.
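The fingerprinting described above can be sketched with a simple banner grab. This is an illustrative example (function name mine), and it only works cleanly for protocols that speak first, such as SSH, SMTP, and FTP; an HTTP server waits for a request, so an empty result does not mean no service is there.

```python
import socket

def grab_banner(host, port, timeout=2.0, nbytes=1024):
    """Connect to host:port and return whatever text the service sends first."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.settimeout(timeout)
        try:
            return s.recv(nbytes).decode("ascii", errors="replace")
        except socket.timeout:
            # Service exists but waits for the client to speak (e.g. HTTP)
            return ""
```

An SSH daemon, for example, will typically answer with a line like `SSH-2.0-OpenSSH_...`, which immediately tells you what software owns the port.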
Okay, that’s nice, but if I see a webserver vulnerability, I already know to call the webserver folks.
- It’s not quite that easy. Run a port scan (all ports) on a heavily used server and you might be surprised how many http/https protocol servers are running.
- Even dedicated webservers will often have many different instances of a webserver running, each one on different ports. Being able to tell the owning team the specific port that had the vulnerability finding is critical to being able to determine the source of the problem.
- If the vulnerability is application related, knowing the port is likely how you will determine the application team that needs to remediate the vulnerability finding. The team that manages the webserver may know which application instance is running on which port, and can direct you to the proper application team.
Load Balancing can throw you off.
- Network Load Balancers can take traffic directed at one port on an IP address, and redirect that traffic to different ports on different IP addresses.
- This can obviously cause some issues for you since you will see the port on the Virtual IP address on the load balancer as having the vulnerability.
- This scenario is more common when you are scanning servers from outside a DMZ, from the Internet, or in a hosting or cloud environment.
- It is critical for you to have the network load balancer configuration and be able to trace which IP addresses and ports are actually responding with vulnerabilities. Without this information you are stuck at the Virtual IP address without being able to go any further to find the true IP and port that has the vulnerability.
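The tracing step above is conceptually just a table lookup once you have the load balancer configuration. The mapping format below is entirely hypothetical (real F5, HAProxy, or cloud load balancer configs look different), but it shows the translation you need from a scan finding on a VIP:port to the real servers behind it.

```python
# Hypothetical mapping: (virtual IP, port) -> backend (real IP, port) pool.
# In practice you would extract this from your load balancer's config.
VIP_POOLS = {
    ("203.0.113.10", 443): [("10.0.1.21", 8443), ("10.0.1.22", 8443)],
    ("203.0.113.10", 80):  [("10.0.1.21", 8080)],
}

def backends_for_finding(vip, port):
    """Translate a vulnerability finding on a VIP:port to the real backends."""
    return VIP_POOLS.get((vip, port), [])
```

A finding reported against `203.0.113.10:443` actually needs to be chased down on the two backend servers listening on port 8443.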
CVE ID Syntax Change – My Feedback
Today (Jan 22, 2013), I saw that Mitre had released a public call for feedback regarding proposed CVE identifier syntax changes.
I took a few minutes after reviewing their proposed choices and sent a response. If you work heavily in vulnerability management or information security I would recommend you review the proposed changes at the link above and give your feedback.
The text of my feedback on the proposed changes is below.
I think Option B is the best option.
Reasons for Option B
– Option B provides the clearest path forward for programs that use or parse CVE numbers because it…
- Allows backward compatibility (software shops can continue using current parsing logic and display formats); code only has to change if/when needed.
- Allows companies to update their CVE data field parsing algorithms to a best practice of taking any numbers found in the digits field without requiring them to expect the padded zero formats. Expecting and forcing a new format forces changes throughout any existing code.
- Allows a simpler algorithm for parsing new or old CVE format data. If you force padded zeros, then programs will have to base their parsing logic for the number field on the year field, or on the number of digits in the field. If you choose Option B, the logic can be the same for the old and new formats (just accept whatever digits are there) and not care about the number of digits initially. This might allow easier adoption by code that currently parses CVE data. (Option C would require even more changes.)
– Yes, option B does not force the hand of every software developer to immediately update code and logic for your changes, which might actually be your saving grace. This puts the responsibility on the software developers and companies to comply with the format changes, but does not force a change on them that breaks functionality and their product otherwise.
This takes the pressure off Mitre that will come from “breaking” money-making products for companies, and puts it back on the companies to make the changes.
Why Not A?
- Depending on a fixed number of digits (6) with leading zeros forces programs to immediately update algorithms and display fields before they are compatible. One year is not much time for applications heavily integrated into enterprises; I doubt you will get good adoption of the new format in the requested one-year timeframe regardless.
Why Not C?
- Same reason as “Why Not A.” And it adds yet another field to be parsed that provides very little effective value.
Why Not B?
– The reasons posted on Openwall as shortcomings of Option B are valid, except that I don’t really buy the “it’s not as forward compatible” logic. It could actually be the most forward-compatible option if your guideline is that you must accept any number of digits given.
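The “accept whatever digits are there” parsing approach argued for above can be sketched in a few lines. The regex and function name are my own illustration, not part of Mitre’s proposal:

```python
import re

# Accepts both the old fixed-4-digit sequence numbers and longer ones,
# so one parser handles old and new CVE IDs without caring about padding.
CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def parse_cve(cve_id):
    """Return (year, sequence_number) or None if the ID doesn't parse."""
    m = CVE_RE.match(cve_id)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2))
```

Note that `parse_cve("CVE-1999-0067")` and a hypothetical long ID like `CVE-2014-1234567` go through the same code path, which is exactly the adoption advantage claimed for Option B.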
Today I gave a presentation to our local Northwest Arkansas ISSA chapter on the topic of malware analysis tools and handed out some of Lenny Zeltser’s cheat sheets.
I’ve attached the LibreOffice Presentation file to this post to allow easy access. Malware Analysis Tools Presentation
The Northwest Arkansas ISSA chapter typically meets the first Tuesday of each month at Dink’s BBQ in Bentonville, Arkansas.
If you are not familiar with the subject of “Root Cause” or Root Cause Analysis, I would encourage you to read about it on Wikipedia before reading the rest of this post. In short, Root Cause Analysis attempts to determine the earliest causal factors of an event or issue rather than simply treating its symptoms.
Nothing New Here
I worked in IT Infrastructure and studied business for years before I started working in IT Security. I’ve found that most operational management principles apply to Information technology and Information security processes in nearly the same way as they apply to manufacturing or other business processes.
Root Cause Analysis is yet another operations management topic that directly applies to information security vulnerability management.
Some use cases for Root Cause Analysis in Information Security Vulnerability Management.
(Root cause analysis for these 4 cases below will be broken out into another post and linked here when complete)
- Why are system vulnerabilities there?
- Why do system vulnerabilities continue to show up?
- Why are coding weaknesses in my code?
- Why do coding vulnerabilities continue to show up in our code?
Isn’t this Common Sense?
No. See below for why…
Treating or Correcting Root Cause is Harder than Treating Symptoms
Treating symptoms is nearly always quicker and easier than resolving root cause.
– Treating symptoms gives immediate short-term relief, and that quick fix creates a very direct emotional response. If you are in pain and a doctor gives you a shot to numb it, that feels great, right? But what if they never fix the reason you are having the pain in the first place? You could keep needing those shots every day. After a while you will probably decide that putting in the effort to fix the “root cause” of your pain is the better option.
Resolving the root cause to an issue typically doesn’t have that immediate emotional feeling of relief because it takes longer. It takes planning, discipline, and often a strategic patience to influence others to help resolve the root cause of an issue.
I think that treating symptoms is the more “natural” reactive response to a problem. The more proactive and mature response to an issue is to take the time to determine and analyze root cause.
– Reboot it? A great example of this issue is common in IT infrastructure or development operations. An application or systems team has an application or system that starts behaving strangely or stops working. The common response is to just reboot the system or restart the process. This may resolve the problem that time, but the problem is likely to recur, or it may be an early indicator of a larger problem. If you take the time to gather documentation (memory dumps, logs, etc.) before rebooting the system or restarting the process, you will be able to research the root cause. This is more difficult initially, because collecting memory dumps and logs takes longer than simply restarting something. But if you never address the root cause, these symptoms will keep stacking up and drive up your support and maintenance costs as well as impact availability.
– Patches – Is it easier to install a bunch of patches on some systems, or to implement a solid process and resources to ensure that patches are always correctly installed and validated on systems? Installing a patch is treating a symptom. Implementing an effective patch management process is treating root cause.
Some may argue that the root cause of patching starts all the way back at the operating system vendor’s development processes. That is true; however, you have to realize that some root causes are out of your control. In this case, you can effectively treat the root cause of why the patches are missing, but not why they are needed in the first place.
– Social Issues – Social and political issues most often have their symptoms treated, because resolving the root cause is typically assumed to require behavior changes or other changes that are considered too difficult or unpopular to implement.
Should my focus always be on fixing Root Cause?
Now we are getting into opinion, but I think that the root cause should be identified and options for resolving it analyzed. Choosing whether to address the root cause is a business prioritization decision, just like any other project that should be evaluated. However, the choice not to address root cause needs to be documented and known. Why? Because the symptoms of that root cause will continue, and they should be expected to continue.
I think that taking the parallel approach of treating some symptoms while working on remediating root cause is a reasonable approach. Unfortunately, since the temptation to focus on treating symptoms is so strong, it often takes a very determined person to ensure that root cause is addressed.
- If you are getting BADSIG errors when updating your BackTrack install (like below):
Reading package lists... Done
W: GPG error: http://32.repository.backtrack-linux.org revolution Release: The following signatures were invalid: BADSIG AB6DA34B475A6B7F BackTrack Repository Admin <firstname.lastname@example.org>
W: GPG error: http://all.repository.backtrack-linux.org revolution Release: The following signatures were invalid: BADSIG AB6DA34B475A6B7F BackTrack Repository Admin <email@example.com>
W: GPG error: http://updates.repository.backtrack-linux.org revolution Release: The following signatures were invalid: BADSIG AB6DA34B475A6B7F BackTrack Repository Admin <firstname.lastname@example.org>
- You can run the following commands to clean things up and allow your updates to start working again. I found this on someone else’s website regarding Ubuntu updates, so it doesn’t have anything directly to do with BackTrack as far as I can tell.
- I created a little script called fixsig.sh to resolve this issue, as it seems to happen to me a lot, probably because of something I’m doing.
root@bt:~# cat fixsig.sh
sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update
root@bt:~#
Hope this helps someone!