Several times in the news lately I’ve heard about the bird flu research controversy. Each time I hear about it, I want to compare it to the recent controversy around SCADA IT security research being disclosed directly to security testing tool companies. I don’t think it is a stretch to compare these two topics. While there are some obvious differences, many of the arguments are similar.
To Publish or Not to Publish?
One of the main concerns around the bird flu research is whether the results and methodology of the research should be fully published.
The premise used to justify publishing vulnerabilities in the IT security industry is that exposing vulnerabilities, and making them easier to exploit, forces companies to patch them and build more secure software and systems over the long run. I believe this premise is true. Most companies would not enhance the security of their code or systems unless forced to. The cost of developing software is mainly in the initial development; anything afterwards is maintenance, which is typically a cost center and not a revenue generator. So any company will try to keep its maintenance costs (such as creating code patches) as low as possible.
This same premise has been used to defend publishing the bird flu research. A nation that wanted to do its own malicious bird flu research could do so, so we should understand and be prepared for that scenario. Just as we should improve software and systems, we should improve our ability to identify and respond to a malicious strain of bird flu.
Does the research help or hurt?
The concern around the bird flu research is that malicious actors could use the information to create some kind of biological weapon. The very same type of concern exists around IT security research. If you show a bad guy how to exploit a vulnerability on a system, they are more likely to use it, or at the very least the time and effort needed to create their own exploit is greatly reduced.
Whenever you reduce the effort (time, cost, risk) of exploiting a vulnerability, or show how to develop a stronger virus, you initially increase the risk of that information being used for malicious purposes.
So does it help or hurt? Initially it hurts. Publishing a vulnerability forces companies to dedicate resources to analyzing, developing, and deploying patches. After the initial pain, it helps those companies by making their code and systems more secure. The long-term effect that *should* happen is that development companies change their processes to develop code securely, consumer companies ensure they have enough resources to keep systems patched, and the whole cycle gradually becomes a less hectic, normal maintenance routine.
For bird flu research, publication can help ensure that public health authorities prepare and have plans in place for dangerous virus outbreaks.
What is the real question that needs to be answered?
Is the initial increase in risk caused by releasing research information worth the mid-term and long-term reward of improving the products or being more prepared for a lethal virus outbreak?
Unfortunately, I wasn’t able to find any real data showing whether security research disclosure truly improves security over the long run. I think it does, and I think the IT security industry believes it does, but this seems to be largely “common wisdom” rather than anything based on hard facts. (Please correct me if I am wrong.)
Conflict of Interest?
There are some obvious conflicts of interest that create “grey” areas in IT security research. When a security researcher works closely with vulnerability testing companies to incorporate working exploits for vulnerabilities they have found, instead of working with the companies that publish the affected software, it makes me question their motives. Also, if my company sells security testing software, then having a check for a vulnerability that no other company has, and/or that has no patch, is a competitive advantage. Is the security company primarily concerned with the security of its customers, or with its sales?
For virus or disease research, I don’t see anything like this happening that I know of. I suppose a researcher could work directly with a pharmaceutical company, but the whole concept doesn’t map very well onto disease research and pharma development.
How to Resolve *Some* of these Questions
Security testers and security companies that deal with exploits and vulnerabilities should be very clear about what responsible disclosure guidelines, code of ethics, or methodology they follow when disclosing vulnerabilities (if any). Customers of security services or security testing software should ensure they purchase from companies or researchers whose practices align with their own code of ethics.
So what about bird flu research? Why can’t the National Academy of Sciences or the World Health Organization provide guidance on research that increases the virulence of a virus or disease? Research labs and universities should clearly define what guidance or methodology they follow for this type of research, and disclosing it should be a condition when applying for funding and grants.