By Michael Daniel, CTA President & CEO

For years, the cybersecurity community argued about how researchers should disclose newly discovered vulnerabilities in information technology products, and how companies should handle reports of vulnerabilities in their products. After considerable debate, and not a little rancor, the industry settled on a set of principles for that process, known as responsible or coordinated vulnerability disclosure. Not everyone accepts these principles, and some people knowingly violate them, but most companies, researchers, and institutions follow them.

As the disclosure process has become regularized, another problem has become more prominent: post-disclosure communications about the disclosed vulnerability. For example, how should a company communicate to customers about publicly disclosed vulnerabilities in a competitor’s product? We lack principles for such post-disclosure communications. The result is confusion and uncertainty about what behavior is acceptable and what is not.

Developing principles for post-disclosure communications will require answering multiple questions, including:

  • What defines public disclosure?
  • When and how should a company talk about vulnerabilities in a competitor’s product in a research blog?
  • What defines a threat research blog as opposed to a thought leadership or a marketing blog?
  • Should researchers give the “target” company a heads-up? If so, how far in advance, and to which stakeholders?
  • Should companies use a vulnerability in a competitor’s product as a marketing and sales opportunity?
  • How should the media cover a company that proactively discloses vulnerabilities in its own products?
  • Does regular vulnerability disclosure indicate that a company has poor coding practices or weak quality control?

Right now, we do not have widely accepted answers to these questions. For example, casual readers might assume that a higher number of public vulnerabilities indicates poorer security practices. However, the US Cybersecurity and Infrastructure Security Agency (CISA) has noted that a high number of reported vulnerabilities does not necessarily signal bad development practices. In fact, it can signal the opposite: frequent disclosures often reflect robust, ongoing product review and a commitment to finding vulnerabilities internally rather than relying on external researchers. [1]

As part of its ongoing maturation, the cybersecurity industry should develop ethics rules for discussing vulnerabilities in competitors’ products. Many professions, such as medicine, law, and architecture, have standards governing how practitioners handle negative information about competitors or their products. The cybersecurity industry should follow suit, for several reasons.

The first reason is straightforward: all IT products have vulnerabilities. There’s the old saw about there being two types of companies: those that know they have been hacked and those that don’t. In software, the equivalent would be that there are two types of software: the kind where you know the vulnerabilities and the kind where you don’t. The statistics back up this thinking.

In 2023, CVE.org published 28,961 vulnerability records – averaging slightly more than 79 per day – and as of this writing, known vulnerabilities total over 228,000.[2] Those vulnerabilities are distributed across more than 166,000 hardware and software products, including cybersecurity products.[3] In fact, almost every major cybersecurity company that produces its own software has a 2023 entry in the CVE database. In other words, no vendor should be casting stones, because someone else’s vulnerability problem today will become your vulnerability problem tomorrow.
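For readers who like to check the math, the per-day figure follows directly from the 2023 CVE.org total (a minimal arithmetic sketch; the 28,961 figure comes from the metrics page cited below):

```python
# Sanity-check the daily disclosure rate cited above.
# The 2023 record count comes from the CVE.org metrics page ([2] below).
records_2023 = 28_961          # CVE records published in 2023
per_day = records_2023 / 365   # 2023 was not a leap year

print(f"{per_day:.1f} CVE records per day")  # slightly more than 79
```

That works out to roughly 79.3 records per day, matching the "slightly more than 79" figure in the text.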

A second reason is that we want vendors to have a robust internal discovery process, so that vulnerabilities are found and fixed as soon as possible, preferably before anyone exploits them. Since an honest analysis will inevitably reveal flaws, a company actively looking for vulnerabilities will find more than one that isn’t looking. We should not penalize companies that conduct such robust searches; in fact, we ought to reward them, because doing so will make the entire ecosystem safer in the long run.

For example, almost 80% of all Fortinet vulnerabilities published in 2023 came from internal discovery, demonstrating a clear effort to build a culture of proactive, transparent, and responsible vulnerability disclosure. Unless a cybersecurity company can provide clear evidence that its development practices yield a meaningfully lower vulnerability rate, consumers of cybersecurity products should assume that all vendors have comparable rates – and that those who publish fewer vulnerabilities either don’t know their rate or aren’t telling you.

The cybersecurity industry should adopt a set of principles for how to handle communications about vulnerabilities disclosed by competitors. Like responsible disclosure principles, such principles should be worked out over time through debate and discussion, applying a lens of what most benefits end users and the digital ecosystem. To get that discussion started, I would suggest that some possible principles include:

  • Cybersecurity research teams can publish information about how adversaries are exploiting a vulnerability in a competitor’s product, but they should provide notice to the competitor about the research one to two days before publication. 
  • Cybersecurity companies should not exploit a vulnerability in a competitor’s product for commercial advantage in sales and marketing.
  • Those who engage with reporters covering cybersecurity should emphasize that all IT products have vulnerabilities and that finding them and remediating them proactively is a good thing, and not necessarily indicative of poorer quality compared to other companies. (In the future, when secure by design has become the industry standard, disclosing a larger number of vulnerabilities than a common baseline might well indicate a quality control problem. I look forward to living in that world, but we’re not there yet). 

Other principles will certainly emerge as we wrestle with this issue. However, the cybersecurity community should take this next step toward maturity and raise the bar, as it has proven capable of doing on other issues in the past. A clear set of principles for handling vulnerability communication and analysis would reduce the amount of “fear, uncertainty, and doubt” and enhance the professionalism of the industry. That outcome should be something we can all agree on. I look forward to discussing this topic again soon.


[1] CISA, Secure by Design, https://www.cisa.gov/sites/default/files/2023-10/SecureByDesign_1025_508c.pdf, p. 10.

[2] CVE.org, Metrics, https://www.cve.org/About/Metrics

[3] CVE Details (SecurityScorecard), https://www.cvedetails.com/product-list/firstchar-B/vendor_id-0/products.html
