Update 1 - I’m clarifying the definition of Advanced Persistent Threats (APTs) and Financially Motivated Actors (FMAs). I combined the two groups in the previous version. The content and focus of the discussion primarily centers on FMAs. APTs and FMAs can overlap in terms of TTPs, capabilities, personnel, countries, etc. What distinguishes them is motivation. FMAs, as their name implies, are financially motivated. APTs can have a number of motivations including financial, political, etc.
Twitter is a complicated social networking platform. Then again, which platform isn’t? I’m fairly new to tweeting, but I’ve already found that it can be a tremendous resource when it comes to receiving up-to-date information on new techniques, upcoming talks, or nifty vulnerabilities. What I haven’t found it useful for is having discussions, especially those surrounding a controversial idea. This has manifested in my feed over the past month with the discussion around the release of Offensive Security Tools, primarily ignited by Andrew Thompson (@QW5kcmV3).
Offensive Security Tools
Before you read any more, please read Andrew’s blog post on the Unrestricted Release of Offensive Security Tools, as I’ll be using verbiage defined in that article. (Based on some of the discussions on Twitter where people have straw-manned Microsoft’s operating system as an Offensive Security Tool (OST), I have confidence that not many people have actually read the article.) Andrew’s primary argument is that the unrestricted release of OST is causing more harm than good to the world. He’s passionately championed this idea on Twitter, pointing to data such as that available from his employer (FireEye/Mandiant) about actual breaches. Reaction to his argument has ranged from agreement to vitriolic outrage. (I do concede that some of the negative feedback may be due to his tone, which has not always been the friendliest.) Opposition has accused him of gatekeeping, exaggerating the magnitude of the problem, or just being plain wrong. Others have argued that this is a solved problem and wonder why it’s even being discussed. Later in this article, I will attempt to address and refute some of those arguments.
The second article I’d like you to read is, on the surface, unrelated to the discussion at hand. It’s a recently published piece by James Hatch titled My semester with the snowflakes. In the article, James discusses some of the assumptions he made about the students of Yale University prior to his first semester at the institution and how these assumptions were summarily shattered. One of these assumptions concerned the term “safe space”. James had long regarded the term to mean a place to discuss ideas without having one’s feelings hurt. He learned the truth is quite the opposite. “What she [his fellow student] meant by ‘safe space’ was that she was happy to be in an environment where difficult subjects can be discussed openly, without the risk of disrespect or harsh judgement.” Throughout the rest of this post, I’d like to adopt the same definition.
Twitter is Not a Safe Space
At this point, I’m hoping you can see where I’m going with this section. The recent tone of the dialogue surrounding Andrew’s argument shows me that we are more likely moving further from a solution than towards one. Individuals on both sides of the argument have been quick to judge each other’s motives and credentials and have sometimes done so with a healthy dollop of disrespect. Twitter is not a “safe space”. Your most likely reaction to that sentence is “well, no shit”. Why, then, is so much of this discussion happening on Twitter? It is clearly not the place to discuss such a controversial topic. It is true that one can reach a global audience instantly; however, the platform encourages short, witty responses instead of deep dialogue. Where then, do we turn? Conferences are another common vehicle for presenting ideas. Unfortunately, they too do not lend themselves to dialogue. Conferences are primarily unidirectional communication: a speaker lectures an audience. There may be time for questions, but its brevity does not encourage in-depth dialogue. Additionally, conferences artificially restrict themselves to those with the resources (time, money, influence) to attend.
I am not aware of an existing forum for having such discussions. During my time in the Air Force, teams would sponsor “working groups” with attendees from a variety of squadrons with different perspectives to gather and discuss complicated and controversial ideas. Such a solution in the commercial space would require significant sponsorship and buy-in from employers, who would lose valuable resources for a period of time as their people worked on a problem affecting the community. But it’s an idea.
Another term I want to define as I’ve seen it come up quite a bit is “gatekeeping”. I’ll go with the Urban Dictionary definition which is “when someone takes it upon themselves to decide who does or does not have access or rights to a community or identity.” I’ll address gatekeeping later in this article, but I wanted to clarify the definition I’ll be using.
Arguments Against the Restriction of OST Release
I’ve seen a number of arguments against Andrew’s position. I’ll address each of these in its own section.
- The problem of restricting OST is too difficult.
- We have already solved this problem.
- The problem is blown out of proportion. There are two components to this.
- FMAs aren’t using public OSTs.
- If OST wasn’t released, FMAs would develop their own.
- Restricting the release of OST will gatekeep the offensive security community.
In addressing Andrew’s arguments, I’ve seen quite a few logical fallacies. I’ll address some examples I’ve seen here, so that others can be aware. For those who have used these arguments, I would encourage you to review the list and think to yourself if you’re committing one of these fallacies when engaging in debates.
Straw Man Fallacy
Windows is used by attackers - therefore you’re proposing we restrict the release of Operating Systems. This is obviously ludicrous, therefore, the argument to restrict OST is invalid.
Here, someone is misrepresenting Andrew’s argument by providing an easy to refute example. It is “superficially similar but ultimately not equal version of [Andrew’s] real stance”.
The Bandwagon Fallacy
The majority of the community thinks restricting OST is wrong. Therefore, your proposition is invalid.
Just because the majority agrees on something, doesn’t mean it’s correct.
The False Dilemma Fallacy
Either we restrict all tools that can be used by attackers (including Empire, BloodHound, and SysInternals) or we restrict none of them.
Again, this misrepresents Andrew’s argument. If you read his earlier article and his recent postings, there is a spectrum of tool classes that could be considered for restriction. It’s not an all-or-nothing argument.
Anecdotal Evidence Fallacy
I haven’t seen attackers use publicly released OSTs; therefore, it’s not a problem.
A personal anecdote does not provide a preponderance of evidence to refute an argument.
Texas Sharpshooter Fallacy
APT XYZ, Government ABC, and FMA 123 do not use OST. Therefore, your argument that these tools are a problem is invalid.
This argument is cherry picking data and ignores the other adversaries that are using OST as part of their activities.
Personal Incredulity Fallacy
I don’t understand the various aspects of OST release. It’s too complicated and can’t be correct.
One’s inability to understand an argument does not affect the validity of the claim.
“No True Scotsman” Fallacy
No real hacker would argue for the restriction of OST. Hackers break down systems - they don’t create them!
No true APT or FMA would use OSTs.
This is one of my favorite logical fallacies. It relies on universal generalizations to “inaccurately deflect counterexamples”.
The Problem of Restricting OST is too Difficult
This argument supposes that Andrew is correct with regards to the problem of releasing OSTs: attackers are using them, and doing so is negatively impacting the security community. Proponents contend, however, that setting up a system to control the release of OST is too difficult. “Pandora’s box has been opened.” Therefore, it is not worth pursuing solutions.
While I do agree with these individuals that the problem is difficult, I disagree with their conclusion that it is not worth pursuing a solution. I argue that, in information security, our responsibility is to reduce both the number of attacks and their cost as much as possible. If the data shows that OST is being used in a high number of high-impact attacks, we thus have a responsibility to reduce that use as much as possible.
“Just because something is hard doesn’t mean it’s impossible.” - Lysa TerKeurst
We Have Already Solved the Problem
I’ve seen this from a number of users replying to Andrew’s article. Their comments are something along the lines of “Imagine talking about the release of OST in 2019…” or “Are we really talking about this again?”. I have two responses.
- I would consider this gatekeeping. You’re creating an old guard and saying “we have previously decided the answer. You, not a member of the old guard, have no right to question our previous decisions.” Instead of being exclusionary, be open. Explain your decisions and why those decisions were made. Invite people to the conversation.
- Things change. While your decisions may have been correct when they were made, we cannot conclude that every decision will remain true for eternity. We must continuously question our assumptions and pressure test our decisions to see if they continue to stand up. It may be that attacker economics have changed and more attackers are relying on OST than before, making this conversation worth revisiting.
APTs and FMAs Aren’t Using OST
The data speaks for itself. Very few organizations have access to the volume and quality of data that Andrew has. While I’m trying to avoid the “appeal to authority” fallacy, I do believe Andrew when he states APT33 is using Empire, Metasploit, and Mimikatz. I addressed the logical fallacy regarding “no true APT” earlier. I do not believe this argument holds weight, but I would appreciate a fresh perspective if you disagree.
If OST Wasn’t Released, FMAs Would Just Develop Their Own
This is my favorite argument because I think it is the most valid. I’ll decompose it into the various sub-arguments. Let us hold that this argument is valid. From that, people who hold this position draw a number of conclusions.
- Since FMAs would just develop their own OSTs, I would rather they use publicly released ones so that I can develop signatures from tools I am aware of.
- Since FMAs would just develop their own OSTs, the harm I do in releasing an OST is minimal since the capability would exist anyways.
- (If you have another reason that I missed, please reach out and I’ll update the article.)
The first argument supposes that organizations as a whole will be able to improve their security. Don’t forget the anecdotal evidence fallacy. Just because your organization is equipped to quickly respond to and detect new threats does not mean the majority (or even 25%) of organizations have those capabilities. As GossiTheDog put it, one thing the security industry doesn’t yet universally understand is that a VAST majority of companies can barely manage basic security compliance. I agree with him based on my years of working with a number of organizations as well as from anecdotes I’ve heard from peers in the industry. That does not mean I’m correct. I believe the correct approach would be a survey of a large number of organizations across a variety of sizes and verticals to assess whether they have the capability to respond to new tools and signatures, as well as how quickly they deploy them. I foresee that very few will have this capability.
The second argument contends that these tools would exist anyways, so there is no harm in releasing them. This argument is partially correct: FMAs would likely develop their own OST if these tools were not publicly released. However, they would HAVE to invest in developing that OST themselves. As Andrew states in his article, adversaries, just like blue teams, have finite resources. If they invest in developing these capabilities, it inherently means that they’re not investing in other areas. The relationship between information security and risk management is inextricable. Our goal, as security professionals, should be to make it as difficult as possible for adversaries to achieve their goals. We can do that by not only making our defenses better but by reducing their capabilities.
Restricting the Release of OST Will Gatekeep the Offensive Security Community
This group argues that releasing OST helps with inclusion in the offensive security community. They argue, from what I can tell, that restricting these tools creates artificial barriers where those with the tools arbitrarily decide who does and does not have access to these capabilities.
What follows is probably my most controversial opinion. I agree with this argument, but I also do not believe that this is a bad thing. I believe that offensive security professionals do not currently exist. To quote Wikipedia, “Major milestones which may mark an occupation being identified as a profession include:
- an occupation becomes a full-time occupation
- the establishment of a training school
- the establishment of a university school
- the establishment of a local association
- the establishment of a national association of professional ethics
- the establishment of state licensing laws”
While one could argue that some certifications (such as CISSP) can define an information security professional, offensive security does not currently meet these definitions. The bar to declare yourself an offensive security professional, start a company, and begin selling services is very low.
When we look at some examples of professions, we may begin to notice a trend: medicine, accounting, law, architecture, etc. Mistakes are not tolerated in these professions. People could die or go to jail (or both). I believe that information security (to include offensive security) meets this same bar. Incorrect and misinformed judgements and decisions can result in disastrous effects. I am not proposing that “the haves” wall themselves off in an ivory castle from the “have nots”. I am saying that gatekeeping in and of itself is not bad, especially when it is done to protect the quality of work so that consumers have confidence in the products and services they are procuring. An organization purchasing the services of an MSSP or Red Team should have confidence that the company and its employees have adequate experience and capabilities to provide the services they are describing. There also does not need to be just one gate. Just as in medicine there are a variety of paths, certifications, schools, and specialties, so too could such a system exist in information security. Just as some medicines are over-the-counter and some are “gatekept” by prescribing physicians, so too could certain tools be restricted to those who have shown the technical and ethical capacity to responsibly exercise them.
If you agree or disagree, I’d really appreciate thoughts and discussions on this point.
You Didn’t Address My Argument
Please reach out and I’ll update my article accordingly.
We can’t move towards a solution for a problem until we agree that there’s a problem. Andrew clearly has data that shows that tools such as PowerShell Empire, Responder, etc. are being used in real-world breaches and costing companies time and money. If you rush to conclude that that’s an acceptable cost, then I encourage you to reconsider - not because you’re wrong, but because you refuse to even have your beliefs challenged. We are engineers, scientists, operators, analysts, managers, leaders, and executives. But we are not zealots, and no idea should be beyond question.
Finally, we are on the same side. Offensive security or defensive security - if you consider yourself a white hat, we have the same objective: making the world a more secure place. Please keep that in mind when discussing tough ideas with your peers. Mutual respect goes a long way.