From 150 million compromised MyFitnessPal accounts to Russian infiltration of the US power grid, our personal and national security continues to be at risk from bad actors with both financial and political motives.
Despite the US National Science Foundation (NSF) spending nearly $80 million annually on cybersecurity research, real-world results are few and far between. So why are those dollars going to waste?
According to former NSF Cybersecurity Program Director Anita Nikolich, one big reason is the almost complete failure of communication between academic cybersecurity researchers and the hacker community.
We sat down with Nikolich at the PEARC18 conference in Pittsburgh to find out more.
What makes you say that hackers and academics need to talk more?
In my 20-plus years in the industry, I would sit in talks at academic conferences, then go to a hacker con a month later and the talk would be almost the same. I wondered, “Couldn’t we advance security, make bigger discoveries, and make software more secure or datasets more robust if the two sides talked to each other?”
We have these two types of people, with different incentives, different motives, and different venues. If you could bring them together, you could create something that’s a lot bigger than what either could do independently.
So these communities are talking about the same things and even want to solve the same problems. Why aren’t they getting together?
On the academic side, they are simply not allowed to do offensive research. The research is supposed to be defensive. But in order to protect systems, sometimes you've got to break into those systems, and that is explicitly not permitted under their grants.
Many academics haven't been operators: they haven't been on the security side, haven't been security engineers, haven't had to defend a perimeter or run a firewall. So they don't understand that they need feedback on what the realistic threat model is.
On the hacker side, they know that if they want to make systems more secure, they've got to hack into things. The problem is that they sometimes don't follow a rigorous scientific method. If the hacker community were to apply just a little more rigor and make experiments more repeatable, maybe those results could be fed back into the academic side, and vice versa.
In many ways, it's a mismatch of incentives. For academics in cybersecurity, the incentives are to produce papers and to get grants. The incentives on the other side are a little different: it's either to make the world a better place or to find an exploit; there are varying reasons.
Because of this mismatch, the two don't have the right words to communicate, even though the research they do is often eerily similar. They just don't have the means (a venue, a publication, or even the right language) to communicate with each other and leverage each other's experiences.
Are there examples of problems that could have been avoided had there been more communication between the two communities?
One is definitely in the area of cars. If you look at the exploits released at big hacker conferences targeting certain vehicles, the software that powers them, and the vulnerabilities there, the car manufacturers did not want to hear it. They didn't really take the input. But vehicles now are just big computers with thousands of chips in them. That's one area where some input early on could have made a big difference.
Medical devices are another. There are examples of people being able to break into insulin pumps very easily. The medical device manufacturing community is very resistant to getting input from those who might break into the system. An insulin pump is worn by a person; you're not necessarily thinking that person could be a hacker. I think building that mentality into the manufacturing process could have avoided problems up front.
Would manufacturers pay more attention if academics were included in the conversation?
I think manufacturers would listen to both academics and hackers if an argument were made that took business and cost factors into account. Oftentimes in security, we say this is the right thing to do, but we don't consider the people-hours, cost, design process, budget, and so on.
That’s one of the problems we can solve. If you take into consideration that it’s not just a security problem, it’s a larger business and manufacturing problem, and if you paint it with that mindset, I think manufacturers would listen to both academia and the black hat community.
You’ve been taking this message to both communities. Best case scenario, what do you hope will happen?
I’ve seen a couple academics come to big hacker cons like DEF CON, and they’re very well received. I’d like to see more conferences that are more neutral. I think it would be really interesting to have a neutral venue where both sides can present whatever their version of the same work is.
I would also love to see the government fund micro-grants, or let some academics work with the black hat community. It doesn't have to be big money; it's about being able to say, "I will accept the risk of them doing this as part of my research." I think those are two easy things that could make big strides.