
Hacked by a machine

Speed read
  • Machine learning will play a pivotal role in the future of cybersecurity
  • Automated tools are a boon to both hackers and security professionals
  • AI exposes unique risks but also provides unique benefits

When your computer’s been hacked, it can feel intensely personal. As your heart sinks and your blood pressure rises, you might imagine the thief sitting at a keyboard in a darkened room far away, laughing as they scoop up your most important data. 

<strong>Not the adversary you are looking for.</strong> Popular imagination depicts a hacker as a thief behind a keyboard in a darkened room. But these days, thanks to the rise of AI, the hacker making off with your data may not even be human.

There may not be a person behind the attack at all. An adversary can save a lot of time by using machine learning, says Ian Molloy, an AI security researcher with the Information Security Group at IBM’s Thomas J. Watson Research Center.

“For phishing and spear-phishing, there are different ways of pulling in information intelligence about a given person,” says Molloy. “Recent works show that I can crawl Twitter and LinkedIn profiles with machine learning and then use that information to craft emails that would try to convince you to provide information that you wouldn't necessarily give up otherwise.”

Alarming as this is, it’s important to note that hackers aren’t the only ones with access to artificial intelligence (AI) tools. Security professionals also increasingly rely on machine learning to streamline defensive efforts.

This ‘combat by AI’ was the focal point of Molloy’s speech at June’s ISC High Performance 2019 conference in Frankfurt, Germany. It’s a balance we’ll need to reckon with if we truly desire security and privacy.

Thinking like a hacker 

The practice of cybersecurity revolves around thinking like a hacker to anticipate their moves. Combating AI-assisted hacking is no different.

<strong>Small change, devastating impact.</strong> Even small alterations to data can force an AI model to misclassify data—imagine the impact that could have on a self-driving car’s ability to identify pedestrians. Courtesy Statistical Visual Computing Lab/UC San Diego.

“Adversaries choose machine learning tools for many of the same reasons we do,” says Molloy. “When we think about how attackers are going to start using machine learning, they're going to use it to make themselves faster, more efficient, and stealthier.”

Along with using these tools to boost existing attacks, Molloy warns that a hacker might also target a machine learning system itself. One such assault is known as an evasion attack: the adversary makes a change to a piece of data so small that a person can’t notice it, yet this tiny alteration can force an AI model to misclassify the input.

Imagine how this could impact a self-driving car’s ability to identify a pedestrian in its path.
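To make the idea concrete, here is a minimal, hypothetical sketch of an evasion attack in plain Python. The toy linear classifier, its weights, and the step size are all invented for illustration; real attacks of this kind (such as the fast gradient sign method) target deep networks, but the principle is the same: a tiny, targeted nudge to each feature flips the predicted label.

```python
# Toy linear classifier: predict 1 if w . x + b > 0, else 0.
# All numbers here are illustrative, not from any real system.
w = [1.0, -1.0]
b = 0.0

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(score > 0)

x = [0.6, 0.5]                  # original input, classified as 1

# FGSM-style perturbation: nudge each feature against the sign of
# its weight, lowering the score while barely changing the input.
eps = 0.1
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x), predict(x_adv))   # 1 0 -> the label flips
```

A change of 0.1 per feature is invisible to a human inspecting the data, but it is enough to push the score across the decision boundary.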

Another way a hacker can attack a machine learning model is with a method called poisoning. In this scenario, the adversary gains access to the data used to train an AI system and uses it to reduce the accuracy or performance of the trained model—with potentially devastating results. 

“To begin, they can tweak the data such that the model will always have poor performance,” says Molloy. “The second thing they can do is they actually insert a malicious Trojan or a backdoor into the model.”
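The first kind of poisoning can be sketched with a toy example (the nearest-centroid model and all numbers are invented for illustration): flipping even a single training label shifts the learned decision boundary enough to misclassify an input the clean model handled easily.

```python
# Training data as (feature, label) pairs; one-dimensional for clarity.
clean = [(-2.0, 0), (-1.0, 0), (1.0, 1), (2.0, 1)]

def centroids(data):
    # "Train" a nearest-centroid classifier: average each class's points.
    by_label = {0: [], 1: []}
    for x, y in data:
        by_label[y].append(x)
    return {y: sum(xs) / len(xs) for y, xs in by_label.items()}

def predict(c, x):
    # Assign the class whose centroid is closer.
    return 0 if abs(x - c[0]) < abs(x - c[1]) else 1

c_clean = centroids(clean)

# Poisoning: the attacker flips the label of one training point.
poisoned = [(-2.0, 1) if (x, y) == (-2.0, 0) else (x, y) for x, y in clean]
c_bad = centroids(poisoned)

test_x = -0.2  # clearly on the class-0 side of the clean boundary
print(predict(c_clean, test_x), predict(c_bad, test_x))  # 0 1
```

With the clean data, the boundary sits at 0 and the test point is classified correctly; after one flipped label, the boundary shifts and the same point is misclassified, which is exactly the quiet degradation Molloy describes.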

Clearly, machine learning is a supremely effective tool in a cybercriminal’s arsenal. However, focusing solely on the abuse of this technology would be a disservice to the people who intend to use it for good.

Swinging the double-edged sword

Despite these relatively new and unique forms of hacking, it’s important not to despair. Security professionals are working hard to use machine learning to its full potential while also trying to understand how to combat it.

<strong>Combat by AI.</strong> Ian Molloy of IBM’s Thomas J. Watson Research Center’s Information Security Group presented the Machine Learning Day keynote at ISC High Performance in Frankfurt, Germany, in June 2019.

A good example of this is IBM’s Adversarial Robustness Toolbox. This open-source software library is designed to defend neural networks, with a focus on countering poisoning and protecting against backdoor attacks such as Trojans. It includes an interactive page that teaches people how different attacks and defenses can alter machine learning outputs.

Additionally, it would be foolish to overlook the value machine learning provides to security professionals simply by flagging out-of-the-ordinary activity, such as a sudden flood of login attempts. As Molloy explains, these tools are built around increasing efficiency.
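This kind of alerting can be sketched very simply (the threshold rule and the numbers below are illustrative, not any real product's logic): compare each hour's login attempts against a statistical baseline and flag the outliers.

```python
import statistics

# Baseline of normal hourly login attempts (illustrative data).
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

# Flag anything more than three standard deviations above normal.
threshold = mean + 3 * stdev

recent = {"09:00": 6, "10:00": 5, "11:00": 74}  # 11:00 looks like brute force
alerts = [hour for hour, count in recent.items() if count > threshold]
print(alerts)  # ['11:00']
```

A real system would use far richer features and models, but the shape is the same: the machine surfaces the anomaly, and the analyst applies domain knowledge to decide what it means.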

“Where machine learning really comes into play is when you have an alert,” says Molloy. “Normally, an analyst team would spend minutes to potentially hours investigating, looking at IP addresses and doing queries.”

Molloy continues, “[The tool] provides all the information you need to know, contextualized around that specific threat and alert. You can then give it to the analysts and they can apply their domain knowledge to explore further.”

Humans have historically relied on machines to make our physical work easier. Now, we are developing the capability to allow them to make our mental work easier too.

Technological advances like machine learning can often feel like a roller coaster. One minute you’re hearing about how AI security systems are protecting your financial information, the next you learn about a hacker using these tools for personal gain.

Through the ups and downs, it’s important to remember that a technology can’t be defined by how people use or misuse it. We’ll need to keep an open mind if we want to improve on the best of machine learning while protecting against the worst.


Copyright © 2021 Science Node ™
