
AI No Silver Bullet for Cloud Security, But Here’s How It Can Help


Key Takeaways

  • Don’t believe the hype. Artificial intelligence is promoted as the Holy Grail for securing all the data floating around in the cloud, but it won’t solve every single problem.
  • AI is really good at some things and not so great at others. Sifting through oodles of data to establish user patterns and compare current behavior against them? It’s very good at that. Responding to novel threats? That capability might still be a few years off.
  • Many of the cloud security companies that claim superiority due to their AI skills are actually using machine learning, which is an older technology that is not “intelligent” and is actually a subset of full-bore AI.
  • The best use for AI in today’s cloud environment might be for organizations that take in gigabytes of data daily. For companies with lower data volumes, other solutions might be preferable.

 

If you believe all that you read in the (industry) press, you’re probably of the opinion that AI can do basically anything, including making you toast in the morning. As Cisco recently put it on their blog, as network engineers we often find ourselves "in conversations about very futuristic, and somewhat unrealistic AI-enabled scenarios. It can be quite entertaining – but we also need to remember that today’s AI technology is not a panacea for every networking ailment."

I’d go further. The problem with the hype around AI in cloud security is not limited to some CEOs wasting money on it. The problem is that if cybersecurity engineers start to believe the hype, cloud security itself will be undermined.

In this article, I’ll look at the real role of AI in cloud security – the hype, the reality, and how we can resolve the gap between them.

The Hype

The promise of AI-driven cloud security tools is clear enough. In recent years, traffic across all networks and media has increased exponentially; by 2019, an estimated 293.6 billion emails were being sent every day. At this point, it’s difficult to see how anything other than a hyper-intelligent AI could identify incoming threats fast enough for them to be dealt with.

The fundamental idea of yoking AIs to cloud security platforms is that they can analyze network traffic and identify anomalous (and potentially malicious) activity way before a human can, and with much greater accuracy. This has become a ubiquitous idea in recent years, to the extent that such systems appear in our own introduction to cloud security architecture.

It’s also an idea that has taken off among executives. In a report on AI and cybersecurity last year, Capgemini found that 69% of the enterprise executives surveyed felt AI is essential for responding to cyberthreats. Similarly, in Cisco’s 2020 Global Networking Trends Report, more than 50% of leaders identified AI as a priority investment needed to deliver their ideal network.

This is great news for AI developers, of course. The problem is that, as far as I'm aware, there is no such thing as an AI-driven cloud security system.

The Promise

At this point, let me be clear. I’m not saying that there is not a place for AI in cloud security. In fact, in the last few years machine learning techniques have seen great success in many cybersecurity applications, including cloud security. This rapid and widespread adoption has been driven by perceptions – sometimes misguided – that AI systems are able to provide a number of key tools for cybersecurity engineers.

It’s worth looking at these capabilities and their level of development before deciding to use AI systems on your own cloud infrastructure. In many cases, the reality doesn’t live up to the hype. So let’s spend some time looking at what, precisely, AI can offer in terms of cloud security.

Big Data Processing


One of the most promising – and certainly most developed – uses of AI in cybersecurity is to use AI systems to trawl through historical data in order to identify attack patterns. Some AI algorithms are very effective at this task, and can inform otherwise oblivious cybersecurity teams that they have, in fact, been hacked many times.

The primary value of this kind of system is seen when it comes to managing employee access to systems and files. AI systems are extremely good at tracking what individual users are doing and at comparing this with what they do typically. This allows administrators (or automated security systems, explored below) to easily identify unusual activity and block users’ access to files or systems before any real damage is done.
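To make that concrete, here’s a minimal sketch of the underlying idea, using scikit-learn’s IsolationForest to flag a user-day that deviates from historical behavior. The features, data, and threshold are entirely illustrative, not drawn from any particular product:

```python
# Sketch: flagging unusual user activity with an unsupervised model.
# Features and data are illustrative; real systems use far richer signals.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row summarizes one user-day: [logins, files_accessed, off_hours_actions]
baseline = np.array([
    [3, 40, 0], [2, 35, 1], [4, 50, 0], [3, 45, 2],
    [2, 30, 0], [3, 42, 1], [4, 48, 0], [2, 38, 1],
])

model = IsolationForest(contamination=0.05, random_state=42)
model.fit(baseline)

# A sudden burst of file access at odd hours stands out from the baseline.
today = np.array([[3, 400, 25]])
if model.predict(today)[0] == -1:  # -1 means "outlier"
    print("Anomalous activity - escalate to an administrator for review")
```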

This kind of functionality is now widespread in many industries. Some cloud providers even ship it with their basic cloud storage systems. In many cases, in fact, an organization is not even aware that an AI is collecting data on the way they use their cloud service in order to scan this for unusual activity.
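Amazon GuardDuty is one example of this kind of provider-side detection: it applies ML-backed anomaly detection to account and network activity, and can be switched on with a single API call. A minimal sketch using boto3 (the region and publishing frequency are illustrative choices):

```python
# Sketch: enabling AWS GuardDuty, one example of a provider-shipped,
# ML-backed threat detection service. Settings here are illustrative.
import boto3

guardduty = boto3.client("guardduty", region_name="us-east-1")

# Creates a detector that continuously analyzes account activity for anomalies.
response = guardduty.create_detector(
    Enable=True,
    FindingPublishingFrequency="SIX_HOURS",
)
print(f"GuardDuty detector created: {response['DetectorId']}")
```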

This type of tool, however, also represents the limit of what AI can do, in terms of cloud security, at the moment. Most organizations lack the tools to use AI systems in a more complex way than this. Even now, most organizations will rely on their cloud provider to put this kind of AI-driven system in place, and know little of the way that it works.

Event Prediction

The next step in deploying AIs for cloud security – or so goes the received wisdom – is to take the kind of pattern-recognition capability that AIs already have and extend this into a truly "intelligent" system that is able to make predictions about the safety of a system.

At the moment, these systems are somewhat underdeveloped, but the ways in which they might work can be broken into two types of predictive analysis.

The first type – which is less complex, and therefore likely to arrive much sooner than the second – is for an AI system to be given information about what kind of attacks are being seen in the wild at a particular time, and what kind of organizations are being targeted. Then, in the same way that neural networks are used to produce car insurance quotes based on huge, detailed datasets, an AI can make an accurate risk assessment for a particular company.
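As a sketch of how that first type might look, here is a toy risk model in scikit-learn. Every feature and training example is hypothetical, and a real assessment would draw on vastly larger threat-intelligence datasets:

```python
# Sketch: scoring an organization's breach risk from threat-intelligence
# features, analogous to the actuarial models behind insurance quotes.
# All features and training data here are hypothetical.
from sklearn.linear_model import LogisticRegression

# Each row: [sector_attack_rate, internet_facing_services, unpatched_cves]
X_train = [
    [0.8, 12, 30], [0.7, 9, 22], [0.2, 2, 1],
    [0.1, 1, 0], [0.9, 15, 40], [0.3, 3, 5],
]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = suffered a breach that year, 0 = did not

model = LogisticRegression().fit(X_train, y_train)

# Estimate breach probability for a new organization's profile.
profile = [[0.6, 8, 18]]
print(f"Estimated breach risk: {model.predict_proba(profile)[0][1]:.0%}")
```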

The second way in which predictive analysis could be used would be far more powerful, but is still some years off. At the moment, lots of organizations spend plenty of money on white hat analysis, where "friendly" hackers attempt to break into a system using any means possible. The promise of truly intelligent AI is that this kind of analysis could be automated, and therefore made much quicker and more efficient.

Of course, this second type of deployment raises a scary prospect – that in the attempt to create "friendly," white-hat style AI hackers, we will actually be creating a powerful set of hacking tools. This problem is inherent to attempts to develop AI security measures, has no easy solution, and is one of the reasons why the development of AI-based cybersecurity tools is so closely watched by governments (and militaries) around the world.

Automated Response

This final capability – that AI systems will be able to actively counter threats by taking measures against attackers – is dependent on AI-driven cloud security systems being empowered to directly affect the systems they are deployed on.

This is where the proponents of AI for cloud security somewhat overstep the mark. It is common, in the industry press at least, to hear claims that AI systems are able to "intelligently" respond to attacks in progress. This is an overstatement in a number of ways.

The first is that, in the majority of cases, such systems merely make automated suggestions to administrators, who have the final say in what is actually done in relation to an attack. Typically, these automated recommendations are not even informed by ML or AI technologies, but are instead the outcome of fairly simple, explicitly coded rules.

Secondly, even where cloud security systems are able to directly work with the systems they are deployed on, their capabilities when it comes to responding to attacks are generally extremely limited. They might be able to block a particular user if they try to access a number of restricted systems, for instance, but this is generally the limit of their power.
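To illustrate both points, here is a minimal sketch of the kind of explicitly coded rule that often sits behind an "AI-driven" response feature. The threshold, function, and action names are all invented for illustration:

```python
# Sketch: a hand-written rule of the sort behind many "automated responses".
# It counts failures and recommends - rather than enforces - a block.
from collections import defaultdict

RESTRICTED_ATTEMPT_LIMIT = 3  # illustrative threshold
denied_attempts = defaultdict(int)

def on_access_denied(user: str, resource: str) -> None:
    """Count denied attempts; suggest a block once the limit is reached."""
    denied_attempts[user] += 1
    if denied_attempts[user] >= RESTRICTED_ATTEMPT_LIMIT:
        # Most systems stop here: a recommendation, with a human in the loop.
        print(f"RECOMMEND: suspend {user} (repeated attempts on {resource})")

for _ in range(3):
    on_access_denied("jdoe", "/finance/payroll")
```

No learning is happening here at all – and yet this is roughly the ceiling of many systems’ "response" capabilities.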

There is, of course, a good reason for this. The deployment of automated systems that are able to directly affect the operation of corporate computer networks – no matter how "clever" these automated systems are – is a huge risk. A "rogue" AI could easily block dozens (or hundreds) of users and systems, and cost organizations millions of dollars in downtime and network recovery.

This is probably why there is, at present, no such thing as an AI cloud security system, at least not as commonly understood.

The Reality


Let me clarify, once more. While there are plenty of cloud security platforms that claim to implement AI, in reality most of these use a subsidiary (and much less powerful) technique: machine learning (ML for short). Now, ML is a powerful tool, but it is not quite the "intelligent" threat detection and response system that AI is claimed to be.

In addition, ML is actually a pretty old technology. When vendors claim that their cloud security systems work on "AI," what they generally mean is actually the kind of ML system that has underpinned SIEM, EDR, and XDR tools for a decade or more.

And before you accuse me of being a pedant, let me explain why this seemingly obscure distinction matters. ML systems of this type can be trained on the mind-blowingly large repositories of data on cyberattacks that we’ve built up over the years, and can get really good at spotting similar attacks. The problem is that, unlike truly "intelligent" systems, they are unable to spot novel threats.
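A toy example makes the weakness plain: a nearest-neighbor classifier trained on known attack traffic happily catches a near-copy of an old attack, but waves through a slow, low-volume attack that looks nothing like its training data. All the feature vectors here are hypothetical:

```python
# Sketch: a model trained on known attack patterns generalizes to similar
# attacks but can miss a genuinely novel one. All data is hypothetical.
from sklearn.neighbors import KNeighborsClassifier

# Each row: [requests_per_min, failed_logins, payload_entropy]
traffic = [
    [500, 50, 0.9], [450, 60, 0.8],  # known brute-force attacks
    [10, 0, 0.3], [15, 1, 0.4],      # benign sessions
]
labels = [1, 1, 0, 0]  # 1 = attack, 0 = benign

model = KNeighborsClassifier(n_neighbors=1).fit(traffic, labels)

print(model.predict([[480, 55, 0.85]]))  # near-copy of a known attack -> [1]
print(model.predict([[12, 0, 0.35]]))    # novel "low and slow" attack -> [0]
```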

As a result, any white hat group will be able to defeat them without too much effort, especially if they deploy the kind of social engineering (e.g., advanced phishing) that remains effective against the weak point in many organizations: the poor humans. This, unfortunately, is the reality of many "AI" systems today – they are not as intelligent as they are thought to be, and in any case rely on humans to spot truly creative attacks.

The Compromise

What, then, is to be done?

Well, on a technical level the path forward is pretty clear. While ML systems can help cybersecurity engineers protect their networks against known threats, they should not be relied on. At best, they provide a rather crude, but impressively wide, form of defense. For this reason, cybersecurity technicians might do better to focus on standardizing their clouds for security and on improving cloud security through business integration, so that they can get the most out of the AI tools they already have.

On a more philosophical level, Zach Winn over at MIT News has also thought about the future of AI-human collaboration when it comes to cybersecurity. As he puts it, one of the problems with ML is that "such systems can come with their own problems, namely a never-ending stream of false positives that can make them more of a time suck than a time saver for security analysts."
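Some back-of-the-envelope arithmetic shows how quickly that stream becomes a flood. All of the numbers below are illustrative assumptions, not vendor benchmarks:

```python
# Sketch: the triage cost of false positives. Every number here is an
# illustrative assumption, not a measured benchmark.
events_per_day = 5_000_000   # events the ML system scores daily
false_positive_rate = 0.001  # 0.1% - optimistic for many deployments
minutes_per_alert = 5        # analyst time to triage a single alert

false_alerts = events_per_day * false_positive_rate
analyst_hours = false_alerts * minutes_per_alert / 60

print(f"{false_alerts:,.0f} false alerts/day ~ {analyst_hours:,.0f} analyst-hours/day")
# 5,000 false alerts/day ~ 417 analyst-hours/day - more than a 50-analyst team
```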

This means, as a recent paper (also from MIT) points out, that in many organizations the time cost of implementing AI cloud security infrastructure simply isn’t worth it, unless they are in a sector where they are typically dealing with gigabytes of incoming data every day. In other words, we should think of AI-based cloud security systems as having a fairly niche application – useful for companies and other organizations that see a number of similar threats, but of limited utility in other cases.

The Bottom Line

You’ll notice, of course, that this is a slightly conservative position, so let me be clear. I’m not arguing that AIs do not have a part to play in improving cloud security. They do, and they also have great potential. The problem comes when we start to rely on them, or believe that they are a panacea that can spot (or even stop) all threats. At the moment, the range of situations where AI cloud security software is useful is actually pretty limited.

In other words, let’s stop sharing the hype, and focus on making cloud security platforms that allow humans to provide truly intelligent threat responses, rather than relying on the machines to do it for us.

About the Author

Bernard Brode is a product researcher at Microscopic Machines and remains eternally curious about where the intersection of AI, cybersecurity, and nanotechnology will eventually take us.

 

 
