Transcript
Shuman Ghosemajumder: I have been interested in AI for as long as I can remember. This is the very first book that I read on AI. My dad bought it for me from Radio Shack when I was 10 years old. It had no mention of phrases like machine learning or data, and certainly no mention of concepts like transformer models. The industry as a whole has been thinking about AI for a very long time, and there's been so much transformation over the course of the last several decades, but in particular in just the last few years. You've probably noticed that everything is AI now. I've had my teams walk through the show floor of some of the big security conferences and take pictures of the booths advertising what these companies do now, and here's a collage of what it looks like. There are so many different ways to say that your product is AI-enabled or AI-boosted, or AI-infused, which is one of my favorites. I bought a toothbrush recently that has artificial intelligence in it. I don't know exactly where the artificial intelligence is, but apparently that makes it a better toothbrush.
One of the best illustrations that I've seen of the ubiquity of AI in society today and throughout the industry is a cartoon that basically summarizes what is going on: everyone generates content using AI, and then says, that's way too much content, why don't we summarize it using AI as well? That'll save us a lot of time. You've probably seen some of these incredible statistics, like the fact that spending on AI infrastructure construction is now even larger than construction of buildings that humans are going to occupy. There are some holes in this story. For example, we may be seeing patterns where AI use drops precipitously when students aren't in school, suggesting what its primary application really is.
Where Is AI Leading Us?
You may be wondering, as a result of all of this, where exactly is AI leading us? I have good news. There is a definitive answer to where AI is leading us. It was produced a few months ago by the Financial Times. They put some data together, and what they concluded was that AI is either going to be our salvation, or it's going to destroy all of civilization, or something in between. Definitely one of those three things and nothing else. AI is a great and terrible term. I think that part of the problem associated with it is that it sounds like it's doing so much. Of course, we can blame Alan Turing for making us so excited about this area in the first place when he published his seminal paper, Computing Machinery and Intelligence. What an amazing title for an academic paper.
That's like going back hundreds of years and Isaac Newton publishing a paper called Math. Having something that broad is only possible when the field doesn't exist yet and you're inventing it. That's exactly what Alan Turing did. He proposed the idea that is now called the Turing Test. He proposed it in the context of what he called the imitation game: would it be possible to create a computer system so powerful that it could successfully imitate a human being to another human observer? The Economist, a number of years later, was very skeptical about this.
One of my favorite phrases from their dismissal of Turing's idea was that there is no practical reason to create machine intelligences indistinguishable from human ones. Basically, there's no reason we need the entire field of AI. My favorite is the second part of the quote: that people are in plentiful supply, and should a shortage arise, there are proven and popular methods for creating more of them. Don't be too hard on The Economist. As Yogi Berra said, making predictions is hard, especially about the future.
There are many confusing terms associated with AI, and I think the result is that people generally conflate it with AGI. You look at all of these terms, like machine learning and deep learning and artificial intelligence and artificial general intelligence, and I would say that most people's understanding of AI, and their feelings about it, come not from any technical source but from science fiction and from the stories they have consumed over the course of their lives from society as a whole. That's extremely confusing for folks, and it results in whatever technology seems sufficiently advanced today, as Arthur C. Clarke would say, fooling us into thinking that it is the magic of AGI.
The one thing that we have to realize is that none of us are quite as smart as we think we might be in every single field. George Carlin, I think, really illustrated this best. The problem is that we're able to see through hallucinations and errors when AI is generating content in our own field. As soon as we ask AI a question outside our area of expertise, all of a sudden it's much harder for us to see those hallucinations. There's a principle that describes this, called the Gell-Mann Amnesia effect. It's named after Nobel laureate Murray Gell-Mann, who would read an article in the newspaper about physics and rant and rave about how terrible the article was and how many errors it contained. Then he would turn the page, read an article about finance or about politics, and assume that that reporter had done their homework and everything in it was accurate. This is a cognitive bias that we all fall into.
Generative AI is Great at Fooling Us
AI today, generative AI in particular, is so great at fooling us. When we first saw generative adversarial networks come into view, we saw fairly harmless applications, like sticking Nicolas Cage's face in places that it shouldn't be. It didn't seem that realistic or that convincing, so people weren't that alarmed by it. Then we started to see a little more sophistication arise. Let's say you take someone who is a really good Tom Cruise impersonator, like Miles Fisher, and add generative AI on top of that. Because he can nail Tom Cruise's voice and mannerisms, we saw just a few years ago very realistic-looking Tom Cruise deepfakes that really started to catch everyone's imagination. Just a couple of months ago, we saw where this has led us: Sora being launched by OpenAI, allowing anyone to create highly realistic videos.
For example, Mark Cuban, if you were at QCon, might say something like this. "Hey QCon AI conference, this video is totally fake. Don't believe anything on the internet anymore". Mark Cuban gave all Sora users the ability to use his image and his voice to create whatever videos they want. You can now basically puppeteer Mark Cuban to say whatever you'd like. Users of Sora immediately jumped on this, and they started to use all kinds of copyrighted characters, which there were no guardrails against at the time, to create videos that many of the copyright holders had a big problem with. The copyright holders quickly responded. Recently, Disney announced a billion-dollar deal with OpenAI while simultaneously suing Google for, effectively, not having a deal with them. This leads a lot of people to ask the question: how important is copyrighted content to generative AI as a whole? I have an illustration for you.
If we go to a generative AI model that is reliant on copyrighted content and compare it to a model that isn't using copyrighted content, what does it look like? For example, if we go to Midjourney, which has very few guardrails, especially historically, against the use of copyrighted content, and we give it the prompt, "Chewbacca Reading", we get something that looks highly realistic as a representation of the character Chewbacca from Star Wars sitting there reading. Now, if we go to Adobe Firefly, which was specifically trained without copyrighted content, what we get can at best be charitably described as homemade Chewbacca. This is essentially the tradeoff with all of our generative AI models: with copyrighted content in there, we can get something like the image on the left; without it, we're going to get much less satisfying results.
In just the last few months, there was an uproar over the idea of an AI-generated actress, Tilly Norwood. We had actors and directors and filmmakers from all over the world decrying the idea of an automaton that didn't have any of the humanity associated with human actors, representing the same types of emotions and the same types of acting on screen.
The idea was, how are you going to have the same level of creativity, the same imaginative ideas that we see in film and TV, without the humans themselves? To that, I would say, here's another way of looking at it. Here is the level of creativity and imagination that humans have actually produced for many years on the Hallmark Channel. When you start to see the pattern of how humanity has been producing a lot of content, you start to realize that there are a lot of things that AI models can extrapolate and interpolate from in order to really threaten jobs being performed by humans today. Maybe we're not talking about threatening somebody like a Tom Hanks or a Julia Roberts, but there are lots of working actors and directors and screenwriters producing a certain level of work that AI can run with to some extent.
This is one of the reasons that, in just the last couple of days, Merriam-Webster declared slop to be the word of the year. I take one exception to the definition they provided of slop, which says that it's low-quality content produced by AI. The problem with saying it's low quality is that it sounds like you'll be able to identify it easily, like it's not competing with human content. The reality is there is a lot of AI-generated content already out there that people can't distinguish from human-generated content.
For example, you've probably never seen a gorilla wrestling with a python before. I certainly hadn't. When this came up on YouTube Shorts, I was immediately amazed by this, and I was thinking, when did this happen? Where did this happen? It turns out it never happened at all. There's an entire YouTube Shorts channel dedicated to pythons and gorillas and other primates fighting, and because it's so amazing to look at, it has millions of views.
In fact, there are thousands of channels producing AI-generated content on YouTube, on TikTok, and on many other platforms. We may think of AI slop as something we're going to have to worry about in a few years' time, something we should educate ourselves to identify. The reality is we're consuming it already. In tests that we've done, I'd say about 20% to 30% of the content on the default feed on YouTube Shorts, as well as on TikTok, is already AI-generated.
One of the amazing things happening at the same time is that a lot of real content from movies and TV shows, which had nothing to do with AI in its production, is actually going through filters that make it look more AI-generated. That, in turn, makes it harder to tell the difference between real content and AI-generated content in the future. When you go to YouTube or other websites and view ads, many of those ads are now AI-generated as well. On YouTube in particular, we see so many AI-generated ads representing celebrities like Oprah and Ben Carson and many others that Oprah actually had to put out a statement saying, this is not me, I'm not endorsing these random pharmaceutical products or these random home remedies. But they look and sound exactly like Oprah. Without the context to understand that this is an AI-generated video, the vast majority of people simply consume that content and don't realize what's going on.
If you Google, Tiananmen Square Tank Man selfie, you literally get hundreds of copies of this image that went viral, which of course is AI-generated. There was no such selfie. No selfie cameras were available at the time to take a picture like this, and yet future generations Googling this will probably assume it was a real photo, because it's not only in Google Search results, it's at the top of Google Search results.
One exercise that I did with a group of university presidents was to go to Nature's website, Nature being one of the most prestigious academic journals in the world, and do a search for artificial intelligence, on the theory that anyone who is engaged with AI in any form is probably an AI enthusiast and using other AI tools. I pulled up a paper about cervical cancer and ran it through GPTZero, and what we discovered was that the abstract and first paragraph came back as 100% AI-generated.
That leads to the question: how much AI-generated content is in peer-reviewed journals already without us realizing it, and what exactly does it mean? It doesn't necessarily mean that this content is being generated wholesale by AI. It could simply mean that it was cleaned up grammatically by AI, or that AI was used to generate a first draft that was then fact-checked by humans. The problem is we don't know; what we can see is the widespread use of AI tools. This brings me to one of my favorite quotes from Winston Churchill: that a lie gets halfway around the world before the truth has a chance to get its pants on.
My favorite thing about this quote is that Winston Churchill never actually said it. It's a great illustration of a finding from MIT professor Sinan Aral, whose team did a study showing that lies spread six times as fast on social media as the truth does. You might ask yourself, why is that the case? It actually makes sense: if you're the purveyor of a lie, you're trying to propagate it as widely and as quickly as possible. If you consume that lie and spot that it's false information, it can often outrage you, and then you share it with a bunch of your friends saying, look at this lie. Everyone on both sides is engaged in making it travel as far and as wide as possible. This is one of the reasons that social media, to a large extent, has turned into a giant rage circus.
Disinformation Automation
I created a framework that describes what happens with this type of misinformation and fraud, which goes through these stages for any type of content. In the first stage, it requires a great deal of effort, talent, and resources to create a convincing fake. In the case of Nicolas Cage's face going into different movie clips, that was an example of automating something that previously would have required Hollywood special effects. It wasn't something that anyone could do; at the time, it required many days of computing power to do effectively. All of that has now changed. You can not only go to a system like Sora and generate a video in maybe 60 to 120 seconds on average, but you can go to Grok, give it a single frame of a video or a still image, and it will generate a video from that image, in many cases in under 60 seconds.
With video and audio, that's bringing us to stage three, where one individual or entity can produce vast amounts of content. We're already there when it comes to text. We've seen it in the context of low-quality websites trying to attract viewers and monetize through AdSense. That's something we saw 20 years ago at Google, and it's only become more sophisticated with the introduction of generative AI. When we think about how generative AI is used in this context, we can also realize that the misinformation it presents can be extremely subtle. When you go to ChatGPT, even the most recent models, and ask it fairly simple counting questions, like, how many j's are there in the last name Ghosemajumder? Which I assume is something that no one has ever asked before on the internet. It comes back with a very confident answer and shows its work. You can see here that it's not even pointing at letters; it's pointing at the spaces between the letters while it's miscounting.
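To make the contrast with real computation concrete, here's a minimal Python sketch. The comment about subword tokens reflects the generally understood cause of these miscounts, not anything shown in this specific exchange:

```python
# Counting letters is a deterministic string operation, not a
# language-modeling problem; one line of Python settles it.
name = "Ghosemajumder"
print(name.lower().count("j"))  # -> 1

# A chatbot never sees the string character by character: it sees
# subword tokens (any token boundaries here would be illustrative,
# not actual), which is one reason confident miscounts happen.
```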
If there's one thing you take away from this talk, it's this: please stop using ChatGPT, or any generative AI, as a calculator. It may seem like it can do amazing things from a math perspective; it can score really high on International Math Olympiad questions. In reality, it is just simulating those behaviors, and it's not doing any real computing. If you ask it to create a highly detailed diagram, that diagram at first glance can look extremely impressive, but then you start to notice a few things it has misunderstood.
Since we're in the New York Academy of Medicine, let's take an example that's a bit more serious. Would you be interested in using an AI system that was performing vibe surgery? There are real consequences to getting certain details wrong. When we look at where this information comes from, we see that AI is getting its facts from some unexpected sources. Wikipedia, of course, is full of a lot of high-quality information, but it's also full of a lot of misinformation. There are hoaxes that lived on Wikipedia for years before they were discovered.
Of course, Reddit is full of misinformation, and yet it appears to be one of the top sources of content that generative AI models are trained on. Here's another subtle form of misinformation we found. This is an article that I wrote in my column for Inc. Magazine. Within an hour of it being published, we saw this article appear on an Argentinian website in Spanish. It basically took the original article, translated it into Spanish, and added a few more keywords to the prompt in order to generate what looks like coverage of that article. This was done so convincingly that the publication is actually a Google News source. Anyone reading the article in Spanish would have no idea that it's a form of AI slop and not an original article.
One of the things that fascinated me recently: a venture capitalist I've known for a while reached out to us and said, we'd love to have a conversation about Reken. When I went into the conversation, they said, we feel like we have a pretty good idea of what you do. I said, really? Because we're in stealth and we haven't shared what we do. How exactly do you have this idea? The answer was that they had gone to a chatbot and asked, what does Reken do? And it had gotten an answer. I did the same thing, and the chatbot came back with a bunch of things that are kind of in our space, and I was really curious: where did it get this not-quite-correct information?
Of course, one of the innovations we've seen in the last couple of years is the introduction of chatbots like Perplexity that cite sources for the content they're generating. That produces a great deal of trust, because how on earth could the content be wrong if it has a source on the web? Indeed, in this case, there was a source. I was curious: what was the source for this misinformation about our company? It turns out the source was an AI-generated website. This is an example of model collapse in action, in a way that people wouldn't realize. If you're not familiar with the subject matter, and how could you be, since you're asking about a company you have no information about, then all of this is going to look highly credible, from the original website being cited to the AI-generated summary. That's how the VC got the wrong idea, and that's how a lot of people can get the wrong idea about a lot of things.
GenAI Is the Ultimate Cybercriminal Tool
Where this leads us, from a risk perspective and a security perspective, is that generative AI is really the ultimate cybercriminal tool. A few years ago, we saw generative AI being used to clone voices in a fairly simplistic form. This has only gotten more sophisticated. In the past year, we had the Arup case, where an employee of this engineering firm in Hong Kong got onto a Zoom call, had a conversation with several of their colleagues, and then agreed to transfer $25 million at the request of their CFO, only to discover afterwards that it wasn't their CFO they were talking to, or any of their colleagues. It was a set of real-time deepfake Zoom representations of them.
This technology is only going to get more advanced. We started hearing about these kinds of examples a few years ago. The reason this is such a threat is that AI at its core is really just automation. It's the ultimate form of automation, and cybercriminals are constantly automating. Let me give an illustration of how cybercriminals have been automating for the last 10 years in a way that people don't have a lot of visibility into. When people think of cybercriminals, they often still think of someone in a hoodie in their parents' basement trying to figure out a way into a server, and that's not what cybercrime has looked like for two decades. Instead, it's highly commoditized and federated, and there are organizations of cybercriminals that work with each other to create high levels of automation.
Every single time you see a big data breach announced, you probably think, I need to go to the website where the data breach happened and change my password. That's only the beginning of what happens with those data breaches. Those usernames and passwords then become part of a corpus that cybercriminals use to breach completely unrelated websites and accounts, through something called a credential stuffing attack. Here's a piece of software we discovered on the dark web called Sentry MBA. It looks like a standard Windows application, but it was actually built by a specialized group of cybercriminals for other cybercriminals, who plug it into botnets they commandeer in order to run large-scale credential stuffing attacks. Here's what that looks like on a real-life website.
If you're familiar with what web traffic generally looks like, especially on a retail website, you're used to seeing a diurnal periodicity: users access the web more when they're awake than when they're asleep. This is what a typical week might look like on any given website, but this is actual data from one of the largest retailers in the world. When we first looked at their website, we did not see this pattern. Instead, we saw a pattern that looked like this, and that was a bit weird: we weren't seeing organic users behaving the way organic users typically do. When we started to distinguish between the automated traffic and the human traffic, we saw that the human traffic was there, but the vast majority of the traffic was automated.
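To give a rough feel for how that separation can work, here's a minimal sketch with entirely invented numbers. It leans on the shape of the curve: human traffic dips overnight while automated traffic runs flat, so the daily trough gives a crude estimate of the automated floor. Real bot defense relies on far richer signals (device fingerprints, behavioral telemetry); this only shows the intuition.

```python
import math

# Synthetic hourly request counts for one day: a diurnal human curve
# plus a constant automated floor (all numbers are invented).
human = [5000 * (1 + math.sin((h - 9) * math.pi / 12)) for h in range(24)]
bots = [30000] * 24
observed = [h + b for h, b in zip(human, bots)]

# Crude heuristic: organic users dip toward zero overnight, bots don't,
# so the daily trough approximates the constant automated load.
bot_floor = min(observed)
automated_share = bot_floor * 24 / sum(observed)
print(f"Estimated bot floor: ~{bot_floor:,.0f} requests/hour")
print(f"Estimated automated share: ~{automated_share:.0%}")  # ~86% here
```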
On a 24/7 basis, they were getting hit by cybercriminals automatically taking stolen usernames and passwords from other websites and testing them against the login form to see which ones were valid. Because users constantly reuse their passwords, the attackers typically saw about a 1% to 2% success rate, which allowed them to take over thousands of accounts en masse, and this is something we see in basically every single industry.
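It's worth spelling out the arithmetic behind that, because it shows why the attack pays off even at low hit rates. The corpus size below is invented for illustration; the 1% to 2% range is the one just cited.

```python
# Back-of-the-envelope credential stuffing economics.
stolen_credentials = 1_000_000  # hypothetical breach corpus
for match_rate in (0.01, 0.02):
    takeovers = int(stolen_credentials * match_rate)
    print(f"{match_rate:.0%} match rate -> ~{takeovers:,} accounts taken over")
```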
What's going on behind the scenes is a very high level of automation that the vast majority of society had no idea about. Generative AI now allows cybercriminals to automate even further. You might be wondering, aren't there protections available against bot-related activity? A number of approaches have been tried in the past, the most famous being CAPTCHA, which you may not realize is actually an acronym: Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs have been trying to distinguish humans from bots for many years.
A few years ago, Google did a study because they were wondering: how have CAPTCHAs fared in terms of their efficacy? Given all the time spent solving CAPTCHAs, that's a really important question, because we want to know that this is an effective mechanism that's worthwhile to engage with. What Google found was that humans had dropped to a 33% solve rate on the typical synthetic distorted-text CAPTCHAs we see all throughout the internet, while AI's ability had skyrocketed: machine-learning-based OCR had a 99.8% solve rate.
This gap has only widened in the years since. There are cybercriminal services that specialize in helping other cybercriminals solve CAPTCHAs. Some of these services offer group discounts and all kinds of customer support, because they want to be good cybercriminal businesses for the other cybercriminals. If you take CAPTCHAs today and run them through generative AI systems, they pose no barrier whatsoever. If you're using CAPTCHA on your website today, you're basically doing the exact opposite of what you should be doing: you're introducing friction for real users, and no barrier at all for cybercriminals.
What exactly does this mean in terms of automation for cybercriminals? You might have heard about the IRS phone scams that affected more than 400,000 people in the United States over the span of just a few years. In the vast majority of those cases, in order to steal any money, the victims had to have a conversation with someone in what was effectively the cybercriminals' call center. That's expensive, and it's risky for the cybercriminals to operate, but they had to do it because it was the only way to convince people to transfer their money. You may have seen a seminal study from Stanford in just the last year which analyzed a number of different applications of generative AI and found that the most promising application was in customer support.
Across the board, there were customer support productivity improvements from using generative AI tools, and those improvements were greatest for the least experienced customer support folks. For cybercriminals, this is just an amazing opportunity, because while legitimate enterprises have to deal with hallucinations and errors and figure out what to do about the shortcomings of generative AI in customer support, for cybercriminals those hallucinations are features instead of bugs: they're trying to tell a gullible victim a believable story. Generative AI, as it works today, is usable by cybercriminals out of the box to automate what they previously could not, which was essentially their last-mile problem: how do we generate realistic audio and realistic video in order to con people more effectively?
As a result of this, folks like Geoffrey Hinton decided that he needed to leave Google in order to spread the word about how dangerous these AI-enabled scams are. Warren Buffett, when asked at his annual meeting about the greatest possible growth industry, commented that AI-enabled scams are probably going to be the greatest growth industry he's ever seen, but unfortunately, he's not going to be able to invest in it. Another trend we've seen is the democratization of AI. There's been this idea for a few years now that AI is extremely expensive to create: that you have to have billions of dollars, or if you believe some of the hyperscalers, hundreds of billions or even trillions of dollars, in order to create more effective AI.
As soon as DeepSeek launched, we saw that contradicted, and we started to see that you could have very inexpensive systems, including models trained by cybercriminals, that can be extremely effective, in many cases even more effective than models produced in more expensive ways. The key thing to realize here is that you don't even need the greatest AI in the world to fool people effectively. There are all kinds of examples of people being fooled at scale without much sophistication in the technology that enables it.
What Do We Do?
What do we do about all of this? There's lots of advice out there, lots of fortune-cookie wisdom. There are ideas like, you should create a secret password with your family. That's actually something I think is a good exercise. It's good to have the conversation about what you would do if you received a phone call that sounded exactly like a friend or a family member. Having a secret password is a way of imagining yourself in that situation and trying to navigate it if it ever does arise. The reality is, if you're in that situation, the fraudster knows exactly what buttons to push. If they simulate your loved one's voice saying, I can't remember the password, because they're in such a stressful situation, that's going to convince a lot of people. Fraudsters don't need to be successful 100% of the time.
As we were discussing with credential stuffing attacks, even a 2% success rate is enough for cybercriminals to build multibillion-dollar businesses. There are a lot of drawbacks associated with some of the standard practices out there. One of the most common is phishing training, or security training generally. Again, it's something you should do and that everyone does, but a lot of academic research, along with anecdotal and quantitative evidence we've seen, shows that phishing training doesn't actually prevent people from clicking on phishing links or engaging with social engineering attempts, especially if they're realistic enough, and especially if they're contextually targeted using generative AI that can customize the message sent to everyone in the organization the attacker is trying to socially engineer.
There are a variety of techniques that fall into this bucket of not working all that well. We talked about CAPTCHA. We talked about security training. Deepfake detection is another area that fascinates a lot of people, but the problem with deepfake detection is twofold. Again, it's helpful, but it's not a solution, for a couple of reasons. One, it's really difficult to definitively say that something was generated by a malicious deepfake model versus being modified by AI for other purposes. Every time Apple launches a new phone, they talk about the multiple layers of AI that every image and every video gets processed through, and there are very few images and videos these days that don't go through some form of AI processing before they go online.
Google, when it launched Nano Banana, was effectively encouraging everyone to modify their photos using generative AI. We're seeing a great deal of generative AI content that isn't created for malicious purposes. Of course, the other problem with deepfake detection is that you can't detect all of the deepfake models cybercriminals might be using. If you analyze a particular image or video and it comes back with an assessment that says there's a 50% chance that 40% of this content is AI-generated, how exactly do you operationalize that? What techniques work more effectively? Multi-factor authentication is very effective. Behavioral know-your-customer operations, where you study the behavior of an account, a device, or an individual in order to flag anomalies or patterns, can also be highly effective.
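As a sketch of what that behavioral flagging can look like, here's a deliberately simplified example; every name, field, and policy choice in it is hypothetical, and a production system would score many signals together (velocity, time of day, typing cadence) rather than acting on any single one.

```python
from collections import defaultdict

# Minimal behavioral check: flag logins from a (device, country)
# combination the account has never used before. Everything here
# is hypothetical and deliberately simplified.
seen_profiles = defaultdict(set)  # account_id -> {(device_id, country)}

def score_login(account_id: str, device_id: str, country: str) -> str:
    profile = (device_id, country)
    if profile in seen_profiles[account_id]:
        return "allow"
    # Unseen combination: step up rather than block outright,
    # e.g., require the multi-factor authentication mentioned above.
    seen_profiles[account_id].add(profile)
    return "step_up_mfa"

seen_profiles["alice"].add(("laptop-1", "US"))
print(score_login("alice", "laptop-1", "US"))  # allow
print(score_login("alice", "phone-9", "BR"))   # step_up_mfa
```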
One of the interesting trends I've observed in the security industry over the last 10 years is the rise of zero-trust security. The basic idea is that you shouldn't have an authentication gate where, once the user passes it, they're given full access and completely trusted to perform whatever actions they want. You could end up in a scenario like the one I described before, where a user's password is stolen and that authentication step has been commandeered by a cybercriminal, so you have to watch everything that happens post-authentication to discover whether the account is being abused in some way you didn't foresee. This was an epiphany for the security industry, and they called it zero trust.
One of the reasons this was so interesting to me is that this has always been the mindset of the fraud industry. From a fraud perspective, you never trust any account or any device. You're constantly looking at the data associated with the behavior of those entities, and the good news is that that message has now spread throughout the entire industry. In fact, we're seeing a lot of collaboration between fraud teams and InfoSec teams to take all the data available to a web application, a mobile application, or an enterprise and figure out how to use it to spot those patterns and anomalies and identify where abuse may be occurring. A lot of these integrated efforts are called cyber fusion centers, and I think it's a fantastic trend to see.
There are three areas overall in organizations, from a cybersecurity perspective, that are impacted by AI in different ways. When we think about infrastructure security, the primary impact people worry about is how AI enables cybercriminals to discover and exploit vulnerabilities at scale. AI is fantastic at helping us solve completeness problems. Even though it's not doing any real thinking, it is capable of taking all of the data out there, including from Reddit and Wikipedia and so on, and telling us the things we might not be thinking of in a particular context. If a cybercriminal uses AI to analyze a web application for vulnerabilities, it can give them a much more complete list of things to probe than they would have without such a tool. That's the risk from an infrastructure perspective.
From a business model perspective and a trust and safety perspective, what we see is account abuse: automating user actions in ways that websites and mobile apps might not have anticipated. We're now seeing this tension play out with the number of AI-enabled browsers that have launched from OpenAI and Perplexity and others. Gartner came out with a statement recently saying that you, as an organization, should block all AI-enabled browsers.
One of the reasons for doing that is the risk associated with automation writ large being possible on your website, and what cybercriminals can do with that. The third area, of course, is your channels of communication. Regardless of how much you lock down your business model and your infrastructure, you always have to keep those doors open in order to communicate within your organization and with the external world. That introduces the opportunity for cybercriminals to use AI to socially engineer your employees, your customers, your executives, and folks all throughout your supply chain in ways you never would have conceived of in the past, because it simply wasn't possible at scale.
One of the analogies I use to describe the scale cybercriminals can operate at with automation technologies is this: we don't have good intuition for how it works, because we think in real-world terms. When we think about locking our houses, we often think, what can I do to make myself less of a target, and how can I secure my particular house? Cybercriminals on the internet aren't focusing on you in particular. There doesn't have to be anything special about you or your organization or your website. They can attack everyone simultaneously.
Imagine if a robber could break into every house in a community at the same time, or rob every bank in a city at the same time. That's what's possible with automation. Previously, there was a block preventing cybercriminals from scaling that last mile: without a human in the loop, they couldn't have someone speak to the victim on the phone realistically. Now generative AI enables that. We need to think differently about protecting those communication channels against these types of attacks.
What we ultimately need is good AI to fight that bad AI. We need scale on our side to deal with the scale coming from the cybercriminal side. There's good news in that regard. There's another study from MIT, from Professor Tom Malone, looking at all the different ways AI can help different work processes within organizations. What he found was that humans and AI combined outperform humans alone. Some people have called this co-intelligence; some have called it human augmentation. Regardless of what terminology ends up being the standard, I think we're going to see a great deal of products and services that combine humans and AI to make better decisions, and especially to help humans make better decisions.
In fact, I think this is one of the long-term trends we're going to see in how we use generative AI throughout our organizations and even in our personal lives. A lot of people look at AI, conflate it with AGI, like I mentioned before, and think that AI is going to do our thinking for us. I think we're going to rapidly realize that that's not the case. Generative AI can be a fantastic brainstorming partner, and a fantastic partner for bouncing ideas off of to refine our thinking. This is an early research finding that illustrates the direction I think we're moving towards, where humans and AI work together in a very conscious way, making intentional choices to bring AI into a conversation as opposed to simply outsourcing an entire work process, or our thinking, to it.
Conclusion (The Future is Already Here)
What I want to leave you with is this quote from William Gibson that I love that I think is illustrative of so many different technologies throughout society, but AI in particular. "The future is already here - it's just not evenly distributed". What this means for me in this context is that we already have examples of the most dangerous uses of AI that we could possibly conceive of. There are uses of AI that are even more dangerous than the things that I talked about in this presentation today.
The reason that society isn't completely freaking out about them is that they're not affecting everyone yet. We have the opportunity to identify where these risks start to emerge and, at the same time, look for uses of AI that are extremely beneficial, both in the security industry and throughout all of our operations, and then extrapolate from that to figure out: how does this scale? How do we use this throughout an organization? How do we use this throughout all of society? What I always tell my teams is that we need to monitor all of the advancements in any new technology so that we can identify something that could potentially help us. We don't want to fall in love with the technology. We don't want to use it for its own sake.
There are a lot of organizations, as you saw in that slide with AI-enabled and AI-infused security, that are simply saying, we've added AI to our product and that's why you should buy our toothbrush. In reality, you shouldn't use something just because it's AI. You should use a product because it makes your life better in some capacity. I think that's the trend we're going to see overall. The only way to improve your organization or your product quicker than your competitors is to recognize those opportunities before everyone else. That's really the challenge for all of us, and it's one of the cool things about being at a conference like QCon AI: you get to see what the cutting edge looks like today.