Artificial Intelligence is an incredibly powerful tool for detecting and ultimately preventing cybercrime. But as a security leader, it’s easy to get lost in overly technical explanations of Artificial Intelligence that don’t capture or explain its true potential for threat intelligence.

In this article, we will explore what the term “Artificial Intelligence” means as a first step in learning to differentiate between the real potential of this deep technology and the false promises of marketing hype.

First Things First

The term was coined in 1955 by John McCarthy, professor emeritus of computer science at Stanford University, “on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” That original definition holds the key to understanding the core of artificial intelligence, or AI. AI refers to machine processes that appear to mimic human cognition. The voice control on your smartphone, for example, mimics the human process of translating sound waves into meaningful, actionable information.

So why did Professor McCarthy choose the words “artificial intelligence” to describe this phenomenon? It’s helpful to consider both halves of the phrase.

We understand “artificial” to refer to a replica of something naturally occurring: artificial sweetener instead of cane sugar, for example. More difficult to define is “intelligence.” There are as many conceptions of intelligence as there are fields of study: psychology, computer science, and mathematics all conceptualize human intelligence differently. Consider the varieties we recognize in everyday life: emotional, logical, interpersonal, linguistic. You’ve probably worked with someone who exhibited advanced logical reasoning skills but low emotional intelligence. A Nobel Prize-winning physicist may struggle to offer a compelling literary analysis of the poetry of John Donne. So who’s smarter – Einstein or Shakespeare? It’s a question without an objective answer. Human intelligence isn’t a single thing, and neither is artificial intelligence.

More Than the Sum of its Parts

The field of AI has been evolving for nearly seventy years, and our understanding of AI’s capabilities evolves along with it.

We’ve already reached the point where AI systems surpass human performance at games of strategy like chess and Go. These games require complex cognitive processes that AI is particularly well suited for: there is an objective goal that computers can work toward much faster than the human brain can, using pre-programmed algorithms to achieve desired outcomes. And while we’re not yet at the point where autonomous vehicles dominate roadways (90% accuracy sounds pretty good until you ask a parent to let a self-driving car pick up their child from soccer practice), AI is poised to surpass human cognition in increasingly sophisticated tasks.

However, in the technology industry, AI has become a marketing buzzword useful for attracting investment and little else. To get a grasp on what is and isn’t true AI, it’s helpful to examine the types of AI and their characteristics.

Three Types of AI

AI can be divided into three categories: narrow, general, and super. Narrow artificial intelligence excels at performing a single task, such as winning a game of chess in the example above. It handles routine jobs with ease, but only in a limited context, and only in a single application. For example, the AI that places a phone call for you can’t translate your son’s Spanish homework. Hence the “narrow” in narrow AI: each system operates within a tightly bounded scope. Other examples of narrow AI include weather forecasts, speech and image recognition, and the “Recommended For You” section of your Amazon homepage.

As sophisticated as many AI applications are, none have advanced beyond the realm of narrow AI. The field is moving closer to general AI, but hasn’t achieved it yet. General AI refers to artificial intelligence that interprets and reacts to its environment just as a human would. This is the realm of science fiction fantasy, where machines can think abstractly, innovate, and plan. As Ben Dickson writes as part of his “Demystifying AI” series, “General AI has always been elusive. We’ve been saying for decades that it’s just around the corner.” It’s anyone’s guess when—or if—the promises of general AI will become reality.

Artificial Super Intelligence refers to the potential future state when artificial intelligence surpasses human intelligence in every application. Think The Terminator, pop culture’s favorite example of a superhuman machine. At this stage, superintelligence is purely hypothetical, although its implications already consume the minds of philosophers and science ethicists – as well as pulp fiction enthusiasts!

AI in Cybersecurity Applications

Although today’s systems remain limited to narrow AI, the technology is fundamentally transforming the way the world does business. A report from The Economist found that 75% of over 200 business leaders from across the globe plan to implement AI in their businesses in the next three years. In an article for Gartner, Kasey Panetta writes, “Any industry with very large amounts of data — so much that humans can’t possibly analyze or understand it on their own — can utilize AI.” Think of the vast possibilities in healthcare, retail, transportation, education. The list is almost endless. And the use cases in the security industry are particularly compelling.

One of the biggest challenges facing modern security teams is the overwhelming amount of information they face every day. The average enterprise is presented with 10,000 or more security alerts a month, and it takes a security analyst ten to fifteen minutes, on average, to properly review a single alert. Triaging alerts is an excellent example of a narrow AI use case: a human can perform the task, but a machine can do it far faster and more consistently.
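To make the idea concrete, here is a minimal sketch of automated alert triage in Python. The alert fields, weights, and threshold are all hypothetical, chosen only to illustrate the pattern; a production system would use far richer signals and, typically, a trained model rather than hand-written rules.

```python
# Illustrative sketch of rule-based alert triage.
# All fields, weights, and thresholds below are hypothetical examples,
# not taken from any real product.
from dataclasses import dataclass

@dataclass
class Alert:
    severity: int           # 1 (low) to 5 (critical), as assigned by the tool
    asset_criticality: int  # 1 to 5: how important the affected system is
    seen_before: bool       # has this exact signature fired recently?

def triage_score(alert: Alert) -> float:
    """Combine simple signals into a priority score between 0 and 1."""
    score = (alert.severity * alert.asset_criticality) / 25.0
    if alert.seen_before:   # repeats of a known signature rank lower
        score *= 0.5
    return score

def triage(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Return only the alerts worth a human analyst's time."""
    return [a for a in alerts if triage_score(a) >= threshold]
```

The point is not the particular scoring rule but the division of labor: the machine filters thousands of routine alerts in milliseconds, and analysts spend their ten to fifteen minutes only on the ones that clear the bar.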

Looking Ahead

Now that we’ve built a solid understanding of the fundamentals of AI, it’s time to explore the nuances of artificial intelligence and machine learning. Together, they’re shaping the future of cybercrime prevention.