I’m by no means an AI expert. Sure, I’ve been following the topic with much curiosity ever since reading an article about thinking machines back in 1990. Also, having a degree in mathematics sometimes helps me understand certain technicalities behind product labels. Still, I’m neither an AI developer nor a data scientist – I’m just an industry analyst whose primary job is to understand what new technologies and services appear on the market and to explain them to people who know even less than I do…
When it comes to the recent media coverage of artificial intelligence and its current applications, it’s simply amazing how much of it is driven by hype. Yes, it would be unfair to ignore the recent achievements in the specialized computing needed to power many of those applications, the sheer amount of data now available for training machine learning models, and so on. But the way these achievements are often presented creates unrealistic expectations among the general public (“AI will soon find the cure for the coronavirus!”) as well as deep unease among many IT experts (“AI will soon drive us into unemployment”).
In the end, we have to spend considerable time dispelling numerous myths about artificial intelligence:
- No, AI is not at all a new technology: early developments even predate digital computers
- No, there is no universal AI algorithm or method to solve all our problems
- No, teaching AI to perform even a very cool trick does not automatically make it capable of doing any other trick imaginable
- And no, AI and machine learning are not the same thing!
The last one is perhaps my biggest pet peeve: so many people seem to believe that “Artificial Intelligence” is a single technology or academic field, developed by a closed group of “AI academics”, and that every potential application of AI, regardless of industry or purpose, is the daily job of these notorious “AI experts”. Some even go as far as talking about enterprise AI strategies and the need for every business to have a chief AI officer.
Well, if you’ll pardon a crude analogy, AI research is somewhat like oncology 😷. Just as there are actually over 100 types of cancer with different symptoms and methods of treatment, the broad field of Artificial Intelligence is far from uniform. There are multiple ways of classifying AI technologies, but I very much like the AI Knowledge Map developed by Francesco Corea. In his article, he identifies multiple AI paradigms characterized by profoundly different approaches to understanding what intelligence is. For example, the symbolic approach reduces it to symbol manipulation, while the statistical approach relies on mathematical methods for solving specific problems.
Combined with five different AI problem domains (perception, reasoning, knowledge, planning, and communication), this produces a clear map showing that the various academic fields of AI research have very little overlap. In the same way, applications that rely on those developments have very little overlap in the expertise needed to develop them, even though a single project can, of course, incorporate multiple AI technologies. For example, developing a knowledge-based expert system has nothing to do with computer vision (or with machine learning in general). Robotic process automation does utilize computer vision for screen scraping, but major improvements in image recognition methods will never translate directly into better RPA solutions.
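To make the symbolic-versus-statistical contrast above a little more concrete, here is a toy sketch of my own (not from Corea’s map): the same task, deciding whether a message is spam, approached both ways. The rules and word weights are invented purely for illustration.

```python
# Symbolic approach: intelligence as explicit symbol manipulation --
# hand-written rules applied to tokens.
def spam_by_rules(message: str) -> bool:
    banned = {"winner", "prize", "free"}
    return any(token in banned for token in message.lower().split())

# Statistical approach: intelligence as mathematics -- numeric scores
# (which a real system would learn from data; hard-coded here) combined
# and compared against a threshold.
WORD_WEIGHTS = {"winner": 2.0, "prize": 1.5, "free": 1.0, "meeting": -2.0}

def spam_by_score(message: str, threshold: float = 2.0) -> bool:
    score = sum(WORD_WEIGHTS.get(t, 0.0) for t in message.lower().split())
    return score > threshold

print(spam_by_rules("free prize winner"))  # True
print(spam_by_score("free prize winner"))  # True (score 4.5)
print(spam_by_score("free team meeting"))  # False (score -1.0)
```

Both functions “detect spam”, yet they embody entirely different ideas of what the intelligence consists of – which is exactly why expertise in one paradigm transfers so poorly to another.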
Know Capabilities, not Buzzwords
Do you have to learn and understand every part of the AI landscape? Probably not, unless you’ve recently been promoted to CAIO. However, even a basic grasp of the underlying concepts can help you make more rational decisions about purchasing new tools for your organization. Instead of talking in buzzwords and labels, you’ll be able to ask a potential supplier about the specific capabilities and technologies behind them.
Even in a field of AI application as young and narrow as cybersecurity, the actual functionality you buy with an “AI-powered cybersecurity tool” may differ dramatically from vendor to vendor. If you want to avoid investing in snake oil, stop “buying AI” and start looking for concrete solutions to your security risks and challenges.