The Story of Edge AI

Whether you are a firm believer in the bright future of Artificial Intelligence or somewhat of a skeptic like me, you simply cannot ignore the great strides AI technologies have made in recent years. Intelligent products powered by machine learning are everywhere: from chatbots to autonomous vehicles, from predicting consumer behavior and detecting financial fraud to cancer diagnostics and crop harvesting. There is, however, a major factor limiting even further applications of AI and ML in almost any industry: AI algorithms are very computationally intensive and, until quite recently, were prohibitively expensive to operate “on-premises”.

The rise of cloud computing has given a major boost to AI technologies in recent years, and to this day, it is the cloud that powers the majority of AI applications, for both consumers and enterprises. However, what works perfectly well for business analytics, language translation services or, say, virtual assistants might not be enough for more mission-critical scenarios.

The Challenges of AI in the Cloud

The biggest challenge of cloud-based AI is potential connectivity issues: if a device is only occasionally connected to the Internet, it won’t be able to utilize cloud-based machine learning services efficiently. For real-time scenarios like self-driving cars, even a few additional milliseconds of network latency can be a deal-breaker. Another major problem of cloud-based AI is the sensitivity of the information that has to be sent to the cloud for processing. Companies operating under strict regulations (which nowadays includes anyone dealing with personally identifiable information) simply cannot afford the risk of a security breach or a compliance violation fine.

These problems are nothing new, of course – they’ve been the biggest obstacle to adopting any cloud services for over a decade. To address this challenge and to expand their customer base to these “not cloud-ready” companies, cloud service providers have been pushing the concept of Edge Computing. At face value, edge computing simply means bringing computing and storage resources closer to the customer’s geographical location to reduce network latency. This started over 20 years ago with CDNs (content delivery networks) and has since expanded to mobile phones, IoT devices and network gateways.

Nowadays, these devices are powerful enough to implement a substantial part of the cloud service stack locally, creating a distributed computing platform between corporate networks and clouds, dramatically improving response rates and reducing the strain on the core cloud infrastructure. It is important to understand that this trend does not by any means undo the achievements of the cloud model. Edge computing isn’t returning to on-prem data centers; it is simply another phase in the evolution of the cloud. Even the name itself implies it: the edge, of course, refers to the cloud, its proverbial “silver lining” …

In a sense, edge computing is an alternative approach to hybrid architectures, one that tries to blur the massive divide between on-prem and cloud technology stacks and to expand the reach of cloud service providers even deeper into their customers’ networks. Bringing AI capabilities closer to their consumers is just another logical step in that direction.

The Future of the Intelligent Edge

It is well known that preparing data for machine learning and training ML models are the parts of AI that require the most computing resources. The actual inference (i.e., putting a trained model to productive use) is much lighter. Optimizing ML models to run on edge devices is one of the primary methods of decoupling them from the cloud. Solutions like AI-powered antiviruses that only require updates every six months are already a reality. Simple image recognition can run directly on your mobile phone, helping you take better pictures of your food.
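To illustrate the principle behind one common optimization technique, here is a minimal sketch of post-training weight quantization: trained float32 weights are mapped to 8-bit integers, shrinking the model roughly fourfold and enabling faster integer arithmetic on constrained edge hardware. This is a simplified, self-contained illustration of the idea (the random weight matrix stands in for a real trained layer), not the API of any particular framework:

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric per-tensor quantization of float32 weights to int8."""
    scale = np.abs(weights).max() / 127.0          # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights for comparison."""
    return q.astype(np.float32) * scale

# Stand-in for one layer of a trained model
rng = np.random.default_rng(42)
w = rng.normal(0.0, 0.5, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_restored = dequantize(q, scale)

print("memory: %d -> %d bytes" % (w.nbytes, q.nbytes))       # 4x reduction
print("max abs error: %.5f" % np.abs(w - w_restored).max())  # bounded by scale/2
```

Production toolchains (TensorFlow Lite, ONNX Runtime, etc.) apply the same idea per-channel and calibrate activations as well, but the trade-off is identical: a small, bounded accuracy loss in exchange for a model that fits on the edge device.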

More sophisticated models require specialized hardware to run. Major semiconductor manufacturers like Intel and Nvidia and even cloud service providers themselves like AWS have specialized AI chips in their portfolios; smaller companies are offering solutions for developers to simplify and optimize the usage of this hardware. In other words, a whole ecosystem is being born, bringing AI/ML closer to customers and enabling usage scenarios previously impossible.

The field is still far from mature, however. Perhaps the best indication of this is the lack of standardization efforts and a very fragmented market, in which very few vendors can deliver a full technology and service stack covering different use cases. Will we see the emergence of a standardized, abstracted architecture similar to VMware or Kubernetes for traditional computing? Perhaps: companies like Run:AI are already working on AI orchestration platforms.

One thing is clear though: cloud AI isn’t going anywhere. For the foreseeable future, we’re going to deal with a multitude of hybrid architectures. History has a tendency to repeat itself, after all.
