Google has launched Vertex AI, its fully managed artificial intelligence (AI) and machine learning (ML) platform for the enterprise. It tackles the challenge of operationalizing AI/ML projects with automated pipelines, accessibility for business users, data scientists, and ML engineers alike, and pre-trained APIs for speech, vision, video, and more. The divide between an organization's technical know-how and the precise customization and implementation of ML models is shrinking rapidly, thanks to AI Service Clouds.

Fully Managed Will Be a Distinguishing Feature

Fully managed operation will become a distinguishing feature of AI Service Clouds. Although attention is primarily given to their AI capabilities, such as plug-and-play language, cognitive, and prediction services, implementing those capabilities is simplified by managed services. For a wide variety of personas to collaborate on an AI project, a few things have to be in place: a user interface that abstracts away complexity, backed by reliable AutoML that assists with data preparation, model selection, feature engineering, testing, validation, and beyond. The computational effort to power AutoML for model development, let alone deployment, is most realistically accessed from the cloud. The requirements are many: storage for training and operational data, flexible training options such as distributed, spot, or batch compute, model versioning, and much more. AI Service Clouds delivered as managed services abstract away the complexity of running AI/ML projects and take over the computation-intensive tasks.
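The model-selection step that AutoML automates can be sketched in a few lines: fit several candidate models on a training split, score each on a held-out validation split, and keep the winner. The dataset, candidate models, and helper names below are illustrative toys, not Vertex AI's API.

```python
# Toy model-selection loop, the core idea behind AutoML candidate search.
# Candidates and data are hypothetical; real AutoML also automates data
# preparation, feature engineering, and hyperparameter tuning.

def mean_model(xs, ys):
    """Baseline candidate: always predict the training mean."""
    mean = sum(ys) / len(ys)
    return lambda x: mean

def linear_model(xs, ys):
    """Candidate: least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return lambda x, a=a, b=my - a * mx: a * x + b

def select_model(train, valid, candidates):
    """Fit each candidate on train, score on valid, return the best."""
    def mse(model, data):
        return sum((model(x) - y) ** 2 for x, y in data) / len(data)
    fitted = {name: fit([x for x, _ in train], [y for _, y in train])
              for name, fit in candidates.items()}
    best = min(fitted, key=lambda name: mse(fitted[name], valid))
    return best, fitted[best]

# Synthetic data following y = 2x + 1, split into train and validation.
train = [(x, 2 * x + 1) for x in range(8)]
valid = [(x, 2 * x + 1) for x in range(8, 12)]
name, model = select_model(train, valid,
                           {"mean": mean_model, "linear": linear_model})
print(name)  # the linear candidate fits exactly, so it wins
```

A managed AI Service Cloud runs loops like this at scale, across many model families and hyperparameters, which is precisely why the compute is most realistically sourced from the cloud.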

AI Service Clouds are an emerging market space, with much emphasis on AutoML, automated pipelines, pre-built models accessible via API, and making AI/ML accessible to business users without formal training in the field. But the computational effort required to make MLOps appear so effortless remains a fundamental part of this new wave of enterprise AI.

Our Recommendations

Be aware of your data: Your data has gravity, and whether intentional or not, where your data resides influences the vendors and services you use. Especially in the AI/ML field, which is so data-reliant, be strategic about planning where and how data is stored so that it can serve your use cases as well as possible. The security and privacy controls around your ML data should be considered on-premises and, especially, in the cloud. Google Cloud Vertex AI provides additional security measures for sensitive data, such as encryption and dataset aggregation and obfuscation.
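As one concrete illustration of the obfuscation idea, a salted one-way hash can replace a direct identifier before a dataset leaves your control for training. This is a generic pseudonymization sketch, not Vertex AI's actual mechanism; the field names and salt are hypothetical.

```python
# Pseudonymize a direct identifier with a salted SHA-256 digest before
# the records are uploaded as training data. One-way: the raw value
# cannot be recovered from the token, but equal inputs map to equal
# tokens, so joins and per-customer aggregation still work.
import hashlib

def pseudonymize(value: str, salt: str) -> str:
    """Replace an identifier with a salted SHA-256 hex digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

records = [
    {"email": "ada@example.com", "churned": 0},
    {"email": "alan@example.com", "churned": 1},
]
SALT = "rotate-me-per-dataset"  # store the salt separately from the data
safe = [{**r, "email": pseudonymize(r["email"], SALT)} for r in records]

# The label survives untouched; the raw address never reaches the model.
assert safe[0]["email"] != records[0]["email"]
```

Note the design choice: without a salt, common identifiers could be recovered by hashing guesses, so the salt should be managed as a secret and rotated per dataset.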

Be aware of your use cases: It’s okay to dream big when it comes to AI/ML, but don’t let the dream steer strategy. Always start from what the business needs, then consider which technologies or revised processes could meet those needs. Don’t pursue AI for its own sake.

Be aware of who your AI/ML project impacts: If there is any time to implement multi-stakeholder practices, it is with your AI/ML projects. You will need a diverse development team to ask the right questions: How was the data that we use for training collected? Is it representative of your customers, or does it identify or alienate any group? Does the data adequately meet the needs of the business case? What communication strategies with end users are in place?

By automating and abstracting away complexity, AI Service Clouds make diverse teams more feasible, offer ready-made MLOps tools to the enterprise, and bring flexibility to the computation question. Read the Market Compass: AI Service Clouds to learn more.