Data friction is arguably the biggest obstacle on any organization’s journey toward optimal business productivity. Even if you have not heard the term before, you’ve definitely faced its consequences in one form or another (quite probably, in several forms at once). You need your product’s sales data for EMEA, but it’s not a top priority for your IT team? Your marketing team would love to derive better insights from your customer data, but your ERP system is not supported by their fancy cloud service? Your developers must wait several days to get an approved, anonymized copy of production data for testing?

Or maybe your bosses are urging you to migrate to the cloud, but you are too scared of GDPR? Every time your business urgently needs access to some data, and technical debt, technological limitations, or compliance challenges prevent it from happening as expected, you are facing data friction again.

Data friction is the sum of all delays and complications in delivering business data to its consumers within an enterprise. In the age of Digital Transformation, when data is the lifeblood of many companies, too much friction is not just a productivity killer but the difference between success and failure as a business. Unfortunately, security and compliance are also often considered major sources of data friction, and ones that seem all too easy to fix: just relax your security policies a bit. That works, of course, only until a major data breach happens.

There are, of course, multiple approaches to reducing data friction. DataOps is an entirely new methodology for aligning your people, business processes, and IT to enable rapid, automated management of data. There are also numerous tools and technologies designed specifically to address certain aspects of data friction. For example, data virtualization solutions replace traditional ETL processes with real-time, automated access to remote data sources, creating a layer of abstraction above multiple data silos and enabling transparent data transformations that conform to security and compliance requirements.
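As an illustration of the concept (not any specific vendor’s API – every class and function name below is invented for this sketch), here is a minimal Python example of what a data virtualization layer does: instead of copying data around via ETL, it resolves queries against remote sources on demand and applies compliance transformations such as masking on the fly.

```python
# Illustrative sketch of a data virtualization layer: no real product's API,
# just the concept of on-demand access plus transparent compliance transforms.

def mask_email(value: str) -> str:
    """Pseudonymize an email address before it leaves the virtual layer."""
    name, _, domain = value.partition("@")
    return f"{name[0]}***@{domain}"

class VirtualSource:
    """Wraps one remote data source; fetches rows on demand instead of ETL."""
    def __init__(self, fetch, transforms):
        self.fetch = fetch            # callable returning rows from the silo
        self.transforms = transforms  # column -> masking/compliance function

    def query(self):
        for row in self.fetch():
            yield {col: self.transforms.get(col, lambda v: v)(val)
                   for col, val in row.items()}

# Two silos (a CRM and an ERP, simulated here) exposed through one layer:
crm = VirtualSource(lambda: [{"customer": "Alice", "email": "alice@example.com"}],
                    {"email": mask_email})
erp = VirtualSource(lambda: [{"customer": "Alice", "order_total": 1200}], {})

# The consumer sees a single, already-compliant view of both sources.
for source in (crm, erp):
    print(list(source.query()))
```

The appeal of the pattern is that consumers never see raw, non-compliant data and no copies are created – but note that it works by adding yet another layer on top of the existing silos, which is exactly where the next problem comes in.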

However, whenever you add another layer of abstraction to an IT architecture, especially if its only purpose is to address the shortcomings of existing components, you’re not actually solving the underlying problem. On the contrary, the overall complexity grows, and with it the potential for friction (or outright failure). Even if each component is the “best of breed” solution for a specific need, the whole is never “greater than the sum of its parts” – you’re lucky if it’s still just good enough. This idea applies to just about every field in IT, but it is especially relevant for the database market, where highly specialized products optimized for a specific subset of data are still the norm.

This specialization leads to increasing data fragmentation between silos, teams, and units within an enterprise. Needless to say, applying business analytics to this data is a challenge: you either have to be content with the limited capabilities built into, say, a time-series database, or introduce additional tools and processes to copy the data into a separate location where “proper” BI solutions can work with it, which adds further friction and lots of room for error. But does it really have to be this way?

Well, an obvious alternative approach is to keep all your data in a single universal database capable of combining the traditional relational model with modern alternatives like graph or document models. This way, you only need to manage a single technology stack with a single set of access and security policies. That is exactly what Oracle is offering with its “converged database”, and if you opt for its Autonomous Database service, you don’t even need to worry about managing that stack.
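To make “converged” a bit more tangible, here is a hedged sketch using the python-oracledb driver: a single SQL statement that joins ordinary relational columns with a JSON document stored in the same database, via Oracle’s JSON_VALUE function. The table, columns, and connection details are illustrative assumptions, not a real schema.

```python
# A minimal sketch (table name, columns, and credentials are assumptions)
# of one SQL statement combining relational and document data in the same
# database, using the python-oracledb driver and Oracle's JSON_VALUE.
import oracledb

conn = oracledb.connect(user="analyst", password="...",
                        dsn="myadb_high")  # hypothetical connect alias
cur = conn.cursor()

# Relational data (orders) and document data (the JSON payload sent by the
# web shop) live side by side; no export to a separate document store needed.
cur.execute("""
    SELECT o.order_id,
           o.order_total,
           JSON_VALUE(o.payload, '$.customer.country') AS country
    FROM   orders o
    WHERE  JSON_VALUE(o.payload, '$.channel') = 'web'
""")
for row in cur:
    print(row)
```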

However, just placing all data into a single silo would be rather counterproductive if the only thing you could do with it is store it securely. Accordingly, Oracle’s long-term strategy is to build a multitude of data processing and analytics capabilities directly into its database. We have already covered Oracle APEX – a low-code application development platform that helps even business users with limited programming skills create applications that can scale effortlessly to enterprise level while keeping their data secured directly in the Oracle Database. But the company doesn’t stop there.

Last week, Oracle announced major new self-service capabilities in its Autonomous Data Warehouse, bringing the same level of data democratization to business analytics. Business users can now import their existing semi-structured data – Excel spreadsheets, XML or JSON files – or bring in data from internal and external sources themselves, using a simple wizard-driven UI, with no knowledge of data structures or transformations and no support from IT required. The data is then immediately available for management and visualization using the APEX platform.
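The wizard hides all of this, of course, but for a sense of what such a load amounts to under the hood, here is a hedged sketch of the programmatic route: calling the Autonomous Database’s DBMS_CLOUD.COPY_DATA procedure to load a file from object storage into a table. The table name, credential, and file URI below are illustrative assumptions.

```python
# Hedged sketch of the programmatic counterpart to the self-service wizard:
# DBMS_CLOUD.COPY_DATA loads a file from object storage into an existing
# table. Table name, credential, and URI are illustrative assumptions.
import oracledb

conn = oracledb.connect(user="analyst", password="...", dsn="myadb_high")
cur = conn.cursor()

cur.execute("""
    BEGIN
      DBMS_CLOUD.COPY_DATA(
        table_name      => 'SALES_2021',
        credential_name => 'OBJ_STORE_CRED',
        file_uri_list   => 'https://objectstorage.region.oraclecloud.com/n/namespace/b/bucket/o/sales_2021.csv',
        format          => '{"type" : "csv", "skipheaders" : 1}'
      );
    END;
""")
conn.commit()
```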

The product now offers rich analytics capabilities that also come with self-service tools for creating business models, performing data lineage and impact analysis, and automating insight discovery. Again, without any technical knowledge, a business user can quickly identify hidden patterns or anomalies in their data, or connect their favorite analytics tool to dig deeper. On top of that, the Autonomous Data Warehouse now comes with built-in machine learning capabilities, including AutoML – technology that hides most of the complexity associated with ML from users by automatically selecting algorithms and features for them.
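Oracle exposes AutoML through its own OML interfaces, which I won’t attempt to reproduce here; instead, the following generic Python sketch (using scikit-learn) merely illustrates the kind of work AutoML automates for the user: evaluating several candidate algorithms and keeping the best-scoring one.

```python
# Generic illustration (scikit-learn, not Oracle's AutoML API) of what
# automated algorithm selection does: evaluate several candidate models
# with cross-validation and keep the best performer.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "decision_tree": DecisionTreeClassifier(),
    "random_forest": RandomForestClassifier(),
}

# Score every candidate and pick the winner; AutoML tools do essentially
# this (plus feature selection and hyperparameter tuning) behind the scenes.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Best algorithm: {best} (accuracy {scores[best]:.3f})")
```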

Crucially, none of these capabilities require any additional data transformation or export to a different data store, avoiding IT-related time delays. Meanwhile, sensitive business data remains in a single location, highly available and secured against a broad range of external and internal threats. To me, this looks like a Zero Friction situation!