Customers have high expectations. In a world where there's a marketplace for everything, customers have become accustomed to a breadth of choice and customisation options. We’ve seen everything from travel, shopping, banking, and even government transformed to streamline the customer journey from landing on an app to getting what you want, how you want it.
Sellers in these marketplaces have had to react to make sure their products are easy to buy, easy to use, and easy to adapt. Those that couldn't keep pace have fallen away. In a world where fractions of a second impact sales, users have little patience for investing their own time into deciphering the quirks of your product.
The same transformation hasn’t happened yet for customers of data providers. But it's coming.
The current "best practices" in making data consumable don't go nearly far enough. Connecting data is still a heavily manual process, where the burden (and cost) falls largely to the consumer.
For many companies in the modern economy, the value they provide is the data they make available. Yet their customers (many of whom have paid hefty access fees) can't realise any value until they've understood how the data provider has chosen to represent that data.
For the most complex data domains, this demands a further investment of hundreds of thousands, or even millions, of pounds before customers can even leave the garage.
Why is it that data providers expect so much from their customers, and why do customers simply accept it?
For the most part, the burden on data providers of providing a custom view of data for each of their customers quickly goes past the point of being cost effective. Even for the simplest of data sets, propagating every update across many different customer-specific versions would be a huge drag on the business.
There has been some progress.
GraphQL popularised the idea that data consumers should get a say in the data that's returned to them. Instead of making a blunt request to a data service and having to accept whatever was returned, GraphQL provides a single entry point that exposes data from a multitude of services (albeit with some integration heavy-lifting by the producers) and lets consumers choose the subset of the data that interests them.
However, that flexibility only goes so far. GraphQL consumers don't get to define the shape, names and structure of data that's relevant to them - they're still playing on the terms set by the data provider. This means there's still an integration phase for data, mapping received data to the target domain. That integration code is brittle, and has to change whenever producers decide to push a change.
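To make the limitation concrete, here is a small hypothetical GraphQL query (the schema, field names, and types are invented for illustration, not taken from any real provider). The consumer can omit fields they don't need, but the names and nesting are fixed by the provider:

```graphql
# The provider's schema dictates the names and structure;
# consumers can only select a subset of it.
query {
  tradeById(id: "T-1001") {
    counterpartyLegalName   # the provider's name, even if the consumer calls this "client"
    notionalAmount          # always nested here, even if the consumer wants it flat
    settlement {
      date
      currencyIsoCode
    }
  }
}
```

If the provider renames `counterpartyLegalName` or restructures `settlement`, every consumer's query, and the mapping code behind it, must change in step.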
Is that really the best we can do? Why should consumers have to take the data in such a prescribed format? Can't we go further?
Power to the data customer
At Vyne, we’ve taken this a step further - allowing customers to choose the shape of data they receive, from anywhere across your entire data catalog. Bespoke data interfaces for every data consumer, implemented and adapted automatically.
Rather than forcing customers to transform and stitch together the data they receive, data consumers should be able to simply define their own schemas that express what data they want and the form it should take.
Allowing customers to ask for data in the shape that they need brings huge benefits.
The burden of mapping data from the multitude of sources to the shape that’s meaningful to consumers is gone, and everyone remains protected from changes made by producer systems. The result is a lower up-front integration cost, and ongoing cost-of-change doesn’t flow downstream to all the consumers.
And while GraphQL has certainly blazed a trail in allowing consumers to cherry-pick their own data, it still forces everyone to consume data in exactly one shape, and requires all consumers to update when that schema changes.
That's a great start, but we need to go further, removing the concept of a single global schema that everyone must subscribe to. Global schemas suffer from a paradox of adoption - the more widely they're adopted, the greater the cost of change when it comes time to refactor. Inevitably, this leads to awkward trade-offs, and complex change co-ordination.
Through a consumer-centric approach, each consumer can define a different schema that's meaningful to them. The mapping between source systems is handled for them automatically, using Vyne.
We make this possible by adding a small amount of extra metadata to your API, expressed using Taxi, our OSS language for data. Along with supercharging the ability to query your API, this opens up a range of advanced data operations that can be automated using our platform, Vyne.
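As a rough sketch of what that metadata looks like (the type and field names below are invented for illustration, and the syntax is a simplified take on Taxi's published style, so details may differ), producers tag their fields with shared semantic types, and consumers then query by those types using names of their own choosing:

```taxi
// Shared semantic types, declared once in the taxonomy
type CustomerName inherits String
type OrderDate inherits Date

// A producer's model, annotated with those semantic types
model Order {
   custLegalNm : CustomerName
   placedOn : OrderDate
}
```

A consumer's query can then ask for `CustomerName` and `OrderDate` under whatever field names suit their own domain; because the mapping hangs off the semantic types rather than the producer's field names, a producer renaming `custLegalNm` doesn't break the consumer.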
Clients can explore your data catalog, which is updated in real time, and construct their queries using the domain language and form that make sense for them.
The world of data everywhere has given rise to a whole new economy of data products. As with any marketplace, the vendors that win will be the ones who can match customers' expectations of personalisation and ease of adoption.
The old "producer-centric" mentality of data accessibility needs to evolve, and vendors need to start providing data in the shape that suits their customers. The companies that can provide data to their clients more easily and cheaply have a clear competitive advantage.
If you're interested in seeing this in action, and learning more about how it could help transform your data product experience, we'd love to hear from you.