451 Research Reveals Data Supply Chain Gaps
Learn how to optimize your organization’s data supply chain to gain a competitive advantage.
Increasing demand for data and its evolving uses – particularly personal and sensitive data – are turning traditional data supply chains upside down. Today’s environment necessitates real-time data access to drive decision-making, which in turn requires a continuous, efficient flow of data throughout organizations. It’s clear why manual processes and structured data cannot support this new, fast-paced era of data use.
A survey by S&P Global’s 451 Research and Immuta aimed to identify exactly how the growing importance of real-time data analytics is impacting the data supply chain. The results uncovered three primary challenges that, as the number of data sources and consumers increases, will almost certainly be exacerbated if left unchecked – inhibiting data’s potential and stunting organizations’ ability to compete.
Here, we explore those challenges and the downstream impacts they may have on data teams, practices, and results.
S&P Global and Immuta surveyed 525 data practitioners from North America and Europe, and found three clear challenges that introduce friction into the data supply chain: security, compliance, and governance requirements and restrictions; a lack of skills and personnel; and a lack of available automation.
As data – especially sensitive data – becomes increasingly common and necessary to compete, regulations on its use are proliferating from federal and state governments, as well as industry organizations. These sit on top of data use contracts and data sharing agreements that dictate how data must be used and, more importantly, protected.
The growth of data regulations and compliance mandates is not lost on data teams. According to the survey, 84% of respondents are subject to at least one data use regulation, such as GDPR or HIPAA, and the same percentage believes data privacy and security requirements will limit access to data within the next two years.
But it’s not just the number of regulations that is rising – it’s also their scope. The research shows that 86% of respondents agree that “security and privacy rules have become stricter over time, making it harder to access and use data.” More stringent compliance standards may require new data protection methods, along with the ability to verify that those methods work as intended. The complexity of ensuring compliance can become a data supply chain bottleneck that delays or prevents coveted real-time data access.
While security, compliance, and governance requirements are growing, the same cannot be said for the people responsible for implementing measures to satisfy them.
As data analytics has become a core function in most organizations, the number of data “consumers” – data scientists, analysts, etc. – has increased; the number of data “suppliers” – data engineers, architects, etc. – has not. The survey shows that 38% of data suppliers report a lack of personnel or skills as their biggest pain point, indicating that data supplier roles have not been prioritized to the same extent as data consumer roles.
In light of increasing sensitive data use and regulatory requirements, the need for data suppliers is magnified – without them, data access control methods may be inefficient, inadequate, or both. The implications, including delayed data access and unauthorized data use, are easy to understand and could cause substantial harm to the liable organizations and individuals.
Not only are data suppliers in short supply, but the tools they have to do their jobs often cannot keep up with the pace and demands of an agile DataOps environment. Although automation is broadly used to increase efficiency, 29% of data suppliers report that a lack of available automation is a major pain point that hinders their ability to complete tasks efficiently. Data consumers feel the effects of this as well, with 37% citing lack of automation as a source of technology-based bottlenecks.
It’s worth noting that automation in and of itself is not a silver bullet. While automating otherwise manual processes will certainly accelerate them, not all automated tools are created equal. In an independent study conducted by GigaOm, researchers found that Apache Ranger’s role-based access control with object tagging (OT-RBAC), though automated, required 75x more policy changes than Immuta’s attribute-based access control (ABAC) approach.
Clearly, a lack of automation is the worst-case scenario for data supply chains – but automation that requires substantial human intervention and does not efficiently scale across platforms or data sources is nearly as detrimental.
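To see why the gap can be that large, consider a minimal sketch of the two models. The policy syntax below is hypothetical – it is not Immuta’s or Apache Ranger’s actual API, and the department and region names are invented for illustration – but it captures the structural difference: role-based control tends to need one role and policy per combination of access dimensions, while an attribute-based rule expresses the same logic once.

```python
from itertools import product

# Hypothetical illustration only -- not Immuta's or Apache Ranger's real syntax.
departments = ["finance", "hr", "sales", "marketing"]
regions = ["us", "eu", "apac"]

# RBAC: one role (and one policy) per department/region combination.
# Adding a region means adding len(departments) new roles and policies.
rbac_policies = {
    f"{dept}_{region}_reader": {"department": dept, "region": region}
    for dept, region in product(departments, regions)
}
print(len(rbac_policies))  # 12 policies to create, review, and maintain

# ABAC: one rule evaluated against user and data attributes at query time.
def abac_allows(user: dict, row: dict) -> bool:
    """Allow access only when the user's attributes match the data's tags."""
    return (
        user["department"] == row["department"]
        and user["region"] == row["region"]
    )

# The same single rule covers new departments or regions with no policy edits.
print(abac_allows(
    {"department": "finance", "region": "eu"},
    {"department": "finance", "region": "eu"},
))  # True
```

The multiplier compounds with every new access dimension, which is consistent with the order-of-magnitude difference GigaOm observed.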
The challenges to data supply chains do not exist in a vacuum – their downstream impacts can affect organizations’ data quality, investments, revenue, and even team dynamics.
It’s no surprise that movement to the cloud has accelerated as data demand and use grow – S&P Global and Immuta’s survey found that 76% of data teams expect “to use cloud data technology more frequently for storage, compute, and sharing over the next 24 months.” Earlier research by Immuta also showed that data engineering and operations teams are frequently adopting more than one cloud data platform. The reasons are clear: the cloud offers time and cost savings, along with the flexibility to choose best-in-class technologies while avoiding vendor lock-in.
In spite of this, data supply chain challenges are causing “cloud-conservative” organizations to second-guess or delay cloud adoption. The top three barriers to cloud adoption for this group all fall squarely in line with the top data supply chain challenges: security, compliance, and governance requirements and restrictions. As regulations become both more common and more stringent, organizations that were already slow to adopt the cloud find it easier to stay with legacy on-premises solutions and perimeter security frameworks, since cloud adoption would require rethinking and rearchitecting their data security strategy. Without the proper resources to navigate data security and compliance, these organizations will be less equipped to make sound data-driven decisions and will fall further behind cloud-forward competitors.
Self-service data access is the equivalent of buying straight from a producer, with no middleman – you receive a good or service faster, with less complication and coordination. The same is true for data teams: self-service data access allows data consumers to access data faster and reduces the burden on data suppliers to provision each data access request.
Yet, according to the S&P Global and Immuta report, just 48% of respondents say their organization provides self-service data access and use. That number rises to 85% among organizations that rely on data for strategic decision-making, laying bare just how wide the gap is between cloud-forward and cloud-conservative organizations. For the latter group, the inability to grant self-service access to data is, or may become, a major barrier to achieving a consistent and continuous flow of data throughout the enterprise – which in turn limits data’s value.
One consequence of the inability to provision self-service data access is that by the time data is consumption-ready, it is often stale or outdated. According to the survey data, 55% of respondents say data is out of date by the time it is used. Among data consumers alone, that number jumps to 63%, suggesting that data consumers feel the effects of stale data more strongly than data suppliers. After all, how valuable is data if it’s potentially no longer relevant?
It’s no surprise, then, that point-in-time data, as opposed to real-time data, is the top challenge data consumers face in their roles. Almost 40% of respondents in this group said this static, stale data precludes them from accessing useful data sets and developing timely insights.
Whether you’re sitting in traffic or waiting to access data, bottlenecks waste time and breed frustration. Within the data supply chain, this frustration is not just conjecture – S&P Global and Immuta’s survey provides the data to back it up.
Data suppliers, stretched thin and lacking adequate resources, overwhelmingly report sensing frustration from their data consumer counterparts – 62% say data consumers express frustration when attempting to access and use data. However, just 24% of data consumers agree with that perception.
This points to a mismatch in perspectives: data consumers may not realize they come across as frustrated when interacting with data suppliers, while data suppliers – already overextended and under-resourced – may perceive frustration more acutely. Either way, the disconnect has the potential to stifle collaboration and further impede efficiency and results.
Data practitioners are nothing if not resourceful, and if the tools are available to help do their jobs, they will learn and leverage them. In certain scenarios, however, this can come at the expense of data privacy and security.
One such scenario is when data consumers, frustrated and hindered by delayed data access, turn to free or freemium tools. Because these tools aren’t subject to formal contracts and approval processes, they’re easy to adopt – and 63% of data consumers report using them when they’re unable to get what they need from data suppliers. Unfortunately, that lack of formal product vetting and approval also introduces data security risks. Leveraging resources that fall outside the scope of a holistic data access control framework increases the likelihood of unmanaged and non-auditable data use, and in turn the threat of noncompliance and associated penalties.
Security, compliance, and governance requirements and restrictions, lack of skills and personnel, and lack of available automation are the top data supply chain challenges, but they do not exist in a vacuum. The ripple effects can create a vicious cycle of ineffective data use and broken pipelines that impacts not just business functions and results, but also team productivity and collaboration.
The easiest way to begin tackling these challenges head-on is to adopt an automated data access control solution, like Immuta. Immuta’s policy-as-code approach and dynamic, attribute-based access controls simplify policy creation and implementation across cloud data platforms, reducing the burden on data suppliers to enforce access controls consistently and prove regulatory compliance. With Immuta’s universal cloud data access control, data teams have bypassed common supply chain challenges, increased permitted use cases by 400%, grown data engineering productivity by 40%, cut the number of roles to manage by 100x, and reduced time to data access from months to mere seconds.
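As a rough illustration of the policy-as-code pattern – a generic sketch under assumed names, not Immuta’s actual syntax or API – the idea is to declare a rule once, as data or code, and then render it into each platform’s native enforcement mechanism:

```python
# Generic policy-as-code sketch -- illustrative only, not Immuta's real API.
# One declarative policy, rendered into platform-native enforcement.

POLICY = {
    "name": "mask_pii_for_non_admins",
    "applies_to_tag": "pii",      # enforced on any column tagged 'pii'
    "masked_value": "'***'",
    "exempt_role": "ADMIN",
}

def to_snowflake_ddl(policy: dict) -> str:
    """Render the declarative policy as Snowflake masking-policy DDL."""
    return (
        f"CREATE MASKING POLICY {policy['name']} AS (val STRING) "
        f"RETURNS STRING -> CASE WHEN CURRENT_ROLE() = '{policy['exempt_role']}' "
        f"THEN val ELSE {policy['masked_value']} END;"
    )

print(to_snowflake_ddl(POLICY))
```

The design benefit is that supporting another platform means writing another renderer, not re-authoring and re-auditing every policy by hand.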
Read S&P Global and Immuta’s full report on gaps in the data supply chain and best practices for avoiding them here. To find out how you can grow your business while protecting your data, request a demo.