There’s no disputing the importance of data in driving cutting-edge insights and business outcomes. For many organizations, however, the gap between raw data collection and data-driven results is wide and difficult to navigate.
Colin Mitchell, Immuta’s GM of Sales for EMEA and APJ, recently met with data leaders from a range of industries, including media, insurance, retail, and healthcare, to moderate a discussion about their biggest data priorities and challenges, both technical and cultural. Despite the vast differences in business operations and goals, these leaders faced similar obstacles related to data governance, access control, and regulatory compliance.
At the core of these issues is the question of how to efficiently apply flexible, scalable data policies in the modern data stack. This topic will be the focus of Immuta Founder and CEO Matt Carroll’s speaking session at the upcoming Gartner Data and Analytics Summit in London.
In this blog, we’ll take a closer look at data leaders’ top priorities and challenges, and how centralizing and automating data access control can help put organizations on the path toward successful data initiatives and innovations.
Data Leaders’ Top Enterprise Data Priorities
The data leadership roundtable brought together leaders with titles including Head of Data Architecture, Director of Data Science, Head of Data Platform, and Chief Data Officer, to name a few. The participants represented organizations at varying stages of data maturity: enterprises with established data operations and processes, others in mid-development, and still others in the infancy or planning stages of data use and governance.
Leaders from organizations in the mid-development stage named two top priorities: sharing data across departments, regions, roles, and partners, and providing faster access to data. Those representing organizations with nascent cloud data strategies reported that their top priority was investigating cloud data governance and control to drive business growth and scale.
What is preventing these data professionals from checking off these priorities? Here are their top four obstacles.
1. Data Localization Requirements
Any organization with international operations is familiar with geographically based data use regulations, such as the EU’s GDPR. But the rapid growth in data localization laws – which directly or indirectly dictate how data can be stored or processed within a specific jurisdiction – makes data access management and compliance significantly more complex.
“Trying to make sure that we are fundamentally keeping European user data where it needs to be, that we’re not unintentionally moving data to the U.S., [and] we’re not making data available to countries that might have access to services in those countries, as a global organization is always a challenge,” said one of the leaders during the roundtable discussion. “Making sure that personal data is located in the right place is actually a huge difficulty because many of the larger providers will provide some assurances, but not assurances on your backups [or] failover.”
The proliferation of data localization and data residency requirements – which have more than doubled in the past five years – means organizations will inevitably need to implement access controls and dynamic data masking to segment data and ensure it remains within the appropriate jurisdiction. Flexible attribute-based and purpose-based access controls can help achieve compliance while avoiding manual, unmanageable processes.
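To make this concrete, here is a minimal sketch of how attribute-based and purpose-based checks might combine to enforce data residency. The classes, attribute names, and rule logic are illustrative assumptions for this post, not Immuta’s implementation.

```python
# Illustrative sketch only: a toy attribute-based + purpose-based
# access check for data residency, not Immuta's actual API.
from dataclasses import dataclass

@dataclass
class User:
    region: str            # jurisdiction the user operates in
    purposes: set          # purposes the user is approved for

@dataclass
class Dataset:
    name: str
    residency: str         # jurisdiction the data must stay in
    allowed_purposes: set  # purposes the data may be used for

def can_access(user: User, dataset: Dataset, purpose: str) -> bool:
    """Grant access only if the user's region matches the data's
    residency requirement and the stated purpose is approved on
    both the user and the dataset."""
    in_jurisdiction = user.region == dataset.residency
    purpose_approved = (purpose in user.purposes
                        and purpose in dataset.allowed_purposes)
    return in_jurisdiction and purpose_approved

# Example: an EU analyst querying EU user data for an approved purpose
analyst = User(region="EU", purposes={"analytics"})
eu_events = Dataset(name="user_events_eu", residency="EU",
                    allowed_purposes={"analytics", "fraud-detection"})
assert can_access(analyst, eu_events, "analytics")
assert not can_access(analyst, eu_events, "marketing")
```

Because the decision is driven by attributes rather than per-user grants, adding a new region or purpose means updating metadata, not rewriting every policy.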
[Read More] The Five Data Localization Strategies for Building Data Architectures
2. Slow Data Access Approval Workflows
“When our data scientists [need] access to certain data sets, it takes a while to get them the right access rights,” said one of the roundtable participants. “This is honestly grinding the gears in many projects and it’s frustrating for the technical people who just want to get their hands on it.”
This challenge plagues data teams across all industries and sizes. According to research by S&P Global’s 451 Research and Immuta, 63% of data consumers reported that, due to lengthy approval and delivery workflows, data is often outdated by the time they are able to use it. How valuable is your data if it’s stale by the time it’s accessible?
A key driver of this issue is the need to ensure and prove to legal or compliance teams that sensitive data is adequately protected prior to granting access approval. However, this puts the burden on data teams to translate regulatory legalese into data access policies, then gain approval from non-technical legal stakeholders that those policies are sufficient and compliant.
“You need to explain very technical content to very non-technical people,” said another data leader at the roundtable. “Bridging that gap is a huge issue and it takes a lot of time.”
When all is said and done, the data access approval process may take weeks or even months – simply too long when you’re trying to maintain a competitive edge. Plain-language policy builders help avert this issue by making policies easy for non-technical stakeholders to understand, providing transparency that in turn accelerates approvals and access to data.
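To illustrate the idea, here is a hedged sketch of how a plain-language policy builder might work: a single structured rule renders both as readable English for compliance reviewers and as an enforceable definition for engineers. The rule schema below is a hypothetical example, not Immuta’s policy format.

```python
# Hypothetical rule schema for illustration; not Immuta's policy format.
rule = {
    "action": "mask",
    "columns_tagged": "PII",
    "using": "hashing",
    "exception_attribute": ("department", "Legal"),
}

def to_plain_language(r: dict) -> str:
    """Render the structured rule as a sentence a non-technical
    stakeholder can review and approve."""
    attr, value = r["exception_attribute"]
    return (f"{r['action'].capitalize()} columns tagged {r['columns_tagged']} "
            f"using {r['using']} for everyone except users whose "
            f"{attr} is {value}.")

print(to_plain_language(rule))
# Mask columns tagged PII using hashing for everyone except users
# whose department is Legal.
```

Because the sentence and the enforcement logic come from the same rule, the policy legal signs off on is exactly the policy that gets applied.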
3. Data Silos and Decentralized Data Management
In the early stages of cloud data platform adoption, when a handful of users need access to a limited number of data sources, it’s natural for team-specific processes to form. For instance, data scientists might have a preferred approach and suite of tools that is entirely independent from that of data operations or BI teams. When data use and resources are limited, this structure may make sense. But as organizations grow and scale data use, these disparate approaches inevitably create data silos and can stifle productivity.
“We find that people keep doing the same job because the data scientists want to do it in Python, the operations teams want to do it in Informatica, and the BI team wants to do it in Azure Data Factory, so there’s duplication of data, technology, [and] tasks,” said one of the roundtable participants. “Ideally, you’d process the data once and all three teams would access it universally.”
Without a centralized data management strategy and single source of truth for access permissions, data teams struggle to achieve consistent data policy enforcement or understand what data is being accessed by whom. This further exacerbates the challenge of regulatory compliance.
“We’re having to migrate data into more central stores so that we can access that data across functions. Working out who’s got the same data, whose is most recent, should I be holding [it] – there are so many questions,” said one of the data leaders. “On top of that, we’ve got the basic question of ‘can we share it from this company to that company?’”
Separating policy from platform enables data teams to centralize policy enforcement in a data access platform like Immuta, and apply access controls across any technology in their data stack. This helps break down data silos and provide much-needed transparency to prove compliance, while still allowing data science, operations, and BI teams to use their preferred tools.
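As a simplified sketch of what “separating policy from platform” means in practice, the snippet below renders one logical row-level policy as queries for different engines. The dialect handling here is a toy assumption for illustration; a real data access platform enforces policies natively in each system.

```python
# Toy example: one logical policy rendered for multiple engines.
POLICY = {"table": "claims", "column": "region"}  # limit rows to the user's region

def render_row_filter(policy: dict, dialect: str, user_region: str) -> str:
    """Render the same logical row filter in each platform's SQL dialect."""
    quote = '"' if dialect == "snowflake" else "`"  # identifier quoting differs
    column = f"{quote}{policy['column']}{quote}"
    return f"SELECT * FROM {policy['table']} WHERE {column} = '{user_region}'"

for engine in ("snowflake", "databricks"):
    print(engine, "->", render_row_filter(POLICY, engine, "EU"))
```

Because the policy is defined once, outside any single engine, the same logic follows the data whether a team reaches it from Python, Informatica, or Azure Data Factory.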
[Read More] How to Integrate Identity and Access Management Across Cloud Platforms
4. Cloud Data Security Hesitation and Skepticism
Although 94% of organizations now use at least one cloud data platform, hesitation about cloud data security persists. One data leader who works in financial services and insurance mentioned that the industry is slow to fully embrace cloud platforms due to the volume of sensitive data, including personally identifiable information (PII), that is regularly involved.
In some instances, highly regulated data-driven organizations are well-equipped to manage sensitive data use and balance regulatory requirements. But other companies, especially those in a period of hypergrowth, may not have the necessary resources to confidently stand up such processes. One roundtable participant put it bluntly: “We’d rather miss an opportunity than make a mistake.”
This mindset, particularly when organizations take a DIY approach to cloud-based access control and governance, can ripple through cloud data platform ROI, organizational buy-in, and data-driven decision making. In fact, research by IDC showed that 50% of cloud migration projects are not considered successful, due to challenges with data governance and access controls and the inability to assure leadership and compliance teams that the right controls are in place to keep data secure.
Still, there is reason for optimism about putting the right resources in place to overcome cloud data security challenges. The same IDC study found a correlation between maturity in enterprise intelligence and improvements in revenue, indicating that the long-term benefits of investing in cloud data platforms, and in scalable processes to enable them, can pay dividends.
Best Practices for Scalable Enterprise Data Security
Among the priorities the data leaders cited for the upcoming year, putting foundational systems and processes in place to scale secure data use was the most common. But how best to go about this? At a high level:
- Take inventory of existing data technologies, users, processes, and regulatory requirements to understand your organization’s data landscape
- Create, vet, and implement a data strategy that takes the current environment and future goals into account
- Prioritize data security and access control when building the data strategy, and incorporate a plan or platform that is able to scale with the business
A data access platform like Immuta removes the manual burden and guesswork from data teams by discovering, securing, and monitoring data use throughout the enterprise. To experience how easy policy creation and implementation are with Immuta, try our walkthrough demo.