<img height="1" width="1" style="display:none;" alt="" src="https://dc.ads.linkedin.com/collect/?pid=306561&amp;fmt=gif">

 

 

Interview with Dominic Sartorio, Senior Vice President for Products & Development, Protegrity

Written by Corinium on Apr 25, 2019 4:10:17 PM


Ahead of the Chief Data Analytics Officers & Influencers, Insurance event, we caught up with Dominic Sartorio, Senior Vice President for Products & Development at Protegrity, to discuss how the industry is evolving.

Can you tell me a bit more about your role at Protegrity?

I am head of Products here, which comprises R&D, Product Management, and Global Customer Support. Most of my days focus on understanding what’s happening in the market, defining overall product strategy and direction, and translating that into execution across the various teams.

Are you seeing any specific issues around the insurance industry at the moment that should concern CDAOs? What do you recommend to organisations to help them overcome these?

Yes, definitely! The last 10+ years have seen insurance become as data-driven as any vertical industry. For example, P&C insurance strives to understand its customers and households better through data, to provide better customer service and anticipate insurance needs, as well as to accurately measure risks. Life insurance needs accurate data on consumer health, age, and other metrics of risk. And it’s become a hyper-competitive business, so enhancing customer service through data is critical for maintaining customer loyalty.

And more recently, we have also seen innovation with IoT (the Internet of Things), for example auto insurance companies offering to capture real-time driving statistics from policyholders’ cars to encourage and reward safe driving. So as time goes by, we see more and more data innovation, requiring access to more data by more people in your organization. But this creates risks: more opportunities for misuse or breaches, with any given incident potentially more damaging to brand and customer trust given the volumes and nature of the data involved. It’s important to have a competency in place for understanding how to mitigate those risks without getting in the way of this innovation.

Why should Chief Data & Analytics Officers care about data security?

Most enterprises in the 21st century regard data as an incredibly valuable asset, and insurance is no exception: it lets you know your customers better, know your market better, operate more efficiently, and realize other business benefits. All assets need to be optimally leveraged for maximum business value while also being protected from misuse, whether there was malicious intent or not, and this needs to be the responsibility of whoever owns that asset in the company. Just as CFOs ensure this for financial resources and HR leaders ensure it for human resources, so it is with the CDO/CAO and data. You want to enable generating value with data, but also to do it safely.

What is the most common mistake people make around data?

Must I pick just one? 

OK, the biggest mistake I’ve seen is simply not being on top of where your sensitive data is, or being blissfully unaware that a given customer dataset, for example, is sensitive for specific reasons (regulatory or otherwise) and needs special handling. This can happen for any number of reasons: it’s nobody’s job to be responsible for it, or it just isn’t an organizational priority. This lack of awareness then leads to such data being broadly distributed, inevitably more broadly than anybody realizes. It gets shared with anybody in the organization who asks for it (whether they need it or not), or even shared with customers and partners without much thought. Then when there is a breach, it comes as a shock: “Wow, I didn’t even know that application had access to so much sensitive data.” For example, a single source of truth like your customer master might have had some basic access controls in place, but one of its administrators agreed to take a snapshot of that data and share it with, say, a marketing analyst team, and it’s their BI tool that got breached.

Step One in any data security program should be to discover and classify the datasets that are sensitive, know where that data is, and understand who really needs it to do their jobs. This is Governance 101, and there exist tools (including Insight Discovery from Protegrity) that can help with this. Then you can be more confident you’re protecting that data in the right way, in the right places, via protection policies that keep it safe without getting in the way of the business getting value from it.
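
As an illustration of what that discovery-and-classification step looks like in practice, here is a minimal, hypothetical Python sketch of pattern-based scanning. Real tools such as Insight Discovery are far more sophisticated; the patterns, column names, and dataset below are invented for the example.

```python
import re

# Hypothetical sensitive-data patterns; real discovery tools use much richer
# detection (checksums, context, dictionaries, ML) than bare regexes.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify_value(value: str) -> set[str]:
    """Return the set of sensitive-data classes detected in a single value."""
    return {label for label, rx in PATTERNS.items() if rx.search(value)}

def scan_dataset(rows: list[dict]) -> dict[str, set[str]]:
    """Map each column name to the sensitive classes observed in its values."""
    findings: dict[str, set[str]] = {}
    for row in rows:
        for column, value in row.items():
            hits = classify_value(str(value))
            if hits:
                findings.setdefault(column, set()).update(hits)
    return findings

# Example: flag which columns of a customer extract need special handling.
sample = [{"name": "Ada", "contact": "ada@example.com", "card": "4111 1111 1111 1111"}]
print(scan_dataset(sample))  # {'contact': {'email'}, 'card': {'credit_card'}}
```

The output of a scan like this feeds the governance step: the flagged columns get protection policies, and everything else can flow freely.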

The second biggest mistake I’ll call “silo mentality”. I’ve found that many IT as well as business leaders have a mental model of data as simply part of, or belonging to, a specific database or application, and thus they falsely conclude that procuring a tool to protect that given environment will sufficiently protect the data. False. In data-driven organizations, data flows. It is aggregated from various transactional systems into data masters or data lakes, analysed, distributed to downstream users or even third parties, reported on, exported to Excel, attached to emails; you name it, data is being shared across silos. And then there is the Cloud. A silo mentality leads to what I call “whack-a-mole” data protection strategies: tactically reacting to wherever sensitive data appears next, so you end up only as secure as your weakest link. Take the customer master example I mentioned earlier: maybe somebody thought, “I need to protect my customer master, so as its administrator I will put access control in place,” but then didn’t think about how that data might be shared downstream, or didn’t think it was their job to worry about that. Then when the BI tool is breached, the organization’s first reaction might be “how could that happen, our customer master was protected”, and it’s a shock to realize the data was breached via a less secure downstream environment. Far better is to apply enterprise governance principles: define policies on the data itself, and apply those policies consistently wherever that data is used. The protection policy follows the data. This is a much more proactive and scalable model. Not coincidentally, Protegrity’s platform is built from the ground up to enable this style of data security governance.
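
To make “the policy follows the data” concrete, here is a minimal, hypothetical Python sketch (not Protegrity’s actual platform or API): a single central policy, keyed by data element rather than by system, that every enforcement point consults, whether the data sits in the warehouse, a BI extract, or an email attachment. The elements, roles, and methods are invented for the example.

```python
# Central policy keyed by data element -> (protection method, roles allowed
# to see clear text). Every enforcement point consults the same policy,
# rather than each silo inventing its own access rules.
POLICY = {
    "ssn":         ("tokenize", {"risk_analyst"}),
    "card_number": ("tokenize", {"payments"}),
    "email":       ("mask",     {"customer_service", "marketing"}),
}

def enforce(element: str, value: str, role: str) -> str:
    """Apply the same policy wherever the data shows up."""
    method, allowed = POLICY.get(element, ("deny", set()))
    if role in allowed:
        return value                           # authorized: clear text
    if method == "tokenize":
        return "tok_" + str(abs(hash(value)))  # stand-in for a real tokenizer
    if method == "mask":
        return value[:2] + "***"
    return "<redacted>"

# The same call protects the customer master, the downstream BI extract,
# and the exported spreadsheet alike: no weakest link.
print(enforce("ssn", "123-45-6789", "marketing"))     # tokenized
print(enforce("ssn", "123-45-6789", "risk_analyst"))  # clear text
```

The design point is that the breach surface stops depending on which copy of the data was attacked; protection travels with the element, not with the system.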

How fast are the advances you’re seeing in AI at the moment? What do you recommend to organisations to help them harness this while also showing a solid ROI?

The areas of fastest AI innovation and adoption are around machine learning, applying it to more and more use cases where large volumes of data exist and human beings simply don’t have the bandwidth to keep up with the ongoing stream of transactions, events, or whatever other changes in the environment that data describes. Machine learning can keep up, by continually looking for trends, anomalies, or predictive signals that are interesting for the given use case. For example, insurance companies have massive amounts of customer data, and if it is augmented by tracking social interactions with a customer, and by IoT data such as the automotive example I mentioned earlier, that’s a huge and constantly changing real-time stream of data about that customer. And the average insurance company can have millions of customers. There are many needles in that haystack, many opportunities to proactively serve and delight a customer, or conversely to flag risks that you’d love to be aware of. Humans can’t keep up. ML can.

Now, there is a data risk here. ML becomes like any other user of data, albeit a very active and high-volume user, and that usage needs to be kept safe. Is the ML algorithm using that data for purposes that comply with regulations? And what about the output of that ML? If it kicks out predictions or alerts regarding a consumer’s changes in behaviour, or indications of risk that could adversely affect them (life-changing events like moving to a new home, buying a new car, marriage and children, medical conditions), that may be data the consumer considers private, and you wouldn’t want it misused or falling into the wrong hands. So data security policies also need to consider such use cases.

By the way, Protegrity is also innovating in this area. Our protectors – the components that protect and unprotect sensitive data as prescribed by data protection policies – are always logging their activities: which data elements were accessed, and by whom. We’re building ML algorithms to continually monitor for unusual behaviours (perhaps indicative of an account being hijacked), as well as for ongoing data usage trends that may suggest a change in security policy (such as a given dataset being used by ever more stakeholders in the business).
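
As a generic illustration of that kind of monitoring (not Protegrity’s actual implementation), the sketch below trains an off-the-shelf anomaly detector on hypothetical per-user access features derived from audit logs, then flags an account whose behaviour suddenly departs from the norm. The feature set and thresholds are invented for the example.

```python
# Generic illustration: mine protector audit logs for unusual access
# behaviour with an off-the-shelf anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per user-day: [records accessed, distinct datasets
# touched, off-hours requests]. Real systems would engineer richer features.
normal = np.random.default_rng(0).poisson(lam=[200, 3, 1], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A hijacked account suddenly pulls far more data across many datasets at night.
suspicious = np.array([[5000, 40, 30]])
print(model.predict(suspicious))  # [-1] marks an anomaly worth investigating
```

The same pipeline, pointed at slower-moving trends rather than spikes, is what would surface the “ever more stakeholders are using this dataset” signal that prompts a policy review.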

What are the relative advantages of tokenization vs encryption?

It definitely depends on the type of data; no one method is always better than the other. But I’ll give an example in favour of each. For a large volume of structured data – for example, a customer master or data warehouse where many stakeholders in your organization need to see different subsets – tokenization is generally better. You can protect individual fields, or even subsets of fields (e.g. segments of a credit card number), and establish data security policies where your risk management people see only the fields that inform risk decisions, your customer service people see some PII data to identify and interact more effectively with customers, and so forth. Tokenization is also very fast, so it typically won’t add unacceptable overhead to a high-volume transactional or analytical system.
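
To illustrate the field-level idea, here is a minimal, hypothetical sketch of vault-based tokenization in Python; Protegrity’s actual tokenization works differently and is far more robust, and the roles and policy here are invented. Note how the token keeps the shape of a card number and preserves the last four digits, so downstream systems and customer service keep working without ever holding the real value.

```python
import secrets

# Token vault: maps tokens back to real values. Only the vault (a hardened,
# access-controlled service in practice) can reverse the mapping.
VAULT: dict[str, str] = {}

def tokenize_card(pan: str) -> str:
    """Replace a card number with a same-shaped token, last four digits intact."""
    token = "".join(secrets.choice("0123456789") for _ in range(12)) + pan[-4:]
    VAULT[token] = pan
    return token

def detokenize(token: str, role: str) -> str:
    """Policy: only risk analysts may recover the real number."""
    return VAULT[token] if role == "risk_analyst" else token

t = tokenize_card("4111111111111111")
print(t)                                  # e.g. 827364550192...1111, same shape
print(detokenize(t, "customer_service"))  # still tokenized, last 4 visible
print(detokenize(t, "risk_analyst"))      # real PAN
```

Because the token is format-preserving and the lookup is a constant-time operation, this style of protection adds little overhead to transactional or analytical workloads, which is the performance point made above.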

Meanwhile, encryption is good for unstructured datasets – MS Office documents, medical images, and so forth – where access is binary: either a given user can see the document or they can’t, and encrypting or decrypting the whole thing at once is fast and minimally invasive.
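
A minimal sketch of that all-or-nothing model, using the Fernet symmetric cipher from Python’s `cryptography` package; key management, which is the hard part in practice, is elided here.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice this lives in a key-management service
f = Fernet(key)

document = b"Patient imaging report..."  # e.g. the raw bytes of an Office file
ciphertext = f.encrypt(document)         # the whole document, encrypted at once

# Access is binary: with the key you recover the document exactly,
# without it the ciphertext is useless.
assert f.decrypt(ciphertext) == document
```

There is no notion of showing one reader a subset of the file, which is exactly why field-level tokenization fits structured data better and whole-object encryption fits documents.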

Ideally, the decision of how to protect data should be treated like any other data governance policy. Once you have discovered and classified your sensitive data, you decide how to protect it. That protection policy should cover the protection method used, balancing transparency, performance, and of course the strength of the security.

What three pieces of advice would you give to companies looking to migrate to the Cloud?

My first piece of advice is, simply, don’t be afraid of the Cloud. The days of organizations saying “I would never move sensitive data to the Cloud” are long gone. In reality, this decision, like any decision regarding data usage, comes down to balancing business risk and reward. Cloud platforms like Amazon, Google, and Azure have matured tremendously over the last few years, promising enterprise-class functionality at ever lower cost, and much greater speed and agility than is often possible with on-premise infrastructure. That’s the reward. Meanwhile, the risks have gone down: Cloud performance and availability keep getting better, and of course the state of the art in data security is improving. Risk and reward are tipping in reward’s favour. If anything, NOT moving to the Cloud is its own risk: your competition is probably doing it and enjoying the agility and cost-savings benefits, freeing up time and capital to compete more effectively with you.

Second, recognize that all Cloud vendors operate a shared-responsibility model: they are responsible for keeping the Cloud infrastructure secure – their data centers are as secure as Fort Knox – but your data and your applications are your responsibility. So you have to consider how your data is protected, and who has access to it, in the Cloud just as you do in on-premise environments.

Third, treat the Cloud as an extension of your enterprise data landscape; it isn’t its own silo where data governance and security decisions are made in isolation. You may be moving a lot of workload to the Cloud, but most organizations will never be 100% Cloud; they will be hybrid. Just as mainframes didn’t die and traditional client-server didn’t die, on-premise computing won’t completely disappear overnight. Plus, most organizations are multi-cloud, whether to hedge their bets or because Cloud adoption happens on a departmental basis. Regardless, your data is probably moving across these disparate environments, but the data is the data: it has the same value to the business, and a breach causes just as much damage regardless of where it happens. You want data-focused policies that follow the data. Governance 101.

Do you think that a GDPR-style regulation might be coming to the USA? What do you see as the future of data privacy in the US?

Yes, definitely; the only questions in my mind are when and how it comes about. We are already seeing it at the state level, such as the California Consumer Privacy Act (CCPA), which takes effect in January 2020, and insurance and financial services data privacy regulations have already been in effect in New York and other states for several years. You are also starting to hear more dialogue at the federal level, in response to ever greater breaches and a growing distrust of how large companies, including the social media companies, handle users’ data, some of which came out in the last presidential election. And more and more consumers are becoming aware of how their data can be misused to their detriment, leading to demands for government action in favour of protection. Whether eventual legislation will exactly mirror GDPR remains to be seen; I think there will be some experimentation at the state level, as well as for specific verticals, and the successes will point the way. Timing also remains to be seen: like anything at the federal level, one or both political parties need to make it a priority on their legislative agendas, but over time their constituencies will only demand it more, not less.

What are you most looking forward to about CDAOI Insurance 2019?

This is an exciting vertical with a lot of recent innovation with respect to data. It will be exciting to see what new use cases emerge, how insurance companies are harnessing the power of data to serve customers better while also managing risks better, and to hear from smart and innovative people in the CDAO ranks who are thinking about these opportunities. I’m a data guy; this stuff always excites me.

Is there anything else Chief Data & Analytics Officers need to know about Protegrity that you think they don’t already know?

Yes, one more thing! Protegrity has been in this business for a long time. For over 15 years we have been protecting some of the world’s largest and most sensitive datasets, including the PII/PHI/PCI data frequently encountered in insurance and financial services, and well over 100 of the Global 2000 are now in production with our data protection technology. This stuff works. Ultimately, data security is a business risk-versus-reward proposition, and while it’s great that so much innovation and so many investment dollars are flowing to startups, using proven technology from a vendor with long-term business viability very much helps on the risk side.
