Martha Heller
Columnist

Analyzing the business-case approach Perdue Farms takes to derive value from data

Feature
Sep 20, 2023 | 7 mins
Agriculture Industry | CIO | Data Center Management

To CIO Mark Booth and his team, technology is just table stakes for building a data-driven business at this iconic food and agribusiness company. To get real value out of data, they’re populating their data lake one business case at a time.

Mark Booth, CIO, Perdue Farms
Credit: Perdue Farms

Martha Heller: What is the transformation currently underway at Perdue Farms?

Mark Booth: We have a growth strategy to improve our business, and to support that, we’re driving a transformation in technology and business processes. We’ve been replacing our old systems, some of which are more than 20 years old, and this has been going very well. But the more challenging work is making our processes as efficient as possible so we capture the right data as we become a more data-driven business. If your processes aren’t efficient, you’ll capture the wrong data, and you wind up with the wrong insights.

In addition to business process change, what else are you facing in getting value out of data?

Making sure we adhere to the new processes is a challenge in a 103-year-old poultry and agribusiness company. We need to make everyone in the company understand that the data will set us free. With the right insights, we’ll better understand what’s good for the business and what our consumers want.

This is important because all our businesses have seen significant change over the last several years. On the food side, we’ve come out with innovative products, so we’re not just selling chicken anymore. We need the data to understand what additional new products we should produce.

On the agribusiness side, we source, purchase, and process agricultural commodities and offer a diverse portfolio of products, including grains, soybean meal, blended feed ingredients, and top-quality oils for the food industry, adding value to the commodities our customers desire. The data can also help us enrich our commodity products.

How are you populating your data lake?

We’ve decided to take a practical approach, led by Kyle Benning, who runs our data function. CIOs have learned that it’s a big risk to build a data lake and hope “they will come,” because they might not, and a data infrastructure is a big expense. To avoid that, we built our data platform business case by business case.

With our involvement, our business partners create business cases, whether in foods or agribusiness, and these are approved by our business unit governance team. That team makes sure business cases are aligned with our corporate goals. Then our analytics team, an IT group, makes sure we build the data lake in the right sequence. We call this team Information Management because we want our business associates to be the analysts, not IT.

Once a business case has been approved by both the business unit and Information Management teams, we turn the case into a project and put it into production by moving only the data for that business case into the data lake. We have a metrics web page that tells us how much data is in the consumer zone and can be accessed, and how much is in the raw zone, a repository of data that needs to be massaged before it moves into the consumer zone.
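As a rough illustration of the raw-zone/consumer-zone split Booth describes, here is a minimal sketch of the kind of tally a metrics page might report, assuming a lake laid out as plain folders per zone and per dataset. The paths, layout, and function names are hypothetical, not Perdue’s actual platform.

    # Illustrative sketch only: tally how much data sits in a hypothetical
    # "raw" zone versus a "consumer" zone of a lake laid out on a file system.
    # Paths and layout are assumptions, not Perdue's actual environment.
    from collections import defaultdict
    from pathlib import Path

    LAKE_ROOT = Path("/data/lake")   # assumed layout: /data/lake/<zone>/<dataset>/...
    ZONES = ("raw", "consumer")

    def zone_sizes(root: Path) -> dict[str, dict[str, int]]:
        """Return bytes per dataset within each zone."""
        sizes: dict[str, dict[str, int]] = {zone: defaultdict(int) for zone in ZONES}
        for zone in ZONES:
            zone_dir = root / zone
            if not zone_dir.exists():
                continue
            for f in zone_dir.rglob("*"):
                if f.is_file():
                    dataset = f.relative_to(zone_dir).parts[0]
                    sizes[zone][dataset] += f.stat().st_size
        return sizes

    if __name__ == "__main__":
        for zone, datasets in zone_sizes(LAKE_ROOT).items():
            total = sum(datasets.values())
            print(f"{zone}: {total / 1e9:.1f} GB across {len(datasets)} datasets")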

How do you educate the business case owners about how to work with the data?

Benning’s team is a center of excellence. He has player-coaches who train the business case teams on data preparation and visualization tools until they become self-sufficient in creating their own end-user dashboards and algorithms. Once the business case is approved, the business case team understands the tools, and IT has moved the data into the lake, our business partners can work with the data on their own. Once they have a solution that gives them the required insights, IT moves it to the consumer zone so it gets governed and cataloged.

By training the business and moving the data case by case, we started to see a groundswell. When we start small and prove out the value of the data, people get excited, and more departments have become engaged. Some departments are now actually looking to hire analytics-savvy business resources.

What is the risk of this case-by-case approach?

When you move in data specific to the business case, you move in only the required data, which is not necessarily all the data, so there can be gaps. As we move from one business case to the next, we need to look back to make sure we close the data gaps. We moved some financial performance data into the data lake, for example, but that wasn’t 100% of the financial data that’ll be required in the end. We’re now working on a project to get the rest of the financial data into the data lake.

I think of it as a snowplow going down the road making sure the path is clear. As we move out of one business and into another to work on a new business case, we have to look back and bring in the remaining data, even if it isn’t required by a specific business case. And because we do this case by case, we know the environment has become self-funding via the business cases.
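That “look back” step amounts to comparing what the approved business cases need against what has already been landed. A minimal sketch of that gap check follows, assuming each case lists its required source tables and the lake keeps a catalog of what has been loaded; all of the case names and table names are hypothetical.

    # Illustrative sketch only: the "look back" gap check, under the assumption
    # that each business case lists the source tables it needs and the lake
    # keeps a catalog of what has already been loaded. Names are hypothetical.
    required_by_case = {
        "deboning-line-optimization": {"ops.line_speed", "ops.headcount"},
        "financial-performance": {"fin.gl_actuals", "fin.gl_budget", "fin.cost_centers"},
    }
    loaded_in_lake = {"ops.line_speed", "ops.headcount", "fin.gl_actuals"}

    # Union of everything the approved cases need, minus what has already landed.
    all_required = set().union(*required_by_case.values())
    gaps = sorted(all_required - loaded_in_lake)

    print("Data gaps to close before the next business case:")
    for table in gaps:
        print(f"  - {table}")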

What’s a real example of an early business case?

One would be with our chicken deboning machines. We wanted to know if the line was optimized. If we run it faster, will we have enough people at the end of the line to keep up with the machine? And if we run the machine too slowly, will we have too many people? We got some real savings from moving operational data about the deboning machines into the data lake.

What’s the benefit of populating your data lake this way?

The big benefit is senior manager acceptance of the value of the data. They can see the dashboards populate automatically and realize they’re no longer using paper or waiting for reports. And because it’s their business case, they see the data on their terms. Their experience with the data then gets more people excited and involved.

Describe some other challenges of working with operational data as opposed to enterprise data.

Working with financial data isn’t as complex as working with operational data. For the most part, ERPs are built to be configured for most processes, so we understand that data and how to put it in a data lake. But operational data can be more challenging because we mix financial data with operational data. For the first time, we need to think about how much sensor data should go into the data lake, how it should be structured, what the standards are for operational technology data, and whether the data that shows our production error rates should stay at the plant or move into the data lake. Figuring all of that out is difficult work, and we’re not through it yet, because we’re sitting on so much OT data.
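To make the structuring question concrete, here is one possible shape for a plant sensor reading before it lands in a lake. It is a sketch under stated assumptions: the field names and the choice of what to keep at the plant versus summarize for the lake are illustrative, not an OT standard Perdue has adopted.

    # Illustrative sketch only: one possible record structure for plant sensor
    # readings. Field names and granularity decisions are assumptions.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class SensorReading:
        plant_id: str        # which plant produced the reading
        line_id: str         # production line within the plant
        sensor_id: str       # individual machine sensor
        measured_at: datetime
        metric: str          # e.g., "line_speed", "error_rate"
        value: float

    # High-frequency readings might stay at the plant, while a summarized form
    # moves to the lake, keeping volume down without losing the insight.
    reading = SensorReading("plant-01", "debone-3", "spd-114",
                            datetime.now(timezone.utc), "line_speed", 142.0)
    print(reading)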

What advice do you have for CIOs building a new data capability?

Start small, define your ecosystem, simplify your tech stack, and involve people who have credibility in the business. Yes, you need the right technology to become a data-driven business, but that’s just table stakes. The real drivers are your business partners and the business cases they create. In IT, we sometimes make data too complicated because we start talking bits and bytes, and ones and zeros, and cloud and platforms, but that’s not the point. Use your governance structures to pick a business case that’s very important to the business, nail it, and then go get the next two or three. It will mushroom from there.